Test Report: KVM_Linux_crio 19651

                    
f000a69778791892f7d89fef6358d7150d12a198:2024-09-16:36236
Failed tests (41/228)

Order  Failed test  Duration (s)
31 TestAddons/serial/GCPAuth/Namespaces 0
33 TestAddons/parallel/Registry 12.86
34 TestAddons/parallel/Ingress 2.03
36 TestAddons/parallel/MetricsServer 316.03
37 TestAddons/parallel/HelmTiller 100.79
39 TestAddons/parallel/CSI 362.04
42 TestAddons/parallel/LocalPath 0
44 TestAddons/parallel/Yakd 122.28
46 TestCertOptions 48.04
68 TestFunctional/serial/KubeContext 2.02
69 TestFunctional/serial/KubectlGetPods 1.94
82 TestFunctional/serial/ComponentHealth 1.95
85 TestFunctional/serial/InvalidService 0
88 TestFunctional/parallel/DashboardCmd 5.49
95 TestFunctional/parallel/ServiceCmdConnect 2.46
97 TestFunctional/parallel/PersistentVolumeClaim 103.01
101 TestFunctional/parallel/MySQL 3.02
107 TestFunctional/parallel/NodeLabels 2.41
112 TestFunctional/parallel/ServiceCmd/DeployApp 0
113 TestFunctional/parallel/ServiceCmd/List 0.28
114 TestFunctional/parallel/ServiceCmd/JSONOutput 0.29
116 TestFunctional/parallel/ServiceCmd/HTTPS 0.26
118 TestFunctional/parallel/ServiceCmd/Format 0.28
119 TestFunctional/parallel/MountCmd/any-port 2.13
121 TestFunctional/parallel/ServiceCmd/URL 0.27
161 TestMultiControlPlane/serial/NodeLabels 2.4
164 TestMultiControlPlane/serial/StopSecondaryNode 141.78
166 TestMultiControlPlane/serial/RestartSecondaryNode 55.52
168 TestMultiControlPlane/serial/RestartClusterKeepsNodes 402.9
169 TestMultiControlPlane/serial/DeleteSecondaryNode 19.03
171 TestMultiControlPlane/serial/StopCluster 141.66
172 TestMultiControlPlane/serial/RestartCluster 271.47
226 TestMultiNode/serial/MultiNodeLabels 2.21
230 TestMultiNode/serial/StartAfterStop 41.55
231 TestMultiNode/serial/RestartKeepsNodes 318.62
232 TestMultiNode/serial/DeleteNode 4.07
233 TestMultiNode/serial/StopMultiNode 141.42
234 TestMultiNode/serial/RestartMultiNode 188.29
240 TestPreload 267.99
248 TestKubernetesUpgrade 393.19
321 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 7200.059
TestAddons/serial/GCPAuth/Namespaces (0s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-001438 create ns new-namespace
addons_test.go:656: (dbg) Non-zero exit: kubectl --context addons-001438 create ns new-namespace: fork/exec /usr/local/bin/kubectl: exec format error (386.272µs)
addons_test.go:658: kubectl --context addons-001438 create ns new-namespace failed: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestAddons/serial/GCPAuth/Namespaces (0.00s)
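
Note: "fork/exec /usr/local/bin/kubectl: exec format error" means the kernel refused to execute the kubectl binary, which usually indicates the file at /usr/local/bin/kubectl was built for a different CPU architecture than the agent (or is truncated/corrupt). A quick way to confirm, assuming shell access to the Jenkins agent (illustrative commands, not part of the test run):

	file /usr/local/bin/kubectl   # reports the binary's target architecture, e.g. "ELF 64-bit LSB executable, x86-64"
	uname -m                      # reports the host architecture for comparison

The same exec format error recurs in the kubectl-driven steps of the failures below.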

                                                
                                    
TestAddons/parallel/Registry (12.86s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 13.387597ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-jq22w" [04e85c00-e6fb-4eee-96aa-273a4f6f273f] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.00359158s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-kk7lc" [2f0e1170-c654-4939-91ca-cd5b2bf6ae2a] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003886217s
addons_test.go:342: (dbg) Run:  kubectl --context addons-001438 delete po -l run=registry-test --now
addons_test.go:342: (dbg) Non-zero exit: kubectl --context addons-001438 delete po -l run=registry-test --now: fork/exec /usr/local/bin/kubectl: exec format error (336.055µs)
addons_test.go:344: pre-cleanup kubectl --context addons-001438 delete po -l run=registry-test --now failed: fork/exec /usr/local/bin/kubectl: exec format error (not a problem)
addons_test.go:347: (dbg) Run:  kubectl --context addons-001438 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-001438 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": fork/exec /usr/local/bin/kubectl: exec format error (249.928µs)
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-001438 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: fork/exec /usr/local/bin/kubectl: exec format error
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got **
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-001438 ip
2024/09/16 10:25:18 [DEBUG] GET http://192.168.39.72:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-001438 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-001438 -n addons-001438
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-001438 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-001438 logs -n 25: (1.422375098s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-931581 | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC |                     |
	|         | -p download-only-931581              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC | 16 Sep 24 10:21 UTC |
	| delete  | -p download-only-931581              | download-only-931581 | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC | 16 Sep 24 10:21 UTC |
	| start   | -o=json --download-only              | download-only-573915 | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC |                     |
	|         | -p download-only-573915              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC | 16 Sep 24 10:21 UTC |
	| delete  | -p download-only-573915              | download-only-573915 | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC | 16 Sep 24 10:21 UTC |
	| delete  | -p download-only-931581              | download-only-931581 | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC | 16 Sep 24 10:21 UTC |
	| delete  | -p download-only-573915              | download-only-573915 | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC | 16 Sep 24 10:21 UTC |
	| start   | --download-only -p                   | binary-mirror-928489 | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC |                     |
	|         | binary-mirror-928489                 |                      |         |         |                     |                     |
	|         | --alsologtostderr                    |                      |         |         |                     |                     |
	|         | --binary-mirror                      |                      |         |         |                     |                     |
	|         | http://127.0.0.1:42715               |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-928489              | binary-mirror-928489 | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC | 16 Sep 24 10:21 UTC |
	| addons  | enable dashboard -p                  | addons-001438        | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC |                     |
	|         | addons-001438                        |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-001438        | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC |                     |
	|         | addons-001438                        |                      |         |         |                     |                     |
	| start   | -p addons-001438 --wait=true         | addons-001438        | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC | 16 Sep 24 10:25 UTC |
	|         | --memory=4000 --alsologtostderr      |                      |         |         |                     |                     |
	|         | --addons=registry                    |                      |         |         |                     |                     |
	|         | --addons=metrics-server              |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	|         | --addons=ingress                     |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                 |                      |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-001438        | jenkins | v1.34.0 | 16 Sep 24 10:25 UTC | 16 Sep 24 10:25 UTC |
	|         | -p addons-001438                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin         | addons-001438        | jenkins | v1.34.0 | 16 Sep 24 10:25 UTC | 16 Sep 24 10:25 UTC |
	|         | -p addons-001438                     |                      |         |         |                     |                     |
	| ip      | addons-001438 ip                     | addons-001438        | jenkins | v1.34.0 | 16 Sep 24 10:25 UTC | 16 Sep 24 10:25 UTC |
	| addons  | addons-001438 addons disable         | addons-001438        | jenkins | v1.34.0 | 16 Sep 24 10:25 UTC | 16 Sep 24 10:25 UTC |
	|         | registry --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:21:42
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:21:42.990297   12265 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:21:42.990427   12265 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:21:42.990438   12265 out.go:358] Setting ErrFile to fd 2...
	I0916 10:21:42.990444   12265 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:21:42.990619   12265 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3851/.minikube/bin
	I0916 10:21:42.991237   12265 out.go:352] Setting JSON to false
	I0916 10:21:42.992075   12265 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":253,"bootTime":1726481850,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:21:42.992165   12265 start.go:139] virtualization: kvm guest
	I0916 10:21:42.994057   12265 out.go:177] * [addons-001438] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:21:42.995363   12265 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:21:42.995366   12265 notify.go:220] Checking for updates...
	I0916 10:21:42.996620   12265 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:21:42.997884   12265 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3851/kubeconfig
	I0916 10:21:42.999244   12265 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 10:21:43.000448   12265 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:21:43.001744   12265 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:21:43.003140   12265 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:21:43.035292   12265 out.go:177] * Using the kvm2 driver based on user configuration
	I0916 10:21:43.036591   12265 start.go:297] selected driver: kvm2
	I0916 10:21:43.036604   12265 start.go:901] validating driver "kvm2" against <nil>
	I0916 10:21:43.036617   12265 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:21:43.037618   12265 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:21:43.037687   12265 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19651-3851/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0916 10:21:43.052612   12265 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0916 10:21:43.052654   12265 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 10:21:43.052880   12265 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:21:43.052910   12265 cni.go:84] Creating CNI manager for ""
	I0916 10:21:43.052948   12265 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0916 10:21:43.052956   12265 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0916 10:21:43.053000   12265 start.go:340] cluster config:
	{Name:addons-001438 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-001438 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:21:43.053089   12265 iso.go:125] acquiring lock: {Name:mk8165b793e44378487d96b0de120a258f46e187 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:21:43.054779   12265 out.go:177] * Starting "addons-001438" primary control-plane node in "addons-001438" cluster
	I0916 10:21:43.056048   12265 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:21:43.056073   12265 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0916 10:21:43.056099   12265 cache.go:56] Caching tarball of preloaded images
	I0916 10:21:43.056171   12265 preload.go:172] Found /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 10:21:43.056181   12265 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 10:21:43.056464   12265 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/config.json ...
	I0916 10:21:43.056479   12265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/config.json: {Name:mke7feffe145119f1110e818375562c2195d4fa4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:21:43.056601   12265 start.go:360] acquireMachinesLock for addons-001438: {Name:mk413037138532b69f77e412c3960796c09316f1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:21:43.056638   12265 start.go:364] duration metric: took 25.099µs to acquireMachinesLock for "addons-001438"
	I0916 10:21:43.056653   12265 start.go:93] Provisioning new machine with config: &{Name:addons-001438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:addons-001438 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 10:21:43.056703   12265 start.go:125] createHost starting for "" (driver="kvm2")
	I0916 10:21:43.058226   12265 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0916 10:21:43.058340   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:21:43.058376   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:21:43.072993   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45045
	I0916 10:21:43.073475   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:21:43.073995   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:21:43.074020   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:21:43.074422   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:21:43.074620   12265 main.go:141] libmachine: (addons-001438) Calling .GetMachineName
	I0916 10:21:43.074787   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:21:43.074946   12265 start.go:159] libmachine.API.Create for "addons-001438" (driver="kvm2")
	I0916 10:21:43.074989   12265 client.go:168] LocalClient.Create starting
	I0916 10:21:43.075021   12265 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem
	I0916 10:21:43.311518   12265 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem
	I0916 10:21:43.475888   12265 main.go:141] libmachine: Running pre-create checks...
	I0916 10:21:43.475917   12265 main.go:141] libmachine: (addons-001438) Calling .PreCreateCheck
	I0916 10:21:43.476396   12265 main.go:141] libmachine: (addons-001438) Calling .GetConfigRaw
	I0916 10:21:43.476796   12265 main.go:141] libmachine: Creating machine...
	I0916 10:21:43.476809   12265 main.go:141] libmachine: (addons-001438) Calling .Create
	I0916 10:21:43.476954   12265 main.go:141] libmachine: (addons-001438) Creating KVM machine...
	I0916 10:21:43.478137   12265 main.go:141] libmachine: (addons-001438) DBG | found existing default KVM network
	I0916 10:21:43.478893   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:43.478751   12287 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001151f0}
	I0916 10:21:43.478937   12265 main.go:141] libmachine: (addons-001438) DBG | created network xml: 
	I0916 10:21:43.478958   12265 main.go:141] libmachine: (addons-001438) DBG | <network>
	I0916 10:21:43.478967   12265 main.go:141] libmachine: (addons-001438) DBG |   <name>mk-addons-001438</name>
	I0916 10:21:43.478974   12265 main.go:141] libmachine: (addons-001438) DBG |   <dns enable='no'/>
	I0916 10:21:43.478986   12265 main.go:141] libmachine: (addons-001438) DBG |   
	I0916 10:21:43.478998   12265 main.go:141] libmachine: (addons-001438) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0916 10:21:43.479006   12265 main.go:141] libmachine: (addons-001438) DBG |     <dhcp>
	I0916 10:21:43.479018   12265 main.go:141] libmachine: (addons-001438) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0916 10:21:43.479026   12265 main.go:141] libmachine: (addons-001438) DBG |     </dhcp>
	I0916 10:21:43.479036   12265 main.go:141] libmachine: (addons-001438) DBG |   </ip>
	I0916 10:21:43.479087   12265 main.go:141] libmachine: (addons-001438) DBG |   
	I0916 10:21:43.479109   12265 main.go:141] libmachine: (addons-001438) DBG | </network>
	I0916 10:21:43.479150   12265 main.go:141] libmachine: (addons-001438) DBG | 
	I0916 10:21:43.484546   12265 main.go:141] libmachine: (addons-001438) DBG | trying to create private KVM network mk-addons-001438 192.168.39.0/24...
	I0916 10:21:43.547822   12265 main.go:141] libmachine: (addons-001438) DBG | private KVM network mk-addons-001438 192.168.39.0/24 created
	I0916 10:21:43.547845   12265 main.go:141] libmachine: (addons-001438) Setting up store path in /home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438 ...
	I0916 10:21:43.547862   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:43.547813   12287 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 10:21:43.547875   12265 main.go:141] libmachine: (addons-001438) Building disk image from file:///home/jenkins/minikube-integration/19651-3851/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso
	I0916 10:21:43.547936   12265 main.go:141] libmachine: (addons-001438) Downloading /home/jenkins/minikube-integration/19651-3851/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19651-3851/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso...
	I0916 10:21:43.797047   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:43.796916   12287 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa...
	I0916 10:21:43.906021   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:43.905909   12287 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/addons-001438.rawdisk...
	I0916 10:21:43.906051   12265 main.go:141] libmachine: (addons-001438) DBG | Writing magic tar header
	I0916 10:21:43.906060   12265 main.go:141] libmachine: (addons-001438) DBG | Writing SSH key tar header
	I0916 10:21:43.906067   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:43.906027   12287 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438 ...
	I0916 10:21:43.906123   12265 main.go:141] libmachine: (addons-001438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438
	I0916 10:21:43.906172   12265 main.go:141] libmachine: (addons-001438) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438 (perms=drwx------)
	I0916 10:21:43.906194   12265 main.go:141] libmachine: (addons-001438) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851/.minikube/machines (perms=drwxr-xr-x)
	I0916 10:21:43.906204   12265 main.go:141] libmachine: (addons-001438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851/.minikube/machines
	I0916 10:21:43.906222   12265 main.go:141] libmachine: (addons-001438) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851/.minikube (perms=drwxr-xr-x)
	I0916 10:21:43.906230   12265 main.go:141] libmachine: (addons-001438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 10:21:43.906236   12265 main.go:141] libmachine: (addons-001438) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851 (perms=drwxrwxr-x)
	I0916 10:21:43.906243   12265 main.go:141] libmachine: (addons-001438) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0916 10:21:43.906248   12265 main.go:141] libmachine: (addons-001438) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0916 10:21:43.906258   12265 main.go:141] libmachine: (addons-001438) Creating domain...
	I0916 10:21:43.906264   12265 main.go:141] libmachine: (addons-001438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851
	I0916 10:21:43.906275   12265 main.go:141] libmachine: (addons-001438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0916 10:21:43.906309   12265 main.go:141] libmachine: (addons-001438) DBG | Checking permissions on dir: /home/jenkins
	I0916 10:21:43.906325   12265 main.go:141] libmachine: (addons-001438) DBG | Checking permissions on dir: /home
	I0916 10:21:43.906338   12265 main.go:141] libmachine: (addons-001438) DBG | Skipping /home - not owner
	I0916 10:21:43.907204   12265 main.go:141] libmachine: (addons-001438) define libvirt domain using xml: 
	I0916 10:21:43.907223   12265 main.go:141] libmachine: (addons-001438) <domain type='kvm'>
	I0916 10:21:43.907235   12265 main.go:141] libmachine: (addons-001438)   <name>addons-001438</name>
	I0916 10:21:43.907246   12265 main.go:141] libmachine: (addons-001438)   <memory unit='MiB'>4000</memory>
	I0916 10:21:43.907255   12265 main.go:141] libmachine: (addons-001438)   <vcpu>2</vcpu>
	I0916 10:21:43.907265   12265 main.go:141] libmachine: (addons-001438)   <features>
	I0916 10:21:43.907274   12265 main.go:141] libmachine: (addons-001438)     <acpi/>
	I0916 10:21:43.907282   12265 main.go:141] libmachine: (addons-001438)     <apic/>
	I0916 10:21:43.907294   12265 main.go:141] libmachine: (addons-001438)     <pae/>
	I0916 10:21:43.907307   12265 main.go:141] libmachine: (addons-001438)     
	I0916 10:21:43.907318   12265 main.go:141] libmachine: (addons-001438)   </features>
	I0916 10:21:43.907327   12265 main.go:141] libmachine: (addons-001438)   <cpu mode='host-passthrough'>
	I0916 10:21:43.907337   12265 main.go:141] libmachine: (addons-001438)   
	I0916 10:21:43.907349   12265 main.go:141] libmachine: (addons-001438)   </cpu>
	I0916 10:21:43.907364   12265 main.go:141] libmachine: (addons-001438)   <os>
	I0916 10:21:43.907373   12265 main.go:141] libmachine: (addons-001438)     <type>hvm</type>
	I0916 10:21:43.907383   12265 main.go:141] libmachine: (addons-001438)     <boot dev='cdrom'/>
	I0916 10:21:43.907392   12265 main.go:141] libmachine: (addons-001438)     <boot dev='hd'/>
	I0916 10:21:43.907402   12265 main.go:141] libmachine: (addons-001438)     <bootmenu enable='no'/>
	I0916 10:21:43.907415   12265 main.go:141] libmachine: (addons-001438)   </os>
	I0916 10:21:43.907427   12265 main.go:141] libmachine: (addons-001438)   <devices>
	I0916 10:21:43.907435   12265 main.go:141] libmachine: (addons-001438)     <disk type='file' device='cdrom'>
	I0916 10:21:43.907452   12265 main.go:141] libmachine: (addons-001438)       <source file='/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/boot2docker.iso'/>
	I0916 10:21:43.907463   12265 main.go:141] libmachine: (addons-001438)       <target dev='hdc' bus='scsi'/>
	I0916 10:21:43.907489   12265 main.go:141] libmachine: (addons-001438)       <readonly/>
	I0916 10:21:43.907508   12265 main.go:141] libmachine: (addons-001438)     </disk>
	I0916 10:21:43.907518   12265 main.go:141] libmachine: (addons-001438)     <disk type='file' device='disk'>
	I0916 10:21:43.907531   12265 main.go:141] libmachine: (addons-001438)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0916 10:21:43.907547   12265 main.go:141] libmachine: (addons-001438)       <source file='/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/addons-001438.rawdisk'/>
	I0916 10:21:43.907558   12265 main.go:141] libmachine: (addons-001438)       <target dev='hda' bus='virtio'/>
	I0916 10:21:43.907568   12265 main.go:141] libmachine: (addons-001438)     </disk>
	I0916 10:21:43.907583   12265 main.go:141] libmachine: (addons-001438)     <interface type='network'>
	I0916 10:21:43.907595   12265 main.go:141] libmachine: (addons-001438)       <source network='mk-addons-001438'/>
	I0916 10:21:43.907606   12265 main.go:141] libmachine: (addons-001438)       <model type='virtio'/>
	I0916 10:21:43.907616   12265 main.go:141] libmachine: (addons-001438)     </interface>
	I0916 10:21:43.907624   12265 main.go:141] libmachine: (addons-001438)     <interface type='network'>
	I0916 10:21:43.907634   12265 main.go:141] libmachine: (addons-001438)       <source network='default'/>
	I0916 10:21:43.907645   12265 main.go:141] libmachine: (addons-001438)       <model type='virtio'/>
	I0916 10:21:43.907667   12265 main.go:141] libmachine: (addons-001438)     </interface>
	I0916 10:21:43.907687   12265 main.go:141] libmachine: (addons-001438)     <serial type='pty'>
	I0916 10:21:43.907697   12265 main.go:141] libmachine: (addons-001438)       <target port='0'/>
	I0916 10:21:43.907706   12265 main.go:141] libmachine: (addons-001438)     </serial>
	I0916 10:21:43.907717   12265 main.go:141] libmachine: (addons-001438)     <console type='pty'>
	I0916 10:21:43.907735   12265 main.go:141] libmachine: (addons-001438)       <target type='serial' port='0'/>
	I0916 10:21:43.907745   12265 main.go:141] libmachine: (addons-001438)     </console>
	I0916 10:21:43.907758   12265 main.go:141] libmachine: (addons-001438)     <rng model='virtio'>
	I0916 10:21:43.907772   12265 main.go:141] libmachine: (addons-001438)       <backend model='random'>/dev/random</backend>
	I0916 10:21:43.907777   12265 main.go:141] libmachine: (addons-001438)     </rng>
	I0916 10:21:43.907785   12265 main.go:141] libmachine: (addons-001438)     
	I0916 10:21:43.907794   12265 main.go:141] libmachine: (addons-001438)     
	I0916 10:21:43.907804   12265 main.go:141] libmachine: (addons-001438)   </devices>
	I0916 10:21:43.907814   12265 main.go:141] libmachine: (addons-001438) </domain>
	I0916 10:21:43.907826   12265 main.go:141] libmachine: (addons-001438) 
	I0916 10:21:43.913322   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:98:e7:17 in network default
	I0916 10:21:43.913924   12265 main.go:141] libmachine: (addons-001438) Ensuring networks are active...
	I0916 10:21:43.913942   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:43.914588   12265 main.go:141] libmachine: (addons-001438) Ensuring network default is active
	I0916 10:21:43.914879   12265 main.go:141] libmachine: (addons-001438) Ensuring network mk-addons-001438 is active
	I0916 10:21:43.915337   12265 main.go:141] libmachine: (addons-001438) Getting domain xml...
	I0916 10:21:43.915987   12265 main.go:141] libmachine: (addons-001438) Creating domain...
	I0916 10:21:45.289678   12265 main.go:141] libmachine: (addons-001438) Waiting to get IP...
	I0916 10:21:45.290387   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:45.290811   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:45.290836   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:45.290776   12287 retry.go:31] will retry after 253.823507ms: waiting for machine to come up
	I0916 10:21:45.546308   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:45.546737   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:45.546757   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:45.546713   12287 retry.go:31] will retry after 316.98215ms: waiting for machine to come up
	I0916 10:21:45.865275   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:45.865712   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:45.865742   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:45.865673   12287 retry.go:31] will retry after 438.875906ms: waiting for machine to come up
	I0916 10:21:46.306361   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:46.306829   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:46.306854   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:46.306787   12287 retry.go:31] will retry after 378.922529ms: waiting for machine to come up
	I0916 10:21:46.687272   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:46.687683   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:46.687718   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:46.687648   12287 retry.go:31] will retry after 695.664658ms: waiting for machine to come up
	I0916 10:21:47.384623   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:47.385017   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:47.385044   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:47.384985   12287 retry.go:31] will retry after 669.1436ms: waiting for machine to come up
	I0916 10:21:48.056603   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:48.057159   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:48.057183   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:48.057099   12287 retry.go:31] will retry after 739.217064ms: waiting for machine to come up
	I0916 10:21:48.798348   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:48.798788   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:48.798824   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:48.798748   12287 retry.go:31] will retry after 963.828739ms: waiting for machine to come up
	I0916 10:21:49.763677   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:49.764095   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:49.764120   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:49.764043   12287 retry.go:31] will retry after 1.625531991s: waiting for machine to come up
	I0916 10:21:51.391980   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:51.392322   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:51.392343   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:51.392285   12287 retry.go:31] will retry after 1.960554167s: waiting for machine to come up
	I0916 10:21:53.354469   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:53.354989   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:53.355016   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:53.354937   12287 retry.go:31] will retry after 2.035806393s: waiting for machine to come up
	I0916 10:21:55.393065   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:55.393432   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:55.393451   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:55.393400   12287 retry.go:31] will retry after 3.028756428s: waiting for machine to come up
	I0916 10:21:58.424174   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:58.424544   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:58.424577   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:58.424517   12287 retry.go:31] will retry after 3.769682763s: waiting for machine to come up
	I0916 10:22:02.198084   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:02.198470   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:22:02.198492   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:22:02.198430   12287 retry.go:31] will retry after 5.547519077s: waiting for machine to come up
	I0916 10:22:07.750830   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:07.751191   12265 main.go:141] libmachine: (addons-001438) Found IP for machine: 192.168.39.72
	I0916 10:22:07.751209   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has current primary IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:07.751215   12265 main.go:141] libmachine: (addons-001438) Reserving static IP address...
	I0916 10:22:07.751548   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find host DHCP lease matching {name: "addons-001438", mac: "52:54:00:9c:55:19", ip: "192.168.39.72"} in network mk-addons-001438
	I0916 10:22:07.821469   12265 main.go:141] libmachine: (addons-001438) DBG | Getting to WaitForSSH function...
	I0916 10:22:07.821506   12265 main.go:141] libmachine: (addons-001438) Reserved static IP address: 192.168.39.72
	I0916 10:22:07.821523   12265 main.go:141] libmachine: (addons-001438) Waiting for SSH to be available...
	I0916 10:22:07.823797   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:07.824029   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438
	I0916 10:22:07.824057   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find defined IP address of network mk-addons-001438 interface with MAC address 52:54:00:9c:55:19
	I0916 10:22:07.824199   12265 main.go:141] libmachine: (addons-001438) DBG | Using SSH client type: external
	I0916 10:22:07.824226   12265 main.go:141] libmachine: (addons-001438) DBG | Using SSH private key: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa (-rw-------)
	I0916 10:22:07.824261   12265 main.go:141] libmachine: (addons-001438) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0916 10:22:07.824273   12265 main.go:141] libmachine: (addons-001438) DBG | About to run SSH command:
	I0916 10:22:07.824297   12265 main.go:141] libmachine: (addons-001438) DBG | exit 0
	I0916 10:22:07.835394   12265 main.go:141] libmachine: (addons-001438) DBG | SSH cmd err, output: exit status 255: 
	I0916 10:22:07.835415   12265 main.go:141] libmachine: (addons-001438) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0916 10:22:07.835421   12265 main.go:141] libmachine: (addons-001438) DBG | command : exit 0
	I0916 10:22:07.835428   12265 main.go:141] libmachine: (addons-001438) DBG | err     : exit status 255
	I0916 10:22:07.835435   12265 main.go:141] libmachine: (addons-001438) DBG | output  : 
	I0916 10:22:10.838181   12265 main.go:141] libmachine: (addons-001438) DBG | Getting to WaitForSSH function...
	I0916 10:22:10.840410   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:10.840805   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:10.840830   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:10.840953   12265 main.go:141] libmachine: (addons-001438) DBG | Using SSH client type: external
	I0916 10:22:10.840980   12265 main.go:141] libmachine: (addons-001438) DBG | Using SSH private key: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa (-rw-------)
	I0916 10:22:10.841012   12265 main.go:141] libmachine: (addons-001438) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.72 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0916 10:22:10.841026   12265 main.go:141] libmachine: (addons-001438) DBG | About to run SSH command:
	I0916 10:22:10.841039   12265 main.go:141] libmachine: (addons-001438) DBG | exit 0
	I0916 10:22:10.969218   12265 main.go:141] libmachine: (addons-001438) DBG | SSH cmd err, output: <nil>: 
	I0916 10:22:10.969498   12265 main.go:141] libmachine: (addons-001438) KVM machine creation complete!
	I0916 10:22:10.969791   12265 main.go:141] libmachine: (addons-001438) Calling .GetConfigRaw
	I0916 10:22:10.970351   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:10.970568   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:10.970704   12265 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0916 10:22:10.970716   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:10.971844   12265 main.go:141] libmachine: Detecting operating system of created instance...
	I0916 10:22:10.971857   12265 main.go:141] libmachine: Waiting for SSH to be available...
	I0916 10:22:10.971863   12265 main.go:141] libmachine: Getting to WaitForSSH function...
	I0916 10:22:10.971871   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:10.973963   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:10.974287   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:10.974322   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:10.974443   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:10.974600   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:10.974766   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:10.974897   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:10.975056   12265 main.go:141] libmachine: Using SSH client type: native
	I0916 10:22:10.975258   12265 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.72 22 <nil> <nil>}
	I0916 10:22:10.975270   12265 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0916 10:22:11.084303   12265 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 10:22:11.084322   12265 main.go:141] libmachine: Detecting the provisioner...
	I0916 10:22:11.084329   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:11.086985   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.087399   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:11.087449   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.087637   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:11.087805   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:11.087957   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:11.088052   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:11.088212   12265 main.go:141] libmachine: Using SSH client type: native
	I0916 10:22:11.088404   12265 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.72 22 <nil> <nil>}
	I0916 10:22:11.088420   12265 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0916 10:22:11.197622   12265 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0916 10:22:11.197666   12265 main.go:141] libmachine: found compatible host: buildroot
	I0916 10:22:11.197674   12265 main.go:141] libmachine: Provisioning with buildroot...
	I0916 10:22:11.197683   12265 main.go:141] libmachine: (addons-001438) Calling .GetMachineName
	I0916 10:22:11.197922   12265 buildroot.go:166] provisioning hostname "addons-001438"
	I0916 10:22:11.197936   12265 main.go:141] libmachine: (addons-001438) Calling .GetMachineName
	I0916 10:22:11.198131   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:11.200614   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.200955   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:11.200988   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.201100   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:11.201269   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:11.201396   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:11.201536   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:11.201681   12265 main.go:141] libmachine: Using SSH client type: native
	I0916 10:22:11.201878   12265 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.72 22 <nil> <nil>}
	I0916 10:22:11.201891   12265 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-001438 && echo "addons-001438" | sudo tee /etc/hostname
	I0916 10:22:11.329393   12265 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-001438
	
	I0916 10:22:11.329423   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:11.332085   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.332370   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:11.332397   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.332557   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:11.332746   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:11.332868   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:11.332999   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:11.333118   12265 main.go:141] libmachine: Using SSH client type: native
	I0916 10:22:11.333336   12265 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.72 22 <nil> <nil>}
	I0916 10:22:11.333353   12265 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-001438' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-001438/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-001438' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:22:11.454462   12265 main.go:141] libmachine: SSH cmd err, output: <nil>: 
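
The hostname step above amounts to two shell actions on the guest: set the kernel hostname, and pin a 127.0.1.1 entry in /etc/hosts so the machine name resolves locally. A minimal Go sketch that assembles an equivalent command string is shown below; the helper name and the simplified grep condition are illustrative, not minikube's actual provisioner code.

package main

import "fmt"

// buildHostnameCmd assembles a shell snippet roughly equivalent to the one run
// over SSH above: set the hostname and make sure /etc/hosts carries a matching
// 127.0.1.1 entry. Illustrative sketch only; the real provisioner handles more
// edge cases (existing 127.0.1.1 lines, quoting, etc.).
func buildHostnameCmd(name string) string {
	return fmt.Sprintf(
		"sudo hostname %[1]s && echo %[1]q | sudo tee /etc/hostname && "+
			"(grep -q '127.0.1.1' /etc/hosts || echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts)",
		name)
}

func main() {
	fmt.Println(buildHostnameCmd("addons-001438"))
}
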
	I0916 10:22:11.454486   12265 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3851/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3851/.minikube}
	I0916 10:22:11.454539   12265 buildroot.go:174] setting up certificates
	I0916 10:22:11.454553   12265 provision.go:84] configureAuth start
	I0916 10:22:11.454562   12265 main.go:141] libmachine: (addons-001438) Calling .GetMachineName
	I0916 10:22:11.454823   12265 main.go:141] libmachine: (addons-001438) Calling .GetIP
	I0916 10:22:11.457458   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.457872   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:11.457902   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.458065   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:11.460166   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.460456   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:11.460484   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.460579   12265 provision.go:143] copyHostCerts
	I0916 10:22:11.460674   12265 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem (1123 bytes)
	I0916 10:22:11.460835   12265 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem (1679 bytes)
	I0916 10:22:11.460925   12265 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem (1082 bytes)
	I0916 10:22:11.460997   12265 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem org=jenkins.addons-001438 san=[127.0.0.1 192.168.39.72 addons-001438 localhost minikube]
	I0916 10:22:11.639072   12265 provision.go:177] copyRemoteCerts
	I0916 10:22:11.639141   12265 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:22:11.639169   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:11.641767   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.642050   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:11.642076   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.642240   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:11.642415   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:11.642519   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:11.642635   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:11.727509   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 10:22:11.752436   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 10:22:11.776436   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 10:22:11.799597   12265 provision.go:87] duration metric: took 345.032702ms to configureAuth
	I0916 10:22:11.799626   12265 buildroot.go:189] setting minikube options for container-runtime
	I0916 10:22:11.799813   12265 config.go:182] Loaded profile config "addons-001438": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:22:11.799904   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:11.802386   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.802675   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:11.802700   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.802854   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:11.803047   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:11.803187   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:11.803323   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:11.803504   12265 main.go:141] libmachine: Using SSH client type: native
	I0916 10:22:11.803689   12265 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.72 22 <nil> <nil>}
	I0916 10:22:11.803704   12265 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 10:22:12.030350   12265 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 10:22:12.030374   12265 main.go:141] libmachine: Checking connection to Docker...
	I0916 10:22:12.030382   12265 main.go:141] libmachine: (addons-001438) Calling .GetURL
	I0916 10:22:12.031607   12265 main.go:141] libmachine: (addons-001438) DBG | Using libvirt version 6000000
	I0916 10:22:12.034008   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.034296   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:12.034325   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.034451   12265 main.go:141] libmachine: Docker is up and running!
	I0916 10:22:12.034463   12265 main.go:141] libmachine: Reticulating splines...
	I0916 10:22:12.034470   12265 client.go:171] duration metric: took 28.959474569s to LocalClient.Create
	I0916 10:22:12.034491   12265 start.go:167] duration metric: took 28.959547297s to libmachine.API.Create "addons-001438"
	I0916 10:22:12.034500   12265 start.go:293] postStartSetup for "addons-001438" (driver="kvm2")
	I0916 10:22:12.034509   12265 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:22:12.034535   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:12.034731   12265 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:22:12.034762   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:12.036747   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.037041   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:12.037068   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.037200   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:12.037344   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:12.037486   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:12.037623   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:12.123403   12265 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:22:12.127815   12265 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 10:22:12.127838   12265 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3851/.minikube/addons for local assets ...
	I0916 10:22:12.127904   12265 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3851/.minikube/files for local assets ...
	I0916 10:22:12.127926   12265 start.go:296] duration metric: took 93.420957ms for postStartSetup
	I0916 10:22:12.127955   12265 main.go:141] libmachine: (addons-001438) Calling .GetConfigRaw
	I0916 10:22:12.128519   12265 main.go:141] libmachine: (addons-001438) Calling .GetIP
	I0916 10:22:12.131232   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.131510   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:12.131547   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.131776   12265 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/config.json ...
	I0916 10:22:12.131949   12265 start.go:128] duration metric: took 29.075237515s to createHost
	I0916 10:22:12.131975   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:12.133967   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.134281   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:12.134305   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.134418   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:12.134606   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:12.134753   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:12.134877   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:12.135036   12265 main.go:141] libmachine: Using SSH client type: native
	I0916 10:22:12.135185   12265 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.72 22 <nil> <nil>}
	I0916 10:22:12.135202   12265 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 10:22:12.245734   12265 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726482132.226578519
	
	I0916 10:22:12.245757   12265 fix.go:216] guest clock: 1726482132.226578519
	I0916 10:22:12.245764   12265 fix.go:229] Guest: 2024-09-16 10:22:12.226578519 +0000 UTC Remote: 2024-09-16 10:22:12.131960304 +0000 UTC m=+29.174301435 (delta=94.618215ms)
	I0916 10:22:12.245784   12265 fix.go:200] guest clock delta is within tolerance: 94.618215ms
	I0916 10:22:12.245790   12265 start.go:83] releasing machines lock for "addons-001438", held for 29.189143417s
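
For reference, the clock-skew check above compares the guest's `date +%s.%N` output against the host-side timestamp taken for the same moment; the reported delta of 94.618215ms is just the difference between the two fractional-second values. A quick Go sketch that re-derives it (timestamps copied from the log, everything else illustrative):

package main

import (
	"fmt"
	"time"
)

// Re-derives the guest/host clock delta reported above from the two timestamps
// in the log. Purely illustrative arithmetic, not minikube's fix.go logic.
func main() {
	guest := time.Unix(1726482132, 226578519)  // guest: date +%s.%N
	remote := time.Unix(1726482132, 131960304) // host-side reference time
	fmt.Println(guest.Sub(remote))             // prints 94.618215ms
}
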
	I0916 10:22:12.245809   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:12.246014   12265 main.go:141] libmachine: (addons-001438) Calling .GetIP
	I0916 10:22:12.248419   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.248678   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:12.248704   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.248832   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:12.249314   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:12.249485   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:12.249586   12265 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:22:12.249653   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:12.249707   12265 ssh_runner.go:195] Run: cat /version.json
	I0916 10:22:12.249728   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:12.252249   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.252497   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.252634   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:12.252657   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.252757   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:12.252904   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:12.252922   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:12.252925   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.253038   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:12.253093   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:12.253241   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:12.253258   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:12.253386   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:12.253515   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:12.362639   12265 ssh_runner.go:195] Run: systemctl --version
	I0916 10:22:12.368512   12265 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 10:22:12.527002   12265 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0916 10:22:12.532733   12265 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 10:22:12.532791   12265 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:22:12.548743   12265 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0916 10:22:12.548773   12265 start.go:495] detecting cgroup driver to use...
	I0916 10:22:12.548843   12265 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 10:22:12.564219   12265 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 10:22:12.578224   12265 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:22:12.578276   12265 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:22:12.591434   12265 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:22:12.604674   12265 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:22:12.712713   12265 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:22:12.868881   12265 docker.go:233] disabling docker service ...
	I0916 10:22:12.868945   12265 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:22:12.883262   12265 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:22:12.896034   12265 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:22:13.009183   12265 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:22:13.123591   12265 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:22:13.137411   12265 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:22:13.155768   12265 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 10:22:13.155832   12265 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:22:13.166378   12265 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 10:22:13.166436   12265 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:22:13.177199   12265 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:22:13.187753   12265 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:22:13.198460   12265 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:22:13.209356   12265 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:22:13.220222   12265 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:22:13.237721   12265 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:22:13.247992   12265 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:22:13.257214   12265 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0916 10:22:13.257274   12265 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0916 10:22:13.269843   12265 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:22:13.279361   12265 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:22:13.392424   12265 ssh_runner.go:195] Run: sudo systemctl restart crio
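
The block above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with a series of sed one-liners: pin the pause image, switch the cgroup manager to cgroupfs, re-add conmon_cgroup as "pod", and open net.ipv4.ip_unprivileged_port_start, then reload systemd and restart CRI-O. The sketch below mirrors the first two substitutions in Go regexp form, assuming the drop-in file has been read into a string; it illustrates the edits and is not minikube's crio.go.

package main

import (
	"fmt"
	"regexp"
)

// patchCrioConf mirrors two of the sed edits above: point CRI-O at the
// registry.k8s.io/pause:3.10 image and force the cgroupfs manager.
// Hypothetical helper for illustration only.
func patchCrioConf(conf string) string {
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	return cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
}

func main() {
	fmt.Println(patchCrioConf("pause_image = \"old\"\ncgroup_manager = \"systemd\"\n"))
}
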
	I0916 10:22:13.489919   12265 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 10:22:13.490002   12265 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 10:22:13.495269   12265 start.go:563] Will wait 60s for crictl version
	I0916 10:22:13.495342   12265 ssh_runner.go:195] Run: which crictl
	I0916 10:22:13.499375   12265 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:22:13.543037   12265 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0916 10:22:13.543161   12265 ssh_runner.go:195] Run: crio --version
	I0916 10:22:13.571422   12265 ssh_runner.go:195] Run: crio --version
	I0916 10:22:13.600892   12265 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0916 10:22:13.602164   12265 main.go:141] libmachine: (addons-001438) Calling .GetIP
	I0916 10:22:13.604725   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:13.605053   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:13.605090   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:13.605239   12265 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0916 10:22:13.609153   12265 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:22:13.621451   12265 kubeadm.go:883] updating cluster {Name:addons-001438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-001438 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.72 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 10:22:13.621560   12265 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:22:13.621616   12265 ssh_runner.go:195] Run: sudo crictl images --output json

	I0916 10:22:13.653616   12265 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0916 10:22:13.653695   12265 ssh_runner.go:195] Run: which lz4
	I0916 10:22:13.657722   12265 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0916 10:22:13.661843   12265 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0916 10:22:13.661873   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0916 10:22:14.968986   12265 crio.go:462] duration metric: took 1.311298771s to copy over tarball
	I0916 10:22:14.969053   12265 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0916 10:22:17.073836   12265 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.104757919s)
	I0916 10:22:17.073872   12265 crio.go:469] duration metric: took 2.104858266s to extract the tarball
	I0916 10:22:17.073881   12265 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0916 10:22:17.110316   12265 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:22:17.150207   12265 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 10:22:17.150233   12265 cache_images.go:84] Images are preloaded, skipping loading
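
The preload path above copies a single lz4-compressed tarball of container images into the guest and unpacks it under /var, which is much faster than pulling each image individually. Its filename encodes the preload schema version, Kubernetes version, container runtime, and architecture; the sketch below simply reassembles the name seen in the scp step (the naming pattern is inferred from that one observed filename, not taken from minikube's preload package).

package main

import "fmt"

// preloadName rebuilds the tarball name from the log above out of its parts.
// The format string is inferred from the observed filename; treat it as an
// illustration, not the canonical implementation.
func preloadName(schema, k8sVersion, runtime, arch string) string {
	return fmt.Sprintf("preloaded-images-k8s-%s-%s-%s-overlay-%s.tar.lz4",
		schema, k8sVersion, runtime, arch)
}

func main() {
	fmt.Println(preloadName("v18", "v1.31.1", "cri-o", "amd64"))
	// preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
}
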
	I0916 10:22:17.150241   12265 kubeadm.go:934] updating node { 192.168.39.72 8443 v1.31.1 crio true true} ...
	I0916 10:22:17.150343   12265 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-001438 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.72
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-001438 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 10:22:17.150424   12265 ssh_runner.go:195] Run: crio config
	I0916 10:22:17.195725   12265 cni.go:84] Creating CNI manager for ""
	I0916 10:22:17.195746   12265 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0916 10:22:17.195756   12265 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 10:22:17.195774   12265 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.72 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-001438 NodeName:addons-001438 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.72"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.72 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 10:22:17.195915   12265 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.72
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-001438"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.72
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.72"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 10:22:17.195969   12265 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:22:17.206079   12265 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:22:17.206139   12265 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 10:22:17.215719   12265 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0916 10:22:17.232125   12265 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:22:17.248126   12265 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
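
The kubeadm config dumped above is what was just written to /var/tmp/minikube/kubeadm.yaml.new; minikube renders it from Go text templates filled in with the node IP, API server port, and cluster name. Below is a trimmed, purely illustrative template for the InitConfiguration stanza (field values taken from the dump above; the template itself is a sketch, not minikube's real one, which lives in its bootstrapper package and is much larger).

package main

import (
	"os"
	"text/template"
)

// A cut-down sketch of how the InitConfiguration stanza above could be
// rendered from a template. Illustrative only.
const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.AdvertiseAddress}}
`

func main() {
	t := template.Must(template.New("init").Parse(initCfg))
	data := struct {
		AdvertiseAddress string
		APIServerPort    int
		NodeName         string
	}{"192.168.39.72", 8443, "addons-001438"}
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}
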
	I0916 10:22:17.264165   12265 ssh_runner.go:195] Run: grep 192.168.39.72	control-plane.minikube.internal$ /etc/hosts
	I0916 10:22:17.267727   12265 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.72	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:22:17.279787   12265 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:22:17.393283   12265 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:22:17.410756   12265 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438 for IP: 192.168.39.72
	I0916 10:22:17.410774   12265 certs.go:194] generating shared ca certs ...
	I0916 10:22:17.410794   12265 certs.go:226] acquiring lock for ca certs: {Name:mkc5eb18b90c7d501da61b378999fd65b785fd5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:17.410949   12265 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key
	I0916 10:22:17.480758   12265 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt ...
	I0916 10:22:17.480787   12265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt: {Name:mkc291c3a986acc7f4de9183c4ef6d249d8de5a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:17.480965   12265 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key ...
	I0916 10:22:17.480980   12265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key: {Name:mk56bc8b146d891ba5f741ad0bd339fffdb85989 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:17.481075   12265 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key
	I0916 10:22:17.673219   12265 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.crt ...
	I0916 10:22:17.673250   12265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.crt: {Name:mk8d6878492eab0d99f630fc495324e3b843781a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:17.673403   12265 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key ...
	I0916 10:22:17.673414   12265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key: {Name:mk082b50320d253da8f01ad2454b69492e000fb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:17.673482   12265 certs.go:256] generating profile certs ...
	I0916 10:22:17.673531   12265 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/client.key
	I0916 10:22:17.673544   12265 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/client.crt with IP's: []
	I0916 10:22:17.921779   12265 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/client.crt ...
	I0916 10:22:17.921811   12265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/client.crt: {Name:mk9172b9e8f20da0dd399e583d4f0391784c25bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:17.921970   12265 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/client.key ...
	I0916 10:22:17.921981   12265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/client.key: {Name:mk65d84f1710f9ab616402324cb2a91f749aa3d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:17.922048   12265 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/apiserver.key.b670da03
	I0916 10:22:17.922066   12265 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/apiserver.crt.b670da03 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.72]
	I0916 10:22:17.984449   12265 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/apiserver.crt.b670da03 ...
	I0916 10:22:17.984473   12265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/apiserver.crt.b670da03: {Name:mk697c0092db030ad4df50333f6d1db035d298e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:17.984627   12265 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/apiserver.key.b670da03 ...
	I0916 10:22:17.984638   12265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/apiserver.key.b670da03: {Name:mkf74035add612ea1883fde9b662a919a8d7c5c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:17.984705   12265 certs.go:381] copying /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/apiserver.crt.b670da03 -> /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/apiserver.crt
	I0916 10:22:17.984774   12265 certs.go:385] copying /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/apiserver.key.b670da03 -> /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/apiserver.key
	I0916 10:22:17.984818   12265 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/proxy-client.key
	I0916 10:22:17.984834   12265 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/proxy-client.crt with IP's: []
	I0916 10:22:18.105094   12265 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/proxy-client.crt ...
	I0916 10:22:18.105122   12265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/proxy-client.crt: {Name:mk12379583893d02aa599284bf7c2e673e4a585f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:18.105290   12265 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/proxy-client.key ...
	I0916 10:22:18.105300   12265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/proxy-client.key: {Name:mkddc10c89aa36609a41c940a83606fa36ac69df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:18.105453   12265 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:22:18.105484   12265 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem (1082 bytes)
	I0916 10:22:18.105509   12265 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:22:18.105531   12265 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem (1679 bytes)
	I0916 10:22:18.106125   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:22:18.132592   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:22:18.173674   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:22:18.200455   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:22:18.223366   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0916 10:22:18.246242   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 10:22:18.269411   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:22:18.292157   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 10:22:18.314508   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:22:18.337365   12265 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 10:22:18.353286   12265 ssh_runner.go:195] Run: openssl version
	I0916 10:22:18.358942   12265 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:22:18.369103   12265 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:22:18.373299   12265 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:22:18.373346   12265 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:22:18.378948   12265 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
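
The openssl and ln steps above install the generated minikubeCA into the guest's system trust store: the certificate copied to /usr/share/ca-certificates is linked under /etc/ssl/certs by its subject-hash filename (b5213941.0) so OpenSSL-based clients can find it. A small Go check that parses the same PEM and prints its subject is sketched below; the path is the one from the log, and the program is only a verification aid, not part of minikube.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

// Parses the CA that was copied to /usr/share/ca-certificates above and prints
// its subject; intended to be run inside the guest. Illustrative only.
func main() {
	data, err := os.ReadFile("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		fmt.Fprintln(os.Stderr, "read:", err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM data found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, "parse:", err)
		return
	}
	fmt.Printf("subject=%s isCA=%v\n", cert.Subject, cert.IsCA)
}
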
	I0916 10:22:18.389436   12265 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:22:18.393342   12265 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 10:22:18.393387   12265 kubeadm.go:392] StartCluster: {Name:addons-001438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-001438 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.72 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:22:18.393452   12265 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 10:22:18.393509   12265 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 10:22:18.429056   12265 cri.go:89] found id: ""
	I0916 10:22:18.429118   12265 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 10:22:18.439123   12265 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 10:22:18.448797   12265 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 10:22:18.458281   12265 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 10:22:18.458303   12265 kubeadm.go:157] found existing configuration files:
	
	I0916 10:22:18.458357   12265 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 10:22:18.467304   12265 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 10:22:18.467373   12265 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 10:22:18.476476   12265 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 10:22:18.485402   12265 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 10:22:18.485467   12265 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 10:22:18.494643   12265 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 10:22:18.503578   12265 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 10:22:18.503657   12265 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 10:22:18.512633   12265 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 10:22:18.521391   12265 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 10:22:18.521454   12265 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 10:22:18.530381   12265 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0916 10:22:18.584992   12265 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 10:22:18.585058   12265 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 10:22:18.700906   12265 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 10:22:18.701050   12265 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 10:22:18.701195   12265 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 10:22:18.712665   12265 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 10:22:18.808124   12265 out.go:235]   - Generating certificates and keys ...
	I0916 10:22:18.808238   12265 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 10:22:18.808308   12265 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 10:22:18.808390   12265 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 10:22:18.884612   12265 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 10:22:19.103481   12265 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 10:22:19.230175   12265 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 10:22:19.422850   12265 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 10:22:19.423077   12265 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-001438 localhost] and IPs [192.168.39.72 127.0.0.1 ::1]
	I0916 10:22:19.499430   12265 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 10:22:19.499746   12265 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-001438 localhost] and IPs [192.168.39.72 127.0.0.1 ::1]
	I0916 10:22:19.689533   12265 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 10:22:19.770560   12265 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 10:22:20.159783   12265 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 10:22:20.160053   12265 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 10:22:20.575897   12265 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 10:22:20.728566   12265 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 10:22:21.092038   12265 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 10:22:21.382957   12265 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 10:22:21.446452   12265 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 10:22:21.447068   12265 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 10:22:21.451577   12265 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 10:22:21.454426   12265 out.go:235]   - Booting up control plane ...
	I0916 10:22:21.454540   12265 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 10:22:21.454614   12265 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 10:22:21.454722   12265 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 10:22:21.468531   12265 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 10:22:21.475700   12265 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 10:22:21.475767   12265 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 10:22:21.606009   12265 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 10:22:21.606143   12265 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 10:22:22.124369   12265 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 517.881759ms
	I0916 10:22:22.124492   12265 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 10:22:27.123389   12265 kubeadm.go:310] [api-check] The API server is healthy after 5.002163965s
	I0916 10:22:27.138636   12265 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 10:22:27.154171   12265 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 10:22:27.185604   12265 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 10:22:27.185839   12265 kubeadm.go:310] [mark-control-plane] Marking the node addons-001438 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 10:22:27.198602   12265 kubeadm.go:310] [bootstrap-token] Using token: os1o8m.q16efzg2rjnkpln8
	I0916 10:22:27.199966   12265 out.go:235]   - Configuring RBAC rules ...
	I0916 10:22:27.200085   12265 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 10:22:27.209733   12265 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 10:22:27.218630   12265 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 10:22:27.222473   12265 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 10:22:27.226151   12265 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 10:22:27.230516   12265 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 10:22:27.529586   12265 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 10:22:27.967178   12265 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 10:22:28.529936   12265 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 10:22:28.529960   12265 kubeadm.go:310] 
	I0916 10:22:28.530028   12265 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 10:22:28.530044   12265 kubeadm.go:310] 
	I0916 10:22:28.530137   12265 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 10:22:28.530173   12265 kubeadm.go:310] 
	I0916 10:22:28.530227   12265 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 10:22:28.530307   12265 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 10:22:28.530390   12265 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 10:22:28.530397   12265 kubeadm.go:310] 
	I0916 10:22:28.530463   12265 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 10:22:28.530472   12265 kubeadm.go:310] 
	I0916 10:22:28.530525   12265 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 10:22:28.530537   12265 kubeadm.go:310] 
	I0916 10:22:28.530609   12265 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 10:22:28.530728   12265 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 10:22:28.530832   12265 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 10:22:28.530868   12265 kubeadm.go:310] 
	I0916 10:22:28.530981   12265 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 10:22:28.531080   12265 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 10:22:28.531091   12265 kubeadm.go:310] 
	I0916 10:22:28.531215   12265 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token os1o8m.q16efzg2rjnkpln8 \
	I0916 10:22:28.531358   12265 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:18e2e9ba1c06caae6264fad234e64aee68efc10986a3ab0ed9e448768962daf7 \
	I0916 10:22:28.531389   12265 kubeadm.go:310] 	--control-plane 
	I0916 10:22:28.531397   12265 kubeadm.go:310] 
	I0916 10:22:28.531518   12265 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 10:22:28.531528   12265 kubeadm.go:310] 
	I0916 10:22:28.531639   12265 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token os1o8m.q16efzg2rjnkpln8 \
	I0916 10:22:28.531783   12265 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:18e2e9ba1c06caae6264fad234e64aee68efc10986a3ab0ed9e448768962daf7 
	I0916 10:22:28.532220   12265 kubeadm.go:310] W0916 10:22:18.568727     816 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:22:28.532498   12265 kubeadm.go:310] W0916 10:22:18.569597     816 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:22:28.532623   12265 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 10:22:28.532635   12265 cni.go:84] Creating CNI manager for ""
	I0916 10:22:28.532642   12265 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0916 10:22:28.534239   12265 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0916 10:22:28.535682   12265 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0916 10:22:28.547306   12265 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
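The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist above is minikube's bridge CNI configuration; the log does not show its contents. As a rough illustration only (assumed values, not the actual file written here), a minimal bridge conflist of this kind looks like:

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }

The bridge plugin attaches each pod's veth to a Linux bridge on the node, host-local IPAM assigns addresses from the configured subnet (the 10.244.0.0/16 value is an assumption for illustration), and with the crio runtime it is the container runtime that loads the lowest-sorting file in /etc/cni/net.d, which is why minikube names it 1-k8s.conflist.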
	I0916 10:22:28.567029   12265 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 10:22:28.567083   12265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:22:28.567116   12265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-001438 minikube.k8s.io/updated_at=2024_09_16T10_22_28_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=addons-001438 minikube.k8s.io/primary=true
	I0916 10:22:28.599898   12265 ops.go:34] apiserver oom_adj: -16
	I0916 10:22:28.718193   12265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:22:29.219097   12265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:22:29.718331   12265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:22:30.219213   12265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:22:30.718728   12265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:22:31.218997   12265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:22:31.719218   12265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:22:32.218543   12265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:22:32.335651   12265 kubeadm.go:1113] duration metric: took 3.768632423s to wait for elevateKubeSystemPrivileges
	I0916 10:22:32.335685   12265 kubeadm.go:394] duration metric: took 13.942299744s to StartCluster
	I0916 10:22:32.335709   12265 settings.go:142] acquiring lock: {Name:mka3093e7066e81538e0a2685a28fdafde8d951b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:32.335851   12265 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3851/kubeconfig
	I0916 10:22:32.336274   12265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/kubeconfig: {Name:mkd6460249badead51f07eabf604da45528f6a24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:32.336491   12265 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 10:22:32.336522   12265 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.72 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 10:22:32.336653   12265 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0916 10:22:32.336724   12265 config.go:182] Loaded profile config "addons-001438": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:22:32.336769   12265 addons.go:69] Setting default-storageclass=true in profile "addons-001438"
	I0916 10:22:32.336779   12265 addons.go:69] Setting ingress-dns=true in profile "addons-001438"
	I0916 10:22:32.336787   12265 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-001438"
	I0916 10:22:32.336780   12265 addons.go:69] Setting ingress=true in profile "addons-001438"
	I0916 10:22:32.336793   12265 addons.go:69] Setting cloud-spanner=true in profile "addons-001438"
	I0916 10:22:32.336813   12265 addons.go:69] Setting inspektor-gadget=true in profile "addons-001438"
	I0916 10:22:32.336820   12265 addons.go:69] Setting gcp-auth=true in profile "addons-001438"
	I0916 10:22:32.336832   12265 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-001438"
	I0916 10:22:32.336835   12265 addons.go:234] Setting addon cloud-spanner=true in "addons-001438"
	I0916 10:22:32.336828   12265 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-001438"
	I0916 10:22:32.336844   12265 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-001438"
	I0916 10:22:32.336825   12265 addons.go:234] Setting addon inspektor-gadget=true in "addons-001438"
	I0916 10:22:32.336853   12265 addons.go:69] Setting registry=true in profile "addons-001438"
	I0916 10:22:32.336867   12265 addons.go:234] Setting addon registry=true in "addons-001438"
	I0916 10:22:32.336883   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.336888   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.336896   12265 addons.go:69] Setting helm-tiller=true in profile "addons-001438"
	I0916 10:22:32.336908   12265 addons.go:234] Setting addon helm-tiller=true in "addons-001438"
	I0916 10:22:32.336937   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.336940   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.336844   12265 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-001438"
	I0916 10:22:32.337250   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.337262   12265 addons.go:69] Setting volcano=true in profile "addons-001438"
	I0916 10:22:32.337273   12265 addons.go:234] Setting addon volcano=true in "addons-001438"
	I0916 10:22:32.337281   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.337295   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.337315   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.337328   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.336808   12265 addons.go:234] Setting addon ingress=true in "addons-001438"
	I0916 10:22:32.337347   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.337348   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.337365   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.337367   12265 addons.go:69] Setting volumesnapshots=true in profile "addons-001438"
	I0916 10:22:32.337379   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.337381   12265 addons.go:234] Setting addon volumesnapshots=true in "addons-001438"
	I0916 10:22:32.337412   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.336888   12265 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-001438"
	I0916 10:22:32.337442   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.336769   12265 addons.go:69] Setting yakd=true in profile "addons-001438"
	I0916 10:22:32.337489   12265 addons.go:234] Setting addon yakd=true in "addons-001438"
	I0916 10:22:32.337633   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.337660   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.336835   12265 addons.go:69] Setting metrics-server=true in profile "addons-001438"
	I0916 10:22:32.337353   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.337714   12265 addons.go:234] Setting addon metrics-server=true in "addons-001438"
	I0916 10:22:32.337741   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.337700   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.337795   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.336844   12265 mustload.go:65] Loading cluster: addons-001438
	I0916 10:22:32.336824   12265 addons.go:69] Setting storage-provisioner=true in profile "addons-001438"
	I0916 10:22:32.337840   12265 addons.go:234] Setting addon storage-provisioner=true in "addons-001438"
	I0916 10:22:32.337328   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.337881   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.336804   12265 addons.go:234] Setting addon ingress-dns=true in "addons-001438"
	I0916 10:22:32.337251   12265 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-001438"
	I0916 10:22:32.337944   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.338072   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.338099   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.338127   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.338301   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.338331   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.338413   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.338421   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.338448   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.338455   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.338446   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.338765   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.338792   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.338818   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.338845   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.338995   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.339304   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.339363   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.342405   12265 out.go:177] * Verifying Kubernetes components...
	I0916 10:22:32.343665   12265 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:22:32.357106   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34833
	I0916 10:22:32.357444   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35947
	I0916 10:22:32.357655   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37677
	I0916 10:22:32.357797   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.357897   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.358211   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.358403   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.358419   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.358562   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.358574   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.358633   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37893
	I0916 10:22:32.358790   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.358949   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.358960   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.359007   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44299
	I0916 10:22:32.369699   12265 config.go:182] Loaded profile config "addons-001438": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:22:32.369748   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.369818   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.370020   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.370060   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.370069   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.370101   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.370194   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.370269   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.370379   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.370390   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.370789   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.370827   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.370908   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.370969   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.371094   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.371111   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.371475   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.371508   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.371573   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.371638   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.371663   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.371731   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.386697   12265 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-001438"
	I0916 10:22:32.386747   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.386763   12265 addons.go:234] Setting addon default-storageclass=true in "addons-001438"
	I0916 10:22:32.386810   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.387114   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.387173   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.387252   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.387291   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.408433   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37283
	I0916 10:22:32.409200   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.409836   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.409856   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.410249   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.410830   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.410872   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.411145   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42803
	I0916 10:22:32.411578   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.413298   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.413319   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.414168   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34147
	I0916 10:22:32.414190   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45747
	I0916 10:22:32.414292   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36809
	I0916 10:22:32.414570   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.414671   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.415178   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.415195   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.415681   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.416214   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.416252   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.416442   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42029
	I0916 10:22:32.416592   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.417197   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.417231   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.417415   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40175
	I0916 10:22:32.417454   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.417595   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.417608   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.417843   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.417917   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.418037   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.418050   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.418410   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.418443   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.418409   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.418501   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.419031   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.419065   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.419266   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.419281   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.419404   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.419414   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.419702   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.419847   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.420545   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.421091   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.421133   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.421574   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.421979   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40845
	I0916 10:22:32.422963   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.423382   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.423399   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.423697   12265 out.go:177]   - Using image docker.io/registry:2.8.3
	I0916 10:22:32.423813   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.424320   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.424354   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.425846   12265 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0916 10:22:32.425941   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45967
	I0916 10:22:32.426062   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42039
	I0916 10:22:32.426213   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40085
	I0916 10:22:32.426381   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.426757   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.426931   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.426942   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.426976   12265 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0916 10:22:32.426992   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0916 10:22:32.427011   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.427391   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.427470   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.427489   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.427946   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.428354   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.428385   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.428598   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.428889   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.428924   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.429188   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.429202   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.429517   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.431934   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33361
	I0916 10:22:32.431987   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.432541   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.432563   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.432751   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.432883   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.432998   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.433120   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:32.433712   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.435531   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.435730   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:32.435742   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:32.435888   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:32.435899   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:32.435907   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:32.435913   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:32.436070   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:32.436085   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	W0916 10:22:32.436166   12265 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0916 10:22:32.440699   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35285
	I0916 10:22:32.441072   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.441617   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.441644   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.441979   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.442497   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.442531   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.450769   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36009
	I0916 10:22:32.451259   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.451700   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.451718   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.452549   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.453092   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.453146   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.454430   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42107
	I0916 10:22:32.454743   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.455061   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.455149   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45965
	I0916 10:22:32.455842   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.455847   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.455860   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.455871   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.455922   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.456243   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.456542   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.456622   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.456639   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.456747   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.457901   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34395
	I0916 10:22:32.458037   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.458209   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.458254   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.458704   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.458721   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.459089   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.459296   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.459533   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.460121   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.460511   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.460545   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.460978   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41771
	I0916 10:22:32.461180   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.461244   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.461735   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.461753   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.461805   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.462195   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46479
	I0916 10:22:32.462331   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.462809   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.464034   12265 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0916 10:22:32.464150   12265 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0916 10:22:32.464278   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.464668   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.464696   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.465237   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.466010   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.465566   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42201
	I0916 10:22:32.466246   12265 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0916 10:22:32.466259   12265 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0916 10:22:32.466276   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.467014   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.467145   12265 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 10:22:32.467235   12265 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0916 10:22:32.467359   12265 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0916 10:22:32.467370   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0916 10:22:32.467385   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.467696   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.467711   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.468100   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.468152   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.468326   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.468710   12265 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0916 10:22:32.468725   12265 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0916 10:22:32.468742   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.468966   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39539
	I0916 10:22:32.469146   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.469463   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.469917   12265 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 10:22:32.469918   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.470000   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.470971   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43697
	I0916 10:22:32.471473   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.471695   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.472001   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.472015   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.472269   12265 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0916 10:22:32.472471   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.472523   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43687
	I0916 10:22:32.472664   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.472783   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.472993   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.473106   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.473134   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.473329   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.473377   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.473597   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.473743   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.473790   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.473851   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.474147   12265 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0916 10:22:32.474163   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0916 10:22:32.474178   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.474793   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.474941   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.474955   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.475234   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:32.475510   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.475619   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.475650   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.475665   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.475824   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.476100   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.476264   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.476604   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.476644   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.476828   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:32.476940   12265 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0916 10:22:32.477612   12265 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 10:22:32.478260   12265 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 10:22:32.478276   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0916 10:22:32.478291   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.478585   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.478604   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.478880   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.479035   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.479168   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.479299   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:32.479921   12265 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:22:32.479937   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 10:22:32.479951   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.480259   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.480742   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.481958   12265 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0916 10:22:32.482834   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43787
	I0916 10:22:32.482998   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.483118   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.483310   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.483473   12265 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0916 10:22:32.483494   12265 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0916 10:22:32.483512   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.483802   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.483828   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.483888   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.483903   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.483899   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.483930   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.484092   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.484159   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.484194   12265 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0916 10:22:32.484411   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.484581   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.484636   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.484681   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:32.484892   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.484958   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.485096   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.485218   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.485248   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.485262   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.485372   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:32.485494   12265 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0916 10:22:32.485505   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0916 10:22:32.485519   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.485781   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.486028   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.486181   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.486318   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:32.487186   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.487422   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.487675   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.487695   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.487742   12265 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 10:22:32.487752   12265 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 10:22:32.487764   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.487810   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.487995   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.488225   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.488378   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:32.489702   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.490168   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.490188   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.490394   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.490571   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.490713   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.490823   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:32.492068   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.492458   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.492479   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.492686   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.492906   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.492915   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39577
	I0916 10:22:32.493044   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.493239   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:32.493450   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.493933   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.493950   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.494562   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.494891   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.496932   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.498147   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46021
	I0916 10:22:32.498828   12265 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0916 10:22:32.499232   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.499608   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.499634   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.499936   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.500124   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.500215   12265 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0916 10:22:32.500241   12265 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0916 10:22:32.500262   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.501763   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.503323   12265 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0916 10:22:32.503738   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.504260   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.504287   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.504422   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.504578   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.504721   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.504800   12265 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0916 10:22:32.504813   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0916 10:22:32.504828   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.504844   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:32.507073   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35053
	I0916 10:22:32.507489   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.507971   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.507994   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.508014   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37447
	I0916 10:22:32.508383   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.508455   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44827
	I0916 10:22:32.508996   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.509012   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.509054   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.509082   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.509517   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.509552   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.509551   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.509573   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.509882   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.510086   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.510151   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.510169   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.510570   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.510576   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.510696   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.510739   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.510822   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.510947   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	W0916 10:22:32.511685   12265 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:43352->192.168.39.72:22: read: connection reset by peer
	I0916 10:22:32.511711   12265 retry.go:31] will retry after 323.390168ms: ssh: handshake failed: read tcp 192.168.39.1:43352->192.168.39.72:22: read: connection reset by peer
	I0916 10:22:32.513110   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.513548   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.515216   12265 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0916 10:22:32.516467   12265 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0916 10:22:32.517228   12265 out.go:177]   - Using image docker.io/busybox:stable
	I0916 10:22:32.518463   12265 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0916 10:22:32.519691   12265 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0916 10:22:32.521193   12265 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0916 10:22:32.521287   12265 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 10:22:32.521309   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0916 10:22:32.521330   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.523957   12265 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0916 10:22:32.524563   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.524915   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.524939   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.525078   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.525271   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.525408   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.525548   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	W0916 10:22:32.526174   12265 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:43362->192.168.39.72:22: read: connection reset by peer
	I0916 10:22:32.526199   12265 retry.go:31] will retry after 208.869548ms: ssh: handshake failed: read tcp 192.168.39.1:43362->192.168.39.72:22: read: connection reset by peer
	I0916 10:22:32.526327   12265 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0916 10:22:32.527568   12265 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0916 10:22:32.528811   12265 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0916 10:22:32.530140   12265 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0916 10:22:32.530154   12265 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0916 10:22:32.530169   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.533281   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.533666   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.533688   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.533886   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.534072   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.534227   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.534367   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:32.700911   12265 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:22:32.700984   12265 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 10:22:32.785482   12265 node_ready.go:35] waiting up to 6m0s for node "addons-001438" to be "Ready" ...
	I0916 10:22:32.822842   12265 node_ready.go:49] node "addons-001438" has status "Ready":"True"
	I0916 10:22:32.822881   12265 node_ready.go:38] duration metric: took 37.361645ms for node "addons-001438" to be "Ready" ...
	I0916 10:22:32.822895   12265 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:22:32.861506   12265 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0916 10:22:32.861543   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0916 10:22:32.862634   12265 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-001438" in "kube-system" namespace to be "Ready" ...
	I0916 10:22:32.929832   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 10:22:32.943014   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:22:32.952437   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 10:22:32.991347   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0916 10:22:32.995067   12265 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0916 10:22:32.995096   12265 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0916 10:22:33.036627   12265 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0916 10:22:33.036657   12265 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0916 10:22:33.036890   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0916 10:22:33.060821   12265 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0916 10:22:33.060843   12265 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0916 10:22:33.069120   12265 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0916 10:22:33.069156   12265 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0916 10:22:33.070018   12265 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0916 10:22:33.070038   12265 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0916 10:22:33.073512   12265 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0916 10:22:33.073535   12265 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0916 10:22:33.137058   12265 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0916 10:22:33.137088   12265 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0916 10:22:33.226855   12265 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0916 10:22:33.226884   12265 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0916 10:22:33.270492   12265 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0916 10:22:33.270513   12265 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0916 10:22:33.316169   12265 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0916 10:22:33.316195   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0916 10:22:33.316355   12265 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0916 10:22:33.316373   12265 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0916 10:22:33.316509   12265 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0916 10:22:33.316522   12265 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0916 10:22:33.327110   12265 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0916 10:22:33.327126   12265 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0916 10:22:33.354597   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0916 10:22:33.420390   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 10:22:33.435680   12265 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0916 10:22:33.435717   12265 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0916 10:22:33.439954   12265 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0916 10:22:33.439978   12265 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0916 10:22:33.444981   12265 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 10:22:33.445002   12265 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0916 10:22:33.522524   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0916 10:22:33.536060   12265 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0916 10:22:33.536089   12265 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0916 10:22:33.569830   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 10:22:33.590335   12265 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0916 10:22:33.590366   12265 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0916 10:22:33.601121   12265 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0916 10:22:33.601154   12265 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0916 10:22:33.623197   12265 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:22:33.623219   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0916 10:22:33.629904   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0916 10:22:33.693404   12265 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0916 10:22:33.693424   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0916 10:22:33.747802   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:22:33.761431   12265 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0916 10:22:33.761461   12265 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0916 10:22:33.774811   12265 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0916 10:22:33.774845   12265 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0916 10:22:33.825893   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0916 10:22:33.895859   12265 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0916 10:22:33.895893   12265 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0916 10:22:34.018321   12265 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0916 10:22:34.018345   12265 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0916 10:22:34.260751   12265 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0916 10:22:34.260776   12265 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0916 10:22:34.288705   12265 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0916 10:22:34.288733   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0916 10:22:34.575904   12265 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0916 10:22:34.575932   12265 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0916 10:22:34.578707   12265 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0916 10:22:34.578728   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0916 10:22:34.872174   12265 pod_ready.go:103] pod "etcd-addons-001438" in "kube-system" namespace has status "Ready":"False"
	I0916 10:22:35.002110   12265 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0916 10:22:35.002133   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0916 10:22:35.053333   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0916 10:22:35.173148   12265 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.47211504s)
	I0916 10:22:35.173178   12265 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0916 10:22:35.173148   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.243289168s)
	I0916 10:22:35.173338   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:35.173355   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:35.173706   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:35.173723   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:35.173737   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:35.173751   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:35.173762   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:35.174037   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:35.174053   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:35.219712   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:35.219745   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:35.220033   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:35.220084   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:35.326225   12265 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0916 10:22:35.326245   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0916 10:22:35.667079   12265 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 10:22:35.667102   12265 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0916 10:22:35.677467   12265 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-001438" context rescaled to 1 replicas
	I0916 10:22:36.005922   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 10:22:36.880549   12265 pod_ready.go:103] pod "etcd-addons-001438" in "kube-system" namespace has status "Ready":"False"
	I0916 10:22:37.248962   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.296492058s)
	I0916 10:22:37.249022   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:37.249036   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:37.249050   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.306004364s)
	I0916 10:22:37.249050   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.257675255s)
	I0916 10:22:37.249138   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:37.249160   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:37.249084   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:37.249221   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:37.249330   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:37.249355   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:37.249374   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:37.249434   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:37.249460   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:37.249476   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:37.249440   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:37.249499   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:37.249529   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:37.249541   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:37.249485   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:37.249593   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:37.249655   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:37.249676   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:37.251028   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:37.251216   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:37.251214   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:37.251232   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:37.251278   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:37.251288   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:38.978538   12265 pod_ready.go:93] pod "etcd-addons-001438" in "kube-system" namespace has status "Ready":"True"
	I0916 10:22:38.978561   12265 pod_ready.go:82] duration metric: took 6.115904528s for pod "etcd-addons-001438" in "kube-system" namespace to be "Ready" ...
	I0916 10:22:38.978572   12265 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-001438" in "kube-system" namespace to be "Ready" ...
	I0916 10:22:39.179661   12265 pod_ready.go:93] pod "kube-apiserver-addons-001438" in "kube-system" namespace has status "Ready":"True"
	I0916 10:22:39.179691   12265 pod_ready.go:82] duration metric: took 201.112317ms for pod "kube-apiserver-addons-001438" in "kube-system" namespace to be "Ready" ...
	I0916 10:22:39.179705   12265 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-001438" in "kube-system" namespace to be "Ready" ...
	I0916 10:22:39.377607   12265 pod_ready.go:93] pod "kube-controller-manager-addons-001438" in "kube-system" namespace has status "Ready":"True"
	I0916 10:22:39.377640   12265 pod_ready.go:82] duration metric: took 197.926831ms for pod "kube-controller-manager-addons-001438" in "kube-system" namespace to be "Ready" ...
	I0916 10:22:39.377656   12265 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-66flj" in "kube-system" namespace to be "Ready" ...
	I0916 10:22:39.509747   12265 pod_ready.go:93] pod "kube-proxy-66flj" in "kube-system" namespace has status "Ready":"True"
	I0916 10:22:39.509775   12265 pod_ready.go:82] duration metric: took 132.110984ms for pod "kube-proxy-66flj" in "kube-system" namespace to be "Ready" ...
	I0916 10:22:39.509789   12265 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-001438" in "kube-system" namespace to be "Ready" ...
	I0916 10:22:39.633441   12265 pod_ready.go:93] pod "kube-scheduler-addons-001438" in "kube-system" namespace has status "Ready":"True"
	I0916 10:22:39.633475   12265 pod_ready.go:82] duration metric: took 123.676997ms for pod "kube-scheduler-addons-001438" in "kube-system" namespace to be "Ready" ...
	I0916 10:22:39.633487   12265 pod_ready.go:39] duration metric: took 6.810577473s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:22:39.633508   12265 api_server.go:52] waiting for apiserver process to appear ...
	I0916 10:22:39.633572   12265 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:22:39.633966   12265 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0916 10:22:39.634003   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:39.637511   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:39.638022   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:39.638050   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:39.638265   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:39.638449   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:39.638594   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:39.638741   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:40.248183   12265 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0916 10:22:40.342621   12265 addons.go:234] Setting addon gcp-auth=true in "addons-001438"
	I0916 10:22:40.342682   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:40.343054   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:40.343105   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:40.358807   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35001
	I0916 10:22:40.359276   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:40.359793   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:40.359818   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:40.360152   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:40.360750   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:40.360794   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:40.375531   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39519
	I0916 10:22:40.375999   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:40.376410   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:40.376431   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:40.376712   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:40.376880   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:40.378466   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:40.378706   12265 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0916 10:22:40.378736   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:40.381488   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:40.381978   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:40.381997   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:40.382162   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:40.382374   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:40.382527   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:40.382728   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:41.185716   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.148787276s)
	I0916 10:22:41.185775   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.185787   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.185792   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.831162948s)
	I0916 10:22:41.185821   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.185842   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.185899   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.76548291s)
	I0916 10:22:41.185927   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.185929   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.663383888s)
	I0916 10:22:41.185940   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.185948   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.185957   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.186031   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.616165984s)
	I0916 10:22:41.186072   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.186084   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.186162   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.55623363s)
	I0916 10:22:41.186179   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.186188   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.186223   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:41.186233   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.186246   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.186249   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.186259   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.186272   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.186279   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.186259   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.186321   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.438489786s)
	W0916 10:22:41.186349   12265 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0916 10:22:41.186370   12265 retry.go:31] will retry after 282.502814ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0916 10:22:41.186323   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.186452   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.360528333s)
	I0916 10:22:41.186474   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.186483   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.186530   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:41.186552   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:41.186580   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:41.186592   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.133220852s)
	I0916 10:22:41.186602   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.186608   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.186609   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.186627   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.186684   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.186691   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.186698   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.186704   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.186797   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:41.186819   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.186826   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.186833   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.186851   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:41.186871   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.186884   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.186893   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.186901   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.186907   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.186936   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.186943   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.186990   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.186999   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.187006   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.187013   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.187860   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:41.187892   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.187899   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.187906   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.187912   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.188173   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:41.188191   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.188200   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.188204   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.188209   12265 addons.go:475] Verifying addon metrics-server=true in "addons-001438"
	I0916 10:22:41.188211   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.188241   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.188250   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.188259   12265 addons.go:475] Verifying addon ingress=true in "addons-001438"
	I0916 10:22:41.190004   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:41.190036   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.190042   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.190099   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:41.190137   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:41.190141   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.190152   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.190155   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.190159   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.190162   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.190167   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.190170   12265 addons.go:475] Verifying addon registry=true in "addons-001438"
	I0916 10:22:41.190534   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:41.190570   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.190579   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.191944   12265 out.go:177] * Verifying registry addon...
	I0916 10:22:41.191953   12265 out.go:177] * Verifying ingress addon...
	I0916 10:22:41.192858   12265 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-001438 service yakd-dashboard -n yakd-dashboard
	
	I0916 10:22:41.193752   12265 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0916 10:22:41.193752   12265 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0916 10:22:41.245022   12265 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0916 10:22:41.245042   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:41.245048   12265 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0916 10:22:41.245062   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:41.270906   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.270924   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.271190   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.271210   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.469044   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:22:41.699366   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:41.699576   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:42.200823   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:42.201220   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:42.707853   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:42.708238   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:43.062276   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.056308906s)
	I0916 10:22:43.062328   12265 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.428733709s)
	I0916 10:22:43.062359   12265 api_server.go:72] duration metric: took 10.72580389s to wait for apiserver process to appear ...
	I0916 10:22:43.062372   12265 api_server.go:88] waiting for apiserver healthz status ...
	I0916 10:22:43.062397   12265 api_server.go:253] Checking apiserver healthz at https://192.168.39.72:8443/healthz ...
	I0916 10:22:43.062411   12265 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.683683571s)
	I0916 10:22:43.062334   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:43.062455   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:43.062799   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:43.062819   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:43.062830   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:43.062838   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:43.062846   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:43.063060   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:43.063085   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:43.063094   12265 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-001438"
	I0916 10:22:43.064955   12265 out.go:177] * Verifying csi-hostpath-driver addon...
	I0916 10:22:43.065015   12265 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 10:22:43.066605   12265 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0916 10:22:43.067509   12265 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0916 10:22:43.067847   12265 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0916 10:22:43.067859   12265 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0916 10:22:43.093271   12265 api_server.go:279] https://192.168.39.72:8443/healthz returned 200:
	ok
	I0916 10:22:43.093820   12265 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0916 10:22:43.093839   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:43.095011   12265 api_server.go:141] control plane version: v1.31.1
	I0916 10:22:43.095033   12265 api_server.go:131] duration metric: took 32.653755ms to wait for apiserver health ...
	I0916 10:22:43.095043   12265 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 10:22:43.123828   12265 system_pods.go:59] 19 kube-system pods found
	I0916 10:22:43.123858   12265 system_pods.go:61] "coredns-7c65d6cfc9-j5ndn" [207f35d6-991e-4f00-8881-a877648e3c38] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0916 10:22:43.123864   12265 system_pods.go:61] "coredns-7c65d6cfc9-pzm59" [f910982f-9f91-4da6-ba1d-d7eb1a992baa] Running
	I0916 10:22:43.123871   12265 system_pods.go:61] "csi-hostpath-attacher-0" [15e8a432-87ee-461f-96ce-576b2587d960] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0916 10:22:43.123876   12265 system_pods.go:61] "csi-hostpath-resizer-0" [db26d555-4e0f-4738-bd80-a27dc57d7534] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0916 10:22:43.123883   12265 system_pods.go:61] "csi-hostpathplugin-xgk62" [dd216434-c2ed-4884-92ea-f80bec8e2fcc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0916 10:22:43.123886   12265 system_pods.go:61] "etcd-addons-001438" [5c7e7021-4329-43f8-90cc-196afcb3b9f5] Running
	I0916 10:22:43.123903   12265 system_pods.go:61] "kube-apiserver-addons-001438" [b8c3f368-41ad-4840-aa92-014d25030925] Running
	I0916 10:22:43.123906   12265 system_pods.go:61] "kube-controller-manager-addons-001438" [9606f8aa-be05-4d1e-b5c9-9e625663d5de] Running
	I0916 10:22:43.123913   12265 system_pods.go:61] "kube-ingress-dns-minikube" [10ccbaa1-333f-4586-a1d5-dc73421e2bd1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0916 10:22:43.123917   12265 system_pods.go:61] "kube-proxy-66flj" [56e16daa-1626-4b83-a183-7b9ad90ea2d6] Running
	I0916 10:22:43.123923   12265 system_pods.go:61] "kube-scheduler-addons-001438" [a9909fcc-06cd-4e4e-b6be-d0a54a31df94] Running
	I0916 10:22:43.123928   12265 system_pods.go:61] "metrics-server-84c5f94fbc-9hj9f" [76382ab7-9b7a-4bd6-b19c-7a77ba051f1d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 10:22:43.123935   12265 system_pods.go:61] "nvidia-device-plugin-daemonset-j6n9b" [83260537-f74d-40a8-bcbc-db785a97aac8] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0916 10:22:43.123943   12265 system_pods.go:61] "registry-66c9cd494c-jq22w" [04e85c00-e6fb-4eee-96aa-273a4f6f273f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 10:22:43.123948   12265 system_pods.go:61] "registry-proxy-kk7lc" [2f0e1170-c654-4939-91ca-cd5b2bf6ae2a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 10:22:43.123955   12265 system_pods.go:61] "snapshot-controller-56fcc65765-8nq94" [7b65ff07-8e47-4c5a-883c-f6470e930f61] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 10:22:43.123960   12265 system_pods.go:61] "snapshot-controller-56fcc65765-pv2sr" [85f5bbdb-96af-4f7d-aef3-644db7166242] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 10:22:43.123967   12265 system_pods.go:61] "storage-provisioner" [c435c6db-b60d-4298-9687-bb885202e358] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0916 10:22:43.123972   12265 system_pods.go:61] "tiller-deploy-b48cc5f79-b76fb" [a96b112c-4171-4416-9e14-ac1f69fd033e] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0916 10:22:43.123980   12265 system_pods.go:74] duration metric: took 28.931422ms to wait for pod list to return data ...
	I0916 10:22:43.123988   12265 default_sa.go:34] waiting for default service account to be created ...
	I0916 10:22:43.137057   12265 default_sa.go:45] found service account: "default"
	I0916 10:22:43.137084   12265 default_sa.go:55] duration metric: took 13.088793ms for default service account to be created ...
	I0916 10:22:43.137095   12265 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 10:22:43.166020   12265 system_pods.go:86] 19 kube-system pods found
	I0916 10:22:43.166054   12265 system_pods.go:89] "coredns-7c65d6cfc9-j5ndn" [207f35d6-991e-4f00-8881-a877648e3c38] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0916 10:22:43.166063   12265 system_pods.go:89] "coredns-7c65d6cfc9-pzm59" [f910982f-9f91-4da6-ba1d-d7eb1a992baa] Running
	I0916 10:22:43.166075   12265 system_pods.go:89] "csi-hostpath-attacher-0" [15e8a432-87ee-461f-96ce-576b2587d960] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0916 10:22:43.166088   12265 system_pods.go:89] "csi-hostpath-resizer-0" [db26d555-4e0f-4738-bd80-a27dc57d7534] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0916 10:22:43.166100   12265 system_pods.go:89] "csi-hostpathplugin-xgk62" [dd216434-c2ed-4884-92ea-f80bec8e2fcc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0916 10:22:43.166108   12265 system_pods.go:89] "etcd-addons-001438" [5c7e7021-4329-43f8-90cc-196afcb3b9f5] Running
	I0916 10:22:43.166118   12265 system_pods.go:89] "kube-apiserver-addons-001438" [b8c3f368-41ad-4840-aa92-014d25030925] Running
	I0916 10:22:43.166126   12265 system_pods.go:89] "kube-controller-manager-addons-001438" [9606f8aa-be05-4d1e-b5c9-9e625663d5de] Running
	I0916 10:22:43.166136   12265 system_pods.go:89] "kube-ingress-dns-minikube" [10ccbaa1-333f-4586-a1d5-dc73421e2bd1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0916 10:22:43.166145   12265 system_pods.go:89] "kube-proxy-66flj" [56e16daa-1626-4b83-a183-7b9ad90ea2d6] Running
	I0916 10:22:43.166154   12265 system_pods.go:89] "kube-scheduler-addons-001438" [a9909fcc-06cd-4e4e-b6be-d0a54a31df94] Running
	I0916 10:22:43.166164   12265 system_pods.go:89] "metrics-server-84c5f94fbc-9hj9f" [76382ab7-9b7a-4bd6-b19c-7a77ba051f1d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 10:22:43.166171   12265 system_pods.go:89] "nvidia-device-plugin-daemonset-j6n9b" [83260537-f74d-40a8-bcbc-db785a97aac8] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0916 10:22:43.166178   12265 system_pods.go:89] "registry-66c9cd494c-jq22w" [04e85c00-e6fb-4eee-96aa-273a4f6f273f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 10:22:43.166183   12265 system_pods.go:89] "registry-proxy-kk7lc" [2f0e1170-c654-4939-91ca-cd5b2bf6ae2a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 10:22:43.166199   12265 system_pods.go:89] "snapshot-controller-56fcc65765-8nq94" [7b65ff07-8e47-4c5a-883c-f6470e930f61] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 10:22:43.166207   12265 system_pods.go:89] "snapshot-controller-56fcc65765-pv2sr" [85f5bbdb-96af-4f7d-aef3-644db7166242] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 10:22:43.166217   12265 system_pods.go:89] "storage-provisioner" [c435c6db-b60d-4298-9687-bb885202e358] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0916 10:22:43.166224   12265 system_pods.go:89] "tiller-deploy-b48cc5f79-b76fb" [a96b112c-4171-4416-9e14-ac1f69fd033e] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0916 10:22:43.166231   12265 system_pods.go:126] duration metric: took 29.130167ms to wait for k8s-apps to be running ...
	I0916 10:22:43.166241   12265 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 10:22:43.166284   12265 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:22:43.202957   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:43.204822   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:43.205240   12265 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0916 10:22:43.205259   12265 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0916 10:22:43.339484   12265 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 10:22:43.339511   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0916 10:22:43.533725   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 10:22:43.574829   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:43.701096   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:43.702516   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:44.074326   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:44.199962   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:44.201086   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:44.420432   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.951340242s)
	I0916 10:22:44.420484   12265 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.25416987s)
	I0916 10:22:44.420496   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:44.420512   12265 system_svc.go:56] duration metric: took 1.254267923s WaitForService to wait for kubelet
	I0916 10:22:44.420530   12265 kubeadm.go:582] duration metric: took 12.083973387s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:22:44.420555   12265 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:22:44.420516   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:44.420960   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:44.420998   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:44.421011   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:44.421019   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:44.421041   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:44.421242   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:44.421289   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:44.421306   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:44.432407   12265 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0916 10:22:44.432433   12265 node_conditions.go:123] node cpu capacity is 2
	I0916 10:22:44.432443   12265 node_conditions.go:105] duration metric: took 11.883273ms to run NodePressure ...
	I0916 10:22:44.432454   12265 start.go:241] waiting for startup goroutines ...
	I0916 10:22:44.573423   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:44.701968   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:44.702167   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:45.087788   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:45.175284   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.64151941s)
	I0916 10:22:45.175340   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:45.175356   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:45.175638   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:45.175658   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:45.175667   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:45.175675   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:45.175907   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:45.175959   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:45.175966   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:45.176874   12265 addons.go:475] Verifying addon gcp-auth=true in "addons-001438"
	I0916 10:22:45.179151   12265 out.go:177] * Verifying gcp-auth addon...
	I0916 10:22:45.181042   12265 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0916 10:22:45.204765   12265 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0916 10:22:45.204788   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:45.240576   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:45.244884   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:45.572763   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:45.684678   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:45.699294   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:45.700332   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:46.071926   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:46.184345   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:46.198555   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:46.198584   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:46.572691   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:46.686213   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:46.698404   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:46.699290   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:47.073014   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:47.184892   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:47.199176   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:47.199412   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:47.573319   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:47.685117   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:47.698854   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:47.699042   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:48.080702   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:48.186042   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:48.198652   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:48.198985   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:48.572136   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:48.684922   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:48.698643   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:48.698805   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:49.072263   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:49.186126   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:49.198845   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:49.201291   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:49.571909   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:49.686134   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:49.699485   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:49.699837   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:50.072013   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:50.185475   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:50.198803   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:50.198988   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:50.572410   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:50.684716   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:50.698643   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:50.698842   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:51.072735   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:51.185327   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:51.198402   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:51.198563   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:51.574099   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:51.684301   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:51.698582   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:51.699135   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:52.073280   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:52.184410   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:52.197628   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:52.197951   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:52.573111   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:52.685463   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:52.698350   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:52.698445   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:53.073318   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:53.185032   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:53.198371   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:53.198982   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:53.572652   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:53.684593   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:53.698434   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:53.699099   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:54.071466   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:54.184978   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:54.199125   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:54.199475   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:54.571905   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:54.684904   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:54.699578   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:54.700868   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:55.072026   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:55.186696   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:55.199421   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:55.200454   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:55.811368   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:55.811883   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:55.811882   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:55.812044   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:56.073000   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:56.184284   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:56.197552   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:56.199279   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:56.571945   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:56.684725   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:56.698164   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:56.698871   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:57.078099   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:57.187093   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:57.198266   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:57.198788   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:57.572608   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:57.685182   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:57.698112   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:57.698451   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:58.072438   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:58.184226   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:58.197871   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:58.199176   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:58.573655   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:58.688012   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:58.698890   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:58.699498   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:59.072908   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:59.184255   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:59.197825   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:59.198094   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:59.572578   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:59.685886   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:59.699165   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:59.699539   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:00.072677   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:00.185334   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:00.198436   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:00.199279   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:00.572620   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:00.684676   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:00.698184   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:00.698937   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:01.368315   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:01.368647   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:01.368662   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:01.369057   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:01.577610   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:01.685792   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:01.699073   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:01.700679   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:02.073297   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:02.184780   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:02.198423   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:02.198632   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:02.573860   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:02.688317   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:02.699137   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:02.699189   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:03.073268   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:03.185286   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:03.197706   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:03.199446   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:03.575016   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:03.688681   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:03.697852   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:03.699284   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:04.072561   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:04.184550   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:04.198183   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:04.198692   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:04.573058   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:04.684410   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:04.698448   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:04.699101   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:05.073082   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:05.184610   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:05.198422   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:05.199510   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:05.572901   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:05.685013   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:05.698419   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:05.699052   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:06.072680   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:06.184899   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:06.199400   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:06.199960   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:06.573550   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:06.685328   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:06.698176   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:06.698429   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:07.386744   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:07.389015   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:07.389529   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:07.391740   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:07.572440   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:07.685517   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:07.699276   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:07.699495   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:08.073598   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:08.185305   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:08.198307   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:08.198701   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:08.572936   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:08.685042   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:08.697898   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:08.699045   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:09.073524   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:09.185170   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:09.197444   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:09.198282   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:09.571947   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:09.685269   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:09.700263   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:09.700289   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:10.072367   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:10.184140   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:10.198279   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:10.198501   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:10.571995   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:10.684443   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:10.698621   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:10.699212   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:11.072647   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:11.184997   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:11.198336   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:11.199743   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:11.572138   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:11.684642   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:11.697735   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:11.698012   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:12.072087   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:12.184730   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:12.198825   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:12.199117   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:12.574471   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:12.685221   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:12.697610   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:12.697875   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:13.074276   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:13.184610   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:13.200283   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:13.200511   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:13.572643   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:13.687229   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:13.700375   12265 kapi.go:107] duration metric: took 32.506622173s to wait for kubernetes.io/minikube-addons=registry ...
	I0916 10:23:13.700476   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:14.073345   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:14.185359   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:14.197920   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:14.572573   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:14.714386   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:14.714848   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:15.072480   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:15.184006   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:15.198907   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:15.571536   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:15.686990   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:15.698314   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:16.072850   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:16.397705   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:16.398059   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:16.571699   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:16.687893   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:16.701822   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:17.073078   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:17.185433   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:17.202339   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:17.572915   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:17.684909   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:17.698215   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:18.071875   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:18.185548   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:18.198104   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:18.572180   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:18.684990   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:18.698912   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:19.072105   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:19.184341   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:19.197977   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:19.571740   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:19.685205   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:19.698214   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:20.071811   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:20.184927   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:20.198225   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:20.572184   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:20.684471   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:20.697550   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:21.072526   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:21.185439   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:21.198086   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:21.573843   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:21.684530   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:21.699027   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:22.071583   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:22.185751   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:22.201330   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:22.574078   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:22.688728   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:22.700516   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:23.072848   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:23.184719   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:23.197893   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:23.571975   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:23.684741   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:23.697845   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:24.071885   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:24.199755   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:24.209742   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:24.572903   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:24.684095   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:24.697255   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:25.072405   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:25.185096   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:25.197451   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:25.572250   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:25.685603   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:25.699421   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:26.072277   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:26.184610   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:26.197948   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:26.572954   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:26.684305   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:26.698018   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:27.072121   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:27.186632   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:27.198260   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:27.571710   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:27.685260   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:27.697569   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:28.072712   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:28.185404   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:28.197839   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:28.572506   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:28.685719   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:28.698390   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:29.073440   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:29.185211   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:29.198135   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:29.572871   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:29.684795   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:29.698442   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:30.074307   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:30.184391   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:30.198195   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:30.571684   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:30.686595   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:30.697829   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:31.072882   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:31.184355   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:31.197913   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:31.572796   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:31.685340   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:31.697838   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:32.072358   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:32.185072   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:32.198841   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:32.572260   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:32.685619   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:32.697923   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:33.072329   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:33.184923   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:33.198461   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:33.572531   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:33.684886   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:33.698221   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:34.071922   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:34.184896   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:34.198347   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:34.572508   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:34.685674   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:34.698172   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:35.072040   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:35.184401   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:35.198192   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:35.571685   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:35.684934   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:35.699442   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:36.072917   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:36.184575   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:36.197989   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:36.572782   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:36.685224   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:36.697515   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:37.073347   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:37.184633   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:37.198109   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:37.572239   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:37.684842   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:37.698412   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:38.072639   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:38.184377   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:38.197723   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:38.572964   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:38.684944   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:38.698216   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:39.071865   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:39.184322   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:39.197583   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:39.572728   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:39.685221   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:39.697663   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:40.073346   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:40.184763   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:40.198338   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:40.572748   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:40.688546   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:40.698337   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:41.072528   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:41.184742   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:41.197991   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:41.572832   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:41.685275   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:41.697957   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:42.072948   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:42.185237   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:42.198222   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:42.572150   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:42.685770   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:42.698107   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:43.072508   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:43.184255   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:43.198122   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:43.571791   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:43.685476   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:43.698021   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:44.072455   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:44.184970   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:44.198450   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:44.572653   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:44.685519   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:44.698088   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:45.073394   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:45.184852   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:45.198932   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:45.572905   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:45.685024   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:45.699000   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:46.072804   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:46.185568   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:46.198040   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:46.571961   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:46.684879   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:46.698104   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:47.071779   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:47.184794   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:47.198431   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:47.572786   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:47.685048   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:47.701841   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:48.072550   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:48.184915   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:48.198725   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:48.572850   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:48.684405   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:48.697953   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:49.075719   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:49.185584   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:49.198034   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:49.572642   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:49.685074   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:49.697421   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:50.072216   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:50.184736   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:50.198614   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:50.572675   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:50.685508   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:50.697632   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:51.072878   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:51.185267   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:51.197508   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:51.572653   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:51.684680   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:51.698038   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:52.072225   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:52.184256   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:52.197802   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:52.572573   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:52.685760   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:52.699050   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:53.072698   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:53.185139   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:53.197417   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:53.572526   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:53.684976   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:53.698186   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:54.071987   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:54.184373   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:54.197898   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:54.573326   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:54.685154   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:54.699596   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:55.071975   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:55.184301   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:55.197532   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:55.573068   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:55.684535   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:55.698262   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:56.071830   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:56.185558   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:56.198149   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:56.571905   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:56.684135   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:56.697614   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:57.109030   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:57.216004   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:57.216775   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:57.572732   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:57.684811   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:57.697899   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:58.071691   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:58.184970   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:58.198291   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:58.572185   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:58.685478   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:58.698240   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:59.072727   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:59.185578   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:59.207485   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:59.572098   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:59.684402   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:59.698565   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:00.072447   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:00.192764   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:00.206954   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:00.573224   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:00.685091   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:00.697997   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:01.071906   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:01.184428   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:01.197550   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:01.572498   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:01.685525   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:01.702647   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:02.072504   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:02.185219   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:02.197512   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:02.573858   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:02.685938   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:02.699556   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:03.080160   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:03.188056   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:03.197615   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:03.575213   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:03.685187   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:03.697887   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:04.072585   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:04.185321   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:04.197777   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:04.577876   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:04.685259   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:04.698763   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:05.073356   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:05.184332   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:05.197676   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:05.574632   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:05.705119   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:05.705797   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:06.073702   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:06.190460   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:06.199492   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:06.573521   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:06.685468   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:06.697671   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:07.074427   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:07.211989   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:07.214167   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:07.573479   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:07.684919   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:07.698441   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:08.072769   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:08.184827   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:08.198132   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:08.573401   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:08.685277   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:08.698457   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:09.072421   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:09.184959   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:09.198365   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:09.572446   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:09.685036   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:09.697443   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:10.072489   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:10.185143   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:10.197711   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:10.572704   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:10.685206   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:10.697839   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:11.073656   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:11.185083   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:11.197443   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:11.572739   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:11.685343   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:11.697853   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:12.072697   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:12.185630   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:12.197928   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:12.572344   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:12.684814   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:12.698225   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:13.073324   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:13.185254   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:13.198404   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:13.571987   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:13.684709   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:13.698073   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:14.072174   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:14.184688   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:14.198078   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:14.571798   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:14.685576   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:14.698188   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:15.072810   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:15.184683   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:15.198053   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:15.574408   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:15.684741   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:15.698415   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:16.072047   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:16.185423   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:16.198010   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:16.572968   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:16.684217   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:16.697876   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:17.073276   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:17.185372   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:17.197865   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:17.572327   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:17.684929   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:17.698146   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:18.073068   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:18.185261   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:18.197596   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:18.572959   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:18.684379   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:18.697450   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:19.072646   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:19.184810   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:19.198157   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:19.572098   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:19.684635   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:19.698108   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:20.073055   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:20.185325   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:20.197893   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:20.572951   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:20.684268   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:20.697542   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:21.073300   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:21.184458   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:21.198058   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:21.571882   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:21.684389   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:21.698491   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:22.072769   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:22.185150   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:22.198444   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:22.572557   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:22.686730   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:22.697987   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:23.072389   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:23.184902   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:23.198815   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:23.572090   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:23.684279   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:23.698304   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:24.072655   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:24.185118   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:24.197515   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:24.573029   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:24.684503   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:24.697942   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:25.073161   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:25.185394   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:25.197824   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:25.572789   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:25.685536   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:25.698429   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:26.072248   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:26.184713   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:26.198206   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:26.572681   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:26.685404   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:26.697732   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:27.073033   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:27.186532   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:27.197932   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:27.573166   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:27.684900   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:27.698494   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:28.072840   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:28.185112   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:28.199554   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:28.573533   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:28.685513   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:28.698631   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:29.073563   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:29.184668   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:29.198960   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:29.573373   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:29.684371   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:29.698226   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:30.072380   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:30.184889   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:30.198132   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:30.572427   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:30.685015   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:30.699053   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:31.073225   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:31.185241   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:31.198172   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:31.572019   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:31.685328   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:31.697511   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:32.072382   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:32.185154   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:32.198590   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:32.572333   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:32.688804   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:32.699195   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:33.072971   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:33.184395   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:33.197840   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:33.572457   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:33.684935   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:33.698247   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:34.072201   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:34.184817   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:34.198299   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:34.572603   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:34.684807   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:34.698764   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:35.079460   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:35.184783   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:35.198219   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:35.572155   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:35.684462   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:35.698249   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:36.071889   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:36.185035   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:36.198639   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:36.572607   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:36.684993   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:36.698317   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:37.073167   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:37.187630   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:37.197861   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:37.572959   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:37.684449   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:37.698084   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:38.072598   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:38.184553   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:38.198241   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:38.572543   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:38.685061   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:38.698066   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:39.072888   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:39.184279   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:39.198475   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:39.572908   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:39.684166   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:39.699214   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:40.071396   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:40.185054   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:40.197274   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:40.571831   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:40.683617   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:40.698304   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:41.073753   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:41.184818   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:41.198303   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:41.572754   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:41.685078   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:41.697801   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:42.074144   12265 kapi.go:107] duration metric: took 1m59.00663205s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0916 10:24:42.185287   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:42.197975   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:42.685826   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:42.698484   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:43.185521   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:43.197894   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:43.684695   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:43.698444   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:44.184270   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:44.198072   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:44.686127   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:44.697760   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:45.184583   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:45.197892   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:45.685284   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:45.698273   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:46.184284   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:46.197597   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:46.684852   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:46.698234   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:47.185674   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:47.197778   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:47.684803   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:47.698286   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:48.185195   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:48.197536   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:48.684936   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:48.698202   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:49.185940   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:49.198354   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:49.685628   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:49.698017   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:50.184172   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:50.197513   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:50.684563   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:50.699121   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:51.185458   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:51.197627   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:51.684548   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:51.697728   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:52.184587   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:52.198088   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:52.687284   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:52.697762   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:53.185441   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:53.197777   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:53.684856   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:53.698392   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:54.184966   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:54.198309   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:54.685059   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:54.697818   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:55.184799   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:55.199146   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:55.685287   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:55.697823   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:56.184982   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:56.198778   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:56.684629   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:56.698010   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:57.185306   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:57.197805   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:57.686354   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:57.697789   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:58.184048   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:58.198685   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:58.685283   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:58.697967   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:59.185357   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:59.198462   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:59.685857   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:59.698582   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:00.185027   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:00.199070   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:00.685248   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:00.697584   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:01.444242   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:01.542180   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:01.684941   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:01.698345   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:02.184494   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:02.199673   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:02.686844   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:02.701197   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:03.186108   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:03.200286   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:03.935418   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:03.936940   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:04.185837   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:04.198343   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:04.685229   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:04.697687   12265 kapi.go:107] duration metric: took 2m23.503933898s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0916 10:25:05.184162   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:05.686162   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:06.184784   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:06.685596   12265 kapi.go:107] duration metric: took 2m21.504550895s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0916 10:25:06.687290   12265 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-001438 cluster.
	I0916 10:25:06.688726   12265 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0916 10:25:06.689940   12265 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0916 10:25:06.691195   12265 out.go:177] * Enabled addons: default-storageclass, nvidia-device-plugin, ingress-dns, storage-provisioner, cloud-spanner, metrics-server, inspektor-gadget, helm-tiller, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0916 10:25:06.692654   12265 addons.go:510] duration metric: took 2m34.356008246s for enable addons: enabled=[default-storageclass nvidia-device-plugin ingress-dns storage-provisioner cloud-spanner metrics-server inspektor-gadget helm-tiller yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0916 10:25:06.692692   12265 start.go:246] waiting for cluster config update ...
	I0916 10:25:06.692714   12265 start.go:255] writing updated cluster config ...
	I0916 10:25:06.692960   12265 ssh_runner.go:195] Run: rm -f paused
	I0916 10:25:06.701459   12265 out.go:177] * Done! kubectl is now configured to use "addons-001438" cluster and "default" namespace by default
	E0916 10:25:06.702711   12265 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
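The gcp-auth messages above describe a how-to (opting a pod out of credential injection via the `gcp-auth-skip-secret` label), the kapi.go:96 lines show a label-selector wait loop, and the final error shows a kubectl binary failing with "exec format error". The commands below are an illustrative sketch only, not part of the captured log: the label value "true", the busybox image, and the kube-system namespace are assumptions, while the `kubernetes.io/minikube-addons=csi-hostpath-driver` label and the `addons-001438` context come from the log itself.

    # Illustrative only (not from the log): opt one pod out of GCP credential injection
    # by giving it the gcp-auth-skip-secret label; the value "true" is an assumption.
    kubectl --context addons-001438 run demo --image=busybox --restart=Never \
      --labels=gcp-auth-skip-secret=true -- sleep 3600

    # A rough manual equivalent of the label-selector wait loop logged at kapi.go:96
    # (namespace assumed to be kube-system, where the csi-hostpath pods run).
    kubectl --context addons-001438 -n kube-system wait pod \
      -l kubernetes.io/minikube-addons=csi-hostpath-driver \
      --for=condition=Ready --timeout=6m

    # "exec format error" usually means the binary targets a different CPU architecture
    # than the host; comparing the two is a common first check.
    file /usr/local/bin/kubectl
    uname -m

This sketch only reframes what the log already reports; the actual cause of the kubectl failure would still need to be confirmed on the test host.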
	
	
	==> CRI-O <==
	Sep 16 10:25:20 addons-001438 crio[662]: time="2024-09-16 10:25:20.567077993Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c0c62d19fc341b10ebf89fe58a47a68881bae991a531961d93c1579a9a1948e7,PodSandboxId:81638f0641649ec0787ef703694bd258aa844caa3af58541d00d9715f3de0d35,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726482306176330528,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-jg5wz,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: dbfd65d1-83cc-4b88-b475-c822c7d77c41,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"con
tainerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d9f00ee52087036fff363066f2b9b7bd54f75155cc73bd306533e31b8e233b0,PodSandboxId:f0a70a6b5b4fa0e23e9c8885ae050483438fd567740a20c3b0d6ee2a4c749755,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1726482304086493301,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-jhd4w,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 59d22048-5f6a-4373-a1c6-05
c111450f4a,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:6e189a5a10b1557891725ce2953b87fe642c75e03ad10a0a5f1c13596efe3dd2,PodSandboxId:b1b6b74be962699d277a04b3a408931dda56ff790e89190b3b8c465fc1a1c89d,Metadata:&ContainerMetadata{Name:gadget,Attempt:2,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec,Annotations:map[string]string{},UserSpec
ifiedImage:,RuntimeHandler:,},ImageRef:195d612ae7722fdfec0d582d74fde7db062c1655b60737ceedb14cd627d0d601,State:CONTAINER_EXITED,CreatedAt:1726482281270111445,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-k7c7v,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a,},Annotations:map[string]string{io.kubernetes.container.hash: f1a4d1ab,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4ff4f2e6c350478e5755d91b20d919662c0ffa06fb39e2bd3bf8bb0f703a1d2,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256
:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1726482280953449774,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa45fa1d889cdca320c55e35a415c7211e36ba0837572343c71988deaa5cd3ca,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.i
o/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1726482248718684378,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:112e37da6f1b0a97970a925ea58a6c98930b8e2763205c618d9b497089866b3f,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Ima
ge:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1726482247152202557,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcd9404de3e14cb23faa22a3b3e25edf2e86172ce56a3bd91b341e6072351d08,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Nam
e:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1726482246122565989,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26165c7625a62b7701736de7efb40460b0b4376f8e96e3bb63d74
4179813ade0,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1726482244231428730,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:02338d715f1cc16d179b1e5ae5683a40f5ba6e030f896e49926c1a8ebf578a9e,PodSandboxId:b5f60d9e3d792bb317115b7f4a3ebe60d8e91be6e70805dd73b1e57cee176e13,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726482242679814945,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: gcp-auth-certs-patch-gjpx4,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 25a880f3-4952-4f5d-a5fa-490c826c8645,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:fa823da38eaf3b023a308a43323eeae78e39cda87a9c03608aae3282b32a93fe,PodSandboxId:c3f3178bf135880e26d492ff2056055ecaa931c73a35f123693f14a61da787eb,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726482242556926356,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: gcp-auth-certs-create-pq6gw,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: f3fb58c9-0598-468f-98c4-145e36a676e7,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes
.pod.terminationGracePeriod: 30,},},&Container{Id:35e24c1abefe708668b04ab175e1ce63dcba533760564485612dc5c532078e4c,PodSandboxId:bf02d50932f147d4c08135b62337a685daad58e7ee619fe769c71cf464997dc9,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1726482242446910373,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db26d555-4e0f-4738-bd80-a27dc57d7534,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5edaf3e2dd3d6ea0a0beb93da0237f66936603e62e6799eb9d40f4bfced4fdc,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1726482240462952497,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8ebd2f050729c02f7f87eeb785883cafeb3d9537560352ff7a58f50b5630b1c,PodSandboxId:f375334740e2f500ee463bae436969a7464e2ab3cd1aa52a70e3a8f54f2ea408,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1726482238605647796,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15e8a432-87ee-461f-96ce-576b2587d960,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.
restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d52d2269e100287dd000b3139d6f09e70acad7cbe081d5ba345ffadc774eb64,PodSandboxId:6fe91ac2288fee7b74e48dab9ebacc7e6f4a0864d1442788fa1bb356cd32d99c,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726482237161655062,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-rls9n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 239fb0bb-898b-4d40-a29f-b0f8f4f52620,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54c4347a1fc2bc58f92b11d3bd442850d2707411d3f5b974124a86f641439b9c,PodSandboxId:d66b1317412a725710e3a74d876e0614ef523665d5d35e69bed8cd20d3e4c267,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726482236992762987,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-dk6l8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 117d6d47-b187-4394-8213-f294b01f8f2d,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubern
etes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0bde3324c47da870cb9ea5d5e391100b947a5f5628087ca822f0b968d5d7d10,PodSandboxId:0eef20d1c681337d85c6cbf7dbb64029a954a6306f78cde4d651b31122cd7f87,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726482204191942200,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-pv2sr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85f5bbdb-96af-4f7d-aef3-644db7166242,},Anno
tations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f786c20ceffe35012b49cdfbc150466ebf04c05bab8d9736cd3977b8a655c898,PodSandboxId:ec33782f42717a00b375787866eb14bf4c1a21698df7f4343ab08f7eaff4773a,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726482204058066688,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-8nq94,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 7b65ff07-8e47-4c5a-883c-f6470e930f61,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d997d75b48ee4660710828d3ee8c19e7e3e7c5b64f4d69cba04ce949ed38433e,PodSandboxId:173b48ab2ab7ffb543810df5a983063619ab4a5c57ad2582cb89cca04eabcc03,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1726482196486082416,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-
provisioner-86d989889c-rj67m,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 18c201dd-1709-499b-83f1-7f76075b6c19,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0024bbca27aaca342e5d8f82c4da128027c7976c1e62ed95ae9cf694c9f92bba,PodSandboxId:8bcb0a4a20a5a867b58a2f64346327b7ee4bbc21423c8ef47418372635518a90,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726482194268567519,Labels:map[string]string{io.kubernetes.containe
r.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-9hj9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76382ab7-9b7a-4bd6-b19c-7a77ba051f1d,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2e39517e0d9a4af2709133426021d76416dd00626ef2655806fc4c8947b7fc0,PodSandboxId:b2187ec34496ed99ebd9590db3e3c2f2b16b8d04113461ef6521844e92437cfa,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:08dc5a48792f971b401d3758d4f37fd4af18aa2881668d65fa2c0b3bc61d7af4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38c5e506fa551ba5a1812dff63585e4
4b6c532dd4984b96f90944730f1c6e5c2,State:CONTAINER_EXITED,CreatedAt:1726482192635869199,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-kk7lc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f0e1170-c654-4939-91ca-cd5b2bf6ae2a,},Annotations:map[string]string{io.kubernetes.container.hash: c90bc829,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e13f898473193beaaa81c09bb22096af279dabe70c03270874a90b0b9cc83f62,PodSandboxId:c90a44c7edea8c5d35e974be23b2851515f7b830d58597d0ada22367c338e1ab,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6
ae307fe81092f05ec21038008bc0d6100e52fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5d78bb8f226e8d943746243233f733db4e80a8d6794f6d193b12b811bcb6cd34,State:CONTAINER_RUNNING,CreatedAt:1726482187766689704,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-769b77f747-58ll2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 505d8619-5fc1-4247-af75-f797558c3d45,},Annotations:map[string]string{io.kubernetes.container.hash: 6472789b,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6ccf4995572b004d68271e7028c464c6597a9c783668d69cc9e3293cc70a00e,PodSandboxId:2644add0af9
1f811b5575408a68436daec5077f948a2b37ad150dcfdb846c86a,Metadata:&ContainerMetadata{Name:registry,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/registry@sha256:5e8c7f954d64eb89a98a3f84b6dd1e1f4a9cf3d25e41575dd0a96d3e3363cba7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:75ef5b734af47dc41ff2fb442f287ee08c7da31dddb3759616a8f693f0f346a0,State:CONTAINER_EXITED,CreatedAt:1726482183538960992,Labels:map[string]string{io.kubernetes.container.name: registry,io.kubernetes.pod.name: registry-66c9cd494c-jq22w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04e85c00-e6fb-4eee-96aa-273a4f6f273f,},Annotations:map[string]string{io.kubernetes.container.hash: 49fa49ac,io.kubernetes.container.ports: [{\"containerPort\":5000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0731
c5d88d35f1d8b6c88fee881cced713fd9e6231df44c4f03289b577fa75a,PodSandboxId:4cf262411fb7c78bef294b8304a442f15f122eba8e6330163e0f6001e8b44f4c,Metadata:&ContainerMetadata{Name:tiller,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3f39089e90831c3ef411fe78d2ac642187b617feacacbf72e3f27e28c8dea487,State:CONTAINER_RUNNING,CreatedAt:1726482181618422606,Labels:map[string]string{io.kubernetes.container.name: tiller,io.kubernetes.pod.name: tiller-deploy-b48cc5f79-b76fb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a96b112c-4171-4416-9e14-ac1f69fd033e,},Annotations:map[string]string{io.kubernetes.container.hash: b375e3d3,io.kubernetes.container.ports: [{\"name\":\"tiller\",\"containerPort\":44134,\"protocol\":\"TCP\"},{\"name\":\"http\",\"containerPort\":44135,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8193aad1beb5ba639149913d04736ec496c655637790c2e682d4920170661edc,PodSandboxId:f1a3772ce5f7d59b92df6e29b5a34b5ae5a2567c67f359cff00143e415177028,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1726482169760510821,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10ccbaa1-333f-4586-a1d5-dc73421e2bd1,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.
ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20d2f3360f32056e9d5320e2e27b14e89f34121e47fe3ef6dc68434fd12cbe4e,PodSandboxId:748d363148f6690bdd4347da94f06b750dcd9dfc4c9051ba1fd2b1168589dce8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726482159684718183,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c435c6db-b60d-4298-9687-bb885202e358,},Annotations:
map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63d270cbed8d9138d7dfb6c94fecf8a928065d08296ec4b210284e9c80e343ce,PodSandboxId:42b8586a7b29a8b8056b5615562d2ec92c1cdfbb2b00da19cfca2dc059f9128a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726482158120421871,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-j5ndn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 207f35d6-991e-4f00-8881-a877648e3c38,},Annotations:map[string]string{io.kubernetes
.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60269ac0552c1882216cfc166f51b2cc05e0c33fccab3cf44f6f6ae889b5377c,PodSandboxId:2bf9dc368debd2d2b877480619135ca9093c981fce759082ec24f6592fb1de92,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726482154187294521,Labels:map[string]string{io.kubernetes.container.name: kube-
proxy,io.kubernetes.pod.name: kube-proxy-66flj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56e16daa-1626-4b83-a183-7b9ad90ea2d6,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aabe5cb48f97ae6d5205b827466f0ed18fcf88945331a69a0f69c36fdc69b84,PodSandboxId:f7aeaa11c7f4c7baee6c9befcf5929af3394ee8b7430c47ca172d58476ae75a5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726482142846546597,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-001
438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be5914cb158890ae06c5bfa7a9a1647e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d34a4e3596c2106b54d8a84abfb811c307cb2971374422f5c532a60e0cde3a3,PodSandboxId:8a68216be6dee09413e6f31b3a7e5d5ca7545ffd899368560ace1f4086222cc9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726482142845031916,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-
addons-001438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6b0151eec9d570509290399a84fe5e0,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a4816dc33e761b9fdbc5948f09898492ce9a2dc128c281cf5085d61d7f1b237,PodSandboxId:ec134844260ab148c1de608f261069c5f8c5a6b0d909d7bbb7bff552dad3e112,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726482142832730800,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-001438,io.
kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72c370391c8ab866a62dac47d3033ef8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfff5b2d379857237c75eff7ae9c54b1eaa410285bebed27cc82511c761eff77,PodSandboxId:81f095a38dae111be7d7787f448562b27561a6d73c96775e60b52e2cb77e1d9f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726482142844185577,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-001438,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: c2bdc5cbd50bfb60b2aad0cb9a55828e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9eaaad7b-d49f-4a5c-aec2-4ebd9e4c8ace name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:25:20 addons-001438 crio[662]: time="2024-09-16 10:25:20.568305077Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:nil,LabelSelector:map[string]string{io.kubernetes.pod.uid: 2f0e1170-c654-4939-91ca-cd5b2bf6ae2a,},},}" file="otel-collector/interceptors.go:62" id=f19cdd72-1353-4562-9b9e-c8775c639685 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 16 10:25:20 addons-001438 crio[662]: time="2024-09-16 10:25:20.568452567Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:b2187ec34496ed99ebd9590db3e3c2f2b16b8d04113461ef6521844e92437cfa,Metadata:&PodSandboxMetadata{Name:registry-proxy-kk7lc,Uid:2f0e1170-c654-4939-91ca-cd5b2bf6ae2a,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726482157426319996,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,controller-revision-hash: 5787bf5f6d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: registry-proxy-kk7lc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f0e1170-c654-4939-91ca-cd5b2bf6ae2a,kubernetes.io/minikube-addons: registry,pod-template-generation: 1,registry-proxy: true,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-16T10:22:37.114927879Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=f19cdd72-1353-4562-9b9e-c8775c639685
name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 16 10:25:20 addons-001438 crio[662]: time="2024-09-16 10:25:20.568826004Z" level=debug msg="Request: &PodSandboxStatusRequest{PodSandboxId:b2187ec34496ed99ebd9590db3e3c2f2b16b8d04113461ef6521844e92437cfa,Verbose:false,}" file="otel-collector/interceptors.go:62" id=45c41f01-31f1-4ff6-bd76-a0d521d4acb4 name=/runtime.v1.RuntimeService/PodSandboxStatus
	Sep 16 10:25:20 addons-001438 crio[662]: time="2024-09-16 10:25:20.568938154Z" level=debug msg="Response: &PodSandboxStatusResponse{Status:&PodSandboxStatus{Id:b2187ec34496ed99ebd9590db3e3c2f2b16b8d04113461ef6521844e92437cfa,Metadata:&PodSandboxMetadata{Name:registry-proxy-kk7lc,Uid:2f0e1170-c654-4939-91ca-cd5b2bf6ae2a,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726482157426319996,Network:&PodSandboxNetworkStatus{Ip:10.244.0.7,AdditionalIps:[]*PodIP{},},Linux:&LinuxPodSandboxStatus{Namespaces:&Namespace{Options:&NamespaceOption{Network:POD,Pid:CONTAINER,Ipc:POD,TargetId:,UsernsOptions:nil,},},},Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,controller-revision-hash: 5787bf5f6d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: registry-proxy-kk7lc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f0e1170-c654-4939-91ca-cd5b2bf6ae2a,kubernetes.io/minikube-addons: registry,pod-template-generation: 1,registry-proxy: true,},Annotations:ma
p[string]string{kubernetes.io/config.seen: 2024-09-16T10:22:37.114927879Z,kubernetes.io/config.source: api,},RuntimeHandler:,},Info:map[string]string{},ContainersStatuses:[]*ContainerStatus{},Timestamp:0,}" file="otel-collector/interceptors.go:74" id=45c41f01-31f1-4ff6-bd76-a0d521d4acb4 name=/runtime.v1.RuntimeService/PodSandboxStatus
	Sep 16 10:25:20 addons-001438 crio[662]: time="2024-09-16 10:25:20.569264911Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{io.kubernetes.pod.uid: 2f0e1170-c654-4939-91ca-cd5b2bf6ae2a,},},}" file="otel-collector/interceptors.go:62" id=da975a5c-61d0-4255-b4ac-8693d8b92d1f name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:25:20 addons-001438 crio[662]: time="2024-09-16 10:25:20.569386258Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=da975a5c-61d0-4255-b4ac-8693d8b92d1f name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:25:20 addons-001438 crio[662]: time="2024-09-16 10:25:20.569460660Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a2e39517e0d9a4af2709133426021d76416dd00626ef2655806fc4c8947b7fc0,PodSandboxId:b2187ec34496ed99ebd9590db3e3c2f2b16b8d04113461ef6521844e92437cfa,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:08dc5a48792f971b401d3758d4f37fd4af18aa2881668d65fa2c0b3bc61d7af4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38c5e506fa551ba5a1812dff63585e44b6c532dd4984b96f90944730f1c6e5c2,State:CONTAINER_EXITED,CreatedAt:1726482192635869199,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-kk7lc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f0e1170-c654-4939-91ca-cd5b2bf6ae2a,},Annotations:map[string]string{io.kubernetes.container.hash: c90bc829,io.kubernetes.container.p
orts: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=da975a5c-61d0-4255-b4ac-8693d8b92d1f name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:25:20 addons-001438 crio[662]: time="2024-09-16 10:25:20.569750144Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:a2e39517e0d9a4af2709133426021d76416dd00626ef2655806fc4c8947b7fc0,Verbose:false,}" file="otel-collector/interceptors.go:62" id=28c34a99-88c5-4be3-b6e9-bb784a8723ff name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 16 10:25:20 addons-001438 crio[662]: time="2024-09-16 10:25:20.569868683Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:a2e39517e0d9a4af2709133426021d76416dd00626ef2655806fc4c8947b7fc0,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},State:CONTAINER_EXITED,CreatedAt:1726482192694618175,StartedAt:1726482192726406004,FinishedAt:1726482319496185721,ExitCode:0,Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38c5e506fa551ba5a1812dff63585e44b6c532dd4984b96f90944730f1c6e5c2,Reason:Completed,Message:,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-kk7lc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f0e1170-c654-4939-91ca-cd5b2bf6ae2a,},Annotations:map[string]string{io.kubernetes.container.hash: c90bc829,io.kubernetes.c
ontainer.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/2f0e1170-c654-4939-91ca-cd5b2bf6ae2a/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/2f0e1170-c654-4939-91ca-cd5b2bf6ae2a/containers/registry-proxy/0f13571d,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/2f0e1170-c654-4939-91ca-cd5b2bf6ae2a/volumes/kubernetes.io~projected/kube-api-access-l8b7f,Rea
donly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_registry-proxy-kk7lc_2f0e1170-c654-4939-91ca-cd5b2bf6ae2a/registry-proxy/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:1000,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=28c34a99-88c5-4be3-b6e9-bb784a8723ff name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 16 10:25:20 addons-001438 crio[662]: time="2024-09-16 10:25:20.573250043Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:a2e39517e0d9a4af2709133426021d76416dd00626ef2655806fc4c8947b7fc0,Verbose:false,}" file="otel-collector/interceptors.go:62" id=e7b4f0d3-6c14-457d-a5c6-473e2aac9376 name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 16 10:25:20 addons-001438 crio[662]: time="2024-09-16 10:25:20.573389773Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:a2e39517e0d9a4af2709133426021d76416dd00626ef2655806fc4c8947b7fc0,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},State:CONTAINER_EXITED,CreatedAt:1726482192694618175,StartedAt:1726482192726406004,FinishedAt:1726482319496185721,ExitCode:0,Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38c5e506fa551ba5a1812dff63585e44b6c532dd4984b96f90944730f1c6e5c2,Reason:Completed,Message:,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-kk7lc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f0e1170-c654-4939-91ca-cd5b2bf6ae2a,},Annotations:map[string]string{io.kubernetes.container.hash: c90bc829,io.kubernetes.c
ontainer.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/2f0e1170-c654-4939-91ca-cd5b2bf6ae2a/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/2f0e1170-c654-4939-91ca-cd5b2bf6ae2a/containers/registry-proxy/0f13571d,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/2f0e1170-c654-4939-91ca-cd5b2bf6ae2a/volumes/kubernetes.io~projected/kube-api-access-l8b7f,Rea
donly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_registry-proxy-kk7lc_2f0e1170-c654-4939-91ca-cd5b2bf6ae2a/registry-proxy/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:1000,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=e7b4f0d3-6c14-457d-a5c6-473e2aac9376 name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 16 10:25:20 addons-001438 crio[662]: time="2024-09-16 10:25:20.573520423Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:nil,LabelSelector:map[string]string{io.kubernetes.pod.uid: 04e85c00-e6fb-4eee-96aa-273a4f6f273f,},},}" file="otel-collector/interceptors.go:62" id=de453a8e-0a2e-4eca-a9b5-9f0c8a4939d1 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 16 10:25:20 addons-001438 crio[662]: time="2024-09-16 10:25:20.573598344Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:2644add0af91f811b5575408a68436daec5077f948a2b37ad150dcfdb846c86a,Metadata:&PodSandboxMetadata{Name:registry-66c9cd494c-jq22w,Uid:04e85c00-e6fb-4eee-96aa-273a4f6f273f,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726482157283930071,Labels:map[string]string{actual-registry: true,addonmanager.kubernetes.io/mode: Reconcile,io.kubernetes.container.name: POD,io.kubernetes.pod.name: registry-66c9cd494c-jq22w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04e85c00-e6fb-4eee-96aa-273a4f6f273f,kubernetes.io/minikube-addons: registry,pod-template-hash: 66c9cd494c,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-16T10:22:36.970993913Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=de453a8e-0a2e-4eca-a9b5-9f0c8a4939d1 name=/runtime.v1.Runtim
eService/ListPodSandbox
	Sep 16 10:25:20 addons-001438 crio[662]: time="2024-09-16 10:25:20.574129674Z" level=debug msg="Request: &PodSandboxStatusRequest{PodSandboxId:2644add0af91f811b5575408a68436daec5077f948a2b37ad150dcfdb846c86a,Verbose:false,}" file="otel-collector/interceptors.go:62" id=ee16cdcd-2da8-49fc-8e3f-5da91c88419c name=/runtime.v1.RuntimeService/PodSandboxStatus
	Sep 16 10:25:20 addons-001438 crio[662]: time="2024-09-16 10:25:20.574290199Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:a2e39517e0d9a4af2709133426021d76416dd00626ef2655806fc4c8947b7fc0,Verbose:false,}" file="otel-collector/interceptors.go:62" id=23b2b89f-623d-446c-beeb-d15c27380a84 name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 16 10:25:20 addons-001438 crio[662]: time="2024-09-16 10:25:20.574753003Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:a2e39517e0d9a4af2709133426021d76416dd00626ef2655806fc4c8947b7fc0,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},State:CONTAINER_EXITED,CreatedAt:1726482192694618175,StartedAt:1726482192726406004,FinishedAt:1726482319496185721,ExitCode:0,Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38c5e506fa551ba5a1812dff63585e44b6c532dd4984b96f90944730f1c6e5c2,Reason:Completed,Message:,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-kk7lc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f0e1170-c654-4939-91ca-cd5b2bf6ae2a,},Annotations:map[string]string{io.kubernetes.container.hash: c90bc829,io.kubernetes.c
ontainer.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/2f0e1170-c654-4939-91ca-cd5b2bf6ae2a/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/2f0e1170-c654-4939-91ca-cd5b2bf6ae2a/containers/registry-proxy/0f13571d,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/2f0e1170-c654-4939-91ca-cd5b2bf6ae2a/volumes/kubernetes.io~projected/kube-api-access-l8b7f,Rea
donly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_registry-proxy-kk7lc_2f0e1170-c654-4939-91ca-cd5b2bf6ae2a/registry-proxy/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:1000,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=23b2b89f-623d-446c-beeb-d15c27380a84 name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 16 10:25:20 addons-001438 crio[662]: time="2024-09-16 10:25:20.575030983Z" level=debug msg="Response: &PodSandboxStatusResponse{Status:&PodSandboxStatus{Id:2644add0af91f811b5575408a68436daec5077f948a2b37ad150dcfdb846c86a,Metadata:&PodSandboxMetadata{Name:registry-66c9cd494c-jq22w,Uid:04e85c00-e6fb-4eee-96aa-273a4f6f273f,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726482157283930071,Network:&PodSandboxNetworkStatus{Ip:10.244.0.6,AdditionalIps:[]*PodIP{},},Linux:&LinuxPodSandboxStatus{Namespaces:&Namespace{Options:&NamespaceOption{Network:POD,Pid:CONTAINER,Ipc:POD,TargetId:,UsernsOptions:nil,},},},Labels:map[string]string{actual-registry: true,addonmanager.kubernetes.io/mode: Reconcile,io.kubernetes.container.name: POD,io.kubernetes.pod.name: registry-66c9cd494c-jq22w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04e85c00-e6fb-4eee-96aa-273a4f6f273f,kubernetes.io/minikube-addons: registry,pod-template-hash: 66c9cd494c,},Annotations:map[string]string{kuberne
tes.io/config.seen: 2024-09-16T10:22:36.970993913Z,kubernetes.io/config.source: api,},RuntimeHandler:,},Info:map[string]string{},ContainersStatuses:[]*ContainerStatus{},Timestamp:0,}" file="otel-collector/interceptors.go:74" id=ee16cdcd-2da8-49fc-8e3f-5da91c88419c name=/runtime.v1.RuntimeService/PodSandboxStatus
	Sep 16 10:25:20 addons-001438 crio[662]: time="2024-09-16 10:25:20.576412938Z" level=debug msg="Request: &RemoveContainerRequest{ContainerId:a2e39517e0d9a4af2709133426021d76416dd00626ef2655806fc4c8947b7fc0,}" file="otel-collector/interceptors.go:62" id=26b8d16c-ef31-4a2a-9600-ad8a4a438128 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 16 10:25:20 addons-001438 crio[662]: time="2024-09-16 10:25:20.576628460Z" level=info msg="Removing container: a2e39517e0d9a4af2709133426021d76416dd00626ef2655806fc4c8947b7fc0" file="server/container_remove.go:24" id=26b8d16c-ef31-4a2a-9600-ad8a4a438128 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 16 10:25:20 addons-001438 crio[662]: time="2024-09-16 10:25:20.576421392Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{io.kubernetes.pod.uid: 04e85c00-e6fb-4eee-96aa-273a4f6f273f,},},}" file="otel-collector/interceptors.go:62" id=b9c69f6b-1ba5-4d5f-8975-4c53340bef2a name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:25:20 addons-001438 crio[662]: time="2024-09-16 10:25:20.577223489Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b9c69f6b-1ba5-4d5f-8975-4c53340bef2a name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:25:20 addons-001438 crio[662]: time="2024-09-16 10:25:20.577429844Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b6ccf4995572b004d68271e7028c464c6597a9c783668d69cc9e3293cc70a00e,PodSandboxId:2644add0af91f811b5575408a68436daec5077f948a2b37ad150dcfdb846c86a,Metadata:&ContainerMetadata{Name:registry,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/registry@sha256:5e8c7f954d64eb89a98a3f84b6dd1e1f4a9cf3d25e41575dd0a96d3e3363cba7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:75ef5b734af47dc41ff2fb442f287ee08c7da31dddb3759616a8f693f0f346a0,State:CONTAINER_EXITED,CreatedAt:1726482183538960992,Labels:map[string]string{io.kubernetes.container.name: registry,io.kubernetes.pod.name: registry-66c9cd494c-jq22w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04e85c00-e6fb-4eee-96aa-273a4f6f273f,},Annotations:map[string]string{io.kubernetes.container.hash: 49fa49ac,io.kubernetes.container.ports: [{\"containerP
ort\":5000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b9c69f6b-1ba5-4d5f-8975-4c53340bef2a name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:25:20 addons-001438 crio[662]: time="2024-09-16 10:25:20.578782144Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:b6ccf4995572b004d68271e7028c464c6597a9c783668d69cc9e3293cc70a00e,Verbose:false,}" file="otel-collector/interceptors.go:62" id=37a84bd4-8603-4b9b-ab28-3049c1391598 name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 16 10:25:20 addons-001438 crio[662]: time="2024-09-16 10:25:20.579090612Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:b6ccf4995572b004d68271e7028c464c6597a9c783668d69cc9e3293cc70a00e,Metadata:&ContainerMetadata{Name:registry,Attempt:0,},State:CONTAINER_EXITED,CreatedAt:1726482183600017958,StartedAt:1726482183623496645,FinishedAt:1726482319475184890,ExitCode:2,Image:&ImageSpec{Image:docker.io/library/registry@sha256:ac0192b549007e22998eb74e8d8488dcfe70f1489520c3b144a6047ac5efbe90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:75ef5b734af47dc41ff2fb442f287ee08c7da31dddb3759616a8f693f0f346a0,Reason:Error,Message:,Labels:map[string]string{io.kubernetes.container.name: registry,io.kubernetes.pod.name: registry-66c9cd494c-jq22w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04e85c00-e6fb-4eee-96aa-273a4f6f273f,},Annotations:map[string]string{io.kubernetes.container.hash: 49fa49ac,io.kubernetes.container.ports: [{\"cont
ainerPort\":5000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/04e85c00-e6fb-4eee-96aa-273a4f6f273f/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/04e85c00-e6fb-4eee-96aa-273a4f6f273f/containers/registry/f92f6a39,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/04e85c00-e6fb-4eee-96aa-273a4f6f273f/volumes/kubernetes.io~projected/kube-api-access-j8v8q,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidM
appings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_registry-66c9cd494c-jq22w_04e85c00-e6fb-4eee-96aa-273a4f6f273f/registry/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:1000,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=37a84bd4-8603-4b9b-ab28-3049c1391598 name=/runtime.v1.RuntimeService/ContainerStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	c0c62d19fc341       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                                 14 seconds ago       Running             gcp-auth                                 0                   81638f0641649       gcp-auth-89d5ffd79-jg5wz
	4d9f00ee52087       registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6                             16 seconds ago       Running             controller                               0                   f0a70a6b5b4fa       ingress-nginx-controller-bc57996ff-jhd4w
	6e189a5a10b15       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec                            39 seconds ago       Exited              gadget                                   2                   b1b6b74be9626       gadget-k7c7v
	a4ff4f2e6c350       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          39 seconds ago       Running             csi-snapshotter                          0                   2a34d4424d09e       csi-hostpathplugin-xgk62
	fa45fa1d889cd       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          About a minute ago   Running             csi-provisioner                          0                   2a34d4424d09e       csi-hostpathplugin-xgk62
	112e37da6f1b0       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            About a minute ago   Running             liveness-probe                           0                   2a34d4424d09e       csi-hostpathplugin-xgk62
	bcd9404de3e14       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           About a minute ago   Running             hostpath                                 0                   2a34d4424d09e       csi-hostpathplugin-xgk62
	26165c7625a62       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                About a minute ago   Running             node-driver-registrar                    0                   2a34d4424d09e       csi-hostpathplugin-xgk62
	02338d715f1cc       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012                   About a minute ago   Exited              patch                                    0                   b5f60d9e3d792       gcp-auth-certs-patch-gjpx4
	fa823da38eaf3       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012                   About a minute ago   Exited              create                                   0                   c3f3178bf1358       gcp-auth-certs-create-pq6gw
	35e24c1abefe7       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              About a minute ago   Running             csi-resizer                              0                   bf02d50932f14       csi-hostpath-resizer-0
	a5edaf3e2dd3d       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   About a minute ago   Running             csi-external-health-monitor-controller   0                   2a34d4424d09e       csi-hostpathplugin-xgk62
	b8ebd2f050729       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             About a minute ago   Running             csi-attacher                             0                   f375334740e2f       csi-hostpath-attacher-0
	0d52d2269e100       ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242                                                                             About a minute ago   Exited              patch                                    1                   6fe91ac2288fe       ingress-nginx-admission-patch-rls9n
	54c4347a1fc2b       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012                   About a minute ago   Exited              create                                   0                   d66b1317412a7       ingress-nginx-admission-create-dk6l8
	f0bde3324c47d       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      About a minute ago   Running             volume-snapshot-controller               0                   0eef20d1c6813       snapshot-controller-56fcc65765-pv2sr
	f786c20ceffe3       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      About a minute ago   Running             volume-snapshot-controller               0                   ec33782f42717       snapshot-controller-56fcc65765-8nq94
	d997d75b48ee4       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             2 minutes ago        Running             local-path-provisioner                   0                   173b48ab2ab7f       local-path-provisioner-86d989889c-rj67m
	0024bbca27aac       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a                        2 minutes ago        Running             metrics-server                           0                   8bcb0a4a20a5a       metrics-server-84c5f94fbc-9hj9f
	e13f898473193       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc                               2 minutes ago        Running             cloud-spanner-emulator                   0                   c90a44c7edea8       cloud-spanner-emulator-769b77f747-58ll2
	a0731c5d88d35       ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f                                                  2 minutes ago        Running             tiller                                   0                   4cf262411fb7c       tiller-deploy-b48cc5f79-b76fb
	8193aad1beb5b       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab                             2 minutes ago        Running             minikube-ingress-dns                     0                   f1a3772ce5f7d       kube-ingress-dns-minikube
	20d2f3360f320       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             2 minutes ago        Running             storage-provisioner                      0                   748d363148f66       storage-provisioner
	63d270cbed8d9       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                                             2 minutes ago        Running             coredns                                  0                   42b8586a7b29a       coredns-7c65d6cfc9-j5ndn
	60269ac0552c1       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                                             2 minutes ago        Running             kube-proxy                               0                   2bf9dc368debd       kube-proxy-66flj
	1aabe5cb48f97       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                                             2 minutes ago        Running             etcd                                     0                   f7aeaa11c7f4c       etcd-addons-001438
	2d34a4e3596c2       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                                             2 minutes ago        Running             kube-controller-manager                  0                   8a68216be6dee       kube-controller-manager-addons-001438
	bfff5b2d37985       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                                             2 minutes ago        Running             kube-apiserver                           0                   81f095a38dae1       kube-apiserver-addons-001438
	5a4816dc33e76       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                                             2 minutes ago        Running             kube-scheduler                           0                   ec134844260ab       kube-scheduler-addons-001438
	
	
	==> coredns [63d270cbed8d9138d7dfb6c94fecf8a928065d08296ec4b210284e9c80e343ce] <==
	[INFO] 127.0.0.1:32820 - 49588 "HINFO IN 5683833228926934535.5808779734602365342. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.027869673s
	[INFO] 10.244.0.7:47242 - 15842 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000350783s
	[INFO] 10.244.0.7:47242 - 29412 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000155576s
	[INFO] 10.244.0.7:51495 - 23321 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000115255s
	[INFO] 10.244.0.7:51495 - 47135 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000085371s
	[INFO] 10.244.0.7:40689 - 10301 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000114089s
	[INFO] 10.244.0.7:40689 - 30779 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00011843s
	[INFO] 10.244.0.7:53526 - 19539 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000127604s
	[INFO] 10.244.0.7:53526 - 34381 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000109337s
	[INFO] 10.244.0.7:39182 - 43658 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000075802s
	[INFO] 10.244.0.7:39182 - 55433 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000031766s
	[INFO] 10.244.0.7:52628 - 35000 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000037386s
	[INFO] 10.244.0.7:52628 - 44218 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000027585s
	[INFO] 10.244.0.7:47656 - 61837 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000028204s
	[INFO] 10.244.0.7:47656 - 39571 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000027731s
	[INFO] 10.244.0.7:53964 - 36235 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000098663s
	[INFO] 10.244.0.7:53964 - 55690 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000045022s
	[INFO] 10.244.0.22:49146 - 11336 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000543634s
	[INFO] 10.244.0.22:44900 - 51750 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000125434s
	[INFO] 10.244.0.22:47266 - 27362 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000158517s
	[INFO] 10.244.0.22:53077 - 63050 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000068888s
	[INFO] 10.244.0.22:52796 - 34381 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000101059s
	[INFO] 10.244.0.22:52167 - 15594 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000126468s
	[INFO] 10.244.0.22:42107 - 54869 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.004149176s
	[INFO] 10.244.0.22:60865 - 20616 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.006078154s
	
	
	==> describe nodes <==
	Name:               addons-001438
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-001438
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=addons-001438
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_22_28_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-001438
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-001438"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:22:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-001438
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:25:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:25:01 +0000   Mon, 16 Sep 2024 10:22:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:25:01 +0000   Mon, 16 Sep 2024 10:22:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:25:01 +0000   Mon, 16 Sep 2024 10:22:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:25:01 +0000   Mon, 16 Sep 2024 10:22:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.72
	  Hostname:    addons-001438
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 8b69a913a20a4259950d0bf801229c28
	  System UUID:                8b69a913-a20a-4259-950d-0bf801229c28
	  Boot ID:                    7d21de27-dd4e-4002-9fc0-df14a0ff761f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (22 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-769b77f747-58ll2     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m44s
	  gadget                      gadget-k7c7v                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m40s
	  gcp-auth                    gcp-auth-89d5ffd79-jg5wz                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m35s
	  headlamp                    headlamp-57fb76fcdb-cqlgq                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-jhd4w    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         2m40s
	  kube-system                 coredns-7c65d6cfc9-j5ndn                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m47s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 csi-hostpathplugin-xgk62                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 etcd-addons-001438                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m53s
	  kube-system                 kube-apiserver-addons-001438                250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m53s
	  kube-system                 kube-controller-manager-addons-001438       200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m53s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m44s
	  kube-system                 kube-proxy-66flj                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m48s
	  kube-system                 kube-scheduler-addons-001438                100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m53s
	  kube-system                 metrics-server-84c5f94fbc-9hj9f             100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         2m42s
	  kube-system                 snapshot-controller-56fcc65765-8nq94        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m41s
	  kube-system                 snapshot-controller-56fcc65765-pv2sr        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m41s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m43s
	  kube-system                 tiller-deploy-b48cc5f79-b76fb               0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m44s
	  local-path-storage          local-path-provisioner-86d989889c-rj67m     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m41s
	  yakd-dashboard              yakd-dashboard-67d98fc6b-jnpkm              0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     2m41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             588Mi (15%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 2m44s  kube-proxy       
	  Normal  Starting                 2m53s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m53s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m52s  kubelet          Node addons-001438 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m52s  kubelet          Node addons-001438 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m52s  kubelet          Node addons-001438 status is now: NodeHasSufficientPID
	  Normal  NodeReady                2m51s  kubelet          Node addons-001438 status is now: NodeReady
	  Normal  RegisteredNode           2m48s  node-controller  Node addons-001438 event: Registered Node addons-001438 in Controller
	
	
	==> dmesg <==
	[  +0.058324] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060369] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.175342] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.116289] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.270363] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +4.002627] systemd-fstab-generator[744]: Ignoring "noauto" option for root device
	[  +4.196359] systemd-fstab-generator[862]: Ignoring "noauto" option for root device
	[  +0.061696] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.999876] systemd-fstab-generator[1193]: Ignoring "noauto" option for root device
	[  +0.091472] kauditd_printk_skb: 69 callbacks suppressed
	[  +4.774952] systemd-fstab-generator[1325]: Ignoring "noauto" option for root device
	[  +1.497885] kauditd_printk_skb: 43 callbacks suppressed
	[  +5.466780] kauditd_printk_skb: 125 callbacks suppressed
	[  +5.018877] kauditd_printk_skb: 136 callbacks suppressed
	[  +5.254117] kauditd_printk_skb: 38 callbacks suppressed
	[Sep16 10:23] kauditd_printk_skb: 9 callbacks suppressed
	[ +17.876932] kauditd_printk_skb: 7 callbacks suppressed
	[ +33.888489] kauditd_printk_skb: 37 callbacks suppressed
	[Sep16 10:24] kauditd_printk_skb: 34 callbacks suppressed
	[  +6.263650] kauditd_printk_skb: 76 callbacks suppressed
	[ +48.109785] kauditd_printk_skb: 33 callbacks suppressed
	[Sep16 10:25] kauditd_printk_skb: 3 callbacks suppressed
	[  +7.297596] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.818881] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.121137] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [1aabe5cb48f97ae6d5205b827466f0ed18fcf88945331a69a0f69c36fdc69b84] <==
	{"level":"info","ts":"2024-09-16T10:23:26.559766Z","caller":"traceutil/trace.go:171","msg":"trace[846326006] transaction","detail":"{read_only:false; response_revision:990; number_of_response:1; }","duration":"127.967786ms","start":"2024-09-16T10:23:26.431776Z","end":"2024-09-16T10:23:26.559744Z","steps":["trace[846326006] 'process raft request'  (duration: 127.810752ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:23:57.094852Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"116.563649ms","expected-duration":"100ms","prefix":"","request":"header:<ID:17902813448179803153 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/ingress-nginx/ingress-nginx-admission-create-dk6l8.17f5b2727c8db6b7\" mod_revision:0 > success:<request_put:<key:\"/registry/events/ingress-nginx/ingress-nginx-admission-create-dk6l8.17f5b2727c8db6b7\" value_size:871 lease:8679441411325026982 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-09-16T10:23:57.095039Z","caller":"traceutil/trace.go:171","msg":"trace[1740724643] transaction","detail":"{read_only:false; response_revision:1040; number_of_response:1; }","duration":"132.752806ms","start":"2024-09-16T10:23:56.962265Z","end":"2024-09-16T10:23:57.095018Z","steps":["trace[1740724643] 'process raft request'  (duration: 15.756926ms)","trace[1740724643] 'compare'  (duration: 116.28099ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-16T10:24:38.003314Z","caller":"traceutil/trace.go:171","msg":"trace[1663412122] transaction","detail":"{read_only:false; response_revision:1199; number_of_response:1; }","duration":"156.644895ms","start":"2024-09-16T10:24:37.846648Z","end":"2024-09-16T10:24:38.003293Z","steps":["trace[1663412122] 'process raft request'  (duration: 156.521883ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:25:01.421875Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"345.383861ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-09-16T10:25:01.421953Z","caller":"traceutil/trace.go:171","msg":"trace[402931173] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1248; }","duration":"345.518312ms","start":"2024-09-16T10:25:01.076421Z","end":"2024-09-16T10:25:01.421939Z","steps":["trace[402931173] 'range keys from in-memory index tree'  (duration: 345.280419ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:25:01.421990Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-16T10:25:01.076386Z","time spent":"345.594163ms","remote":"127.0.0.1:51374","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1136,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"warn","ts":"2024-09-16T10:25:01.422158Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"190.250548ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T10:25:01.422198Z","caller":"traceutil/trace.go:171","msg":"trace[2105848494] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1248; }","duration":"190.301041ms","start":"2024-09-16T10:25:01.231889Z","end":"2024-09-16T10:25:01.422190Z","steps":["trace[2105848494] 'range keys from in-memory index tree'  (duration: 190.24488ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:25:01.423722Z","caller":"traceutil/trace.go:171","msg":"trace[1526018823] transaction","detail":"{read_only:false; response_revision:1249; number_of_response:1; }","duration":"284.258855ms","start":"2024-09-16T10:25:01.139452Z","end":"2024-09-16T10:25:01.423711Z","steps":["trace[1526018823] 'process raft request'  (duration: 284.165558ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:25:01.424593Z","caller":"traceutil/trace.go:171","msg":"trace[1620023283] linearizableReadLoop","detail":"{readStateIndex:1296; appliedIndex:1296; }","duration":"253.838283ms","start":"2024-09-16T10:25:01.170745Z","end":"2024-09-16T10:25:01.424583Z","steps":["trace[1620023283] 'read index received'  (duration: 253.835456ms)","trace[1620023283] 'applied index is now lower than readState.Index'  (duration: 2.263µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-16T10:25:01.424681Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"253.948565ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T10:25:01.424719Z","caller":"traceutil/trace.go:171","msg":"trace[1658095100] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1249; }","duration":"253.992891ms","start":"2024-09-16T10:25:01.170719Z","end":"2024-09-16T10:25:01.424712Z","steps":["trace[1658095100] 'agreement among raft nodes before linearized reading'  (duration: 253.933158ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:25:01.430878Z","caller":"traceutil/trace.go:171","msg":"trace[196824448] transaction","detail":"{read_only:false; response_revision:1250; number_of_response:1; }","duration":"219.615242ms","start":"2024-09-16T10:25:01.211190Z","end":"2024-09-16T10:25:01.430805Z","steps":["trace[196824448] 'process raft request'  (duration: 217.799649ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:25:01.432286Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"248.218738ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T10:25:01.432549Z","caller":"traceutil/trace.go:171","msg":"trace[1250515915] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1250; }","duration":"248.433899ms","start":"2024-09-16T10:25:01.183901Z","end":"2024-09-16T10:25:01.432335Z","steps":["trace[1250515915] 'agreement among raft nodes before linearized reading'  (duration: 246.789324ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:25:03.917472Z","caller":"traceutil/trace.go:171","msg":"trace[1132617141] linearizableReadLoop","detail":"{readStateIndex:1302; appliedIndex:1301; }","duration":"256.411132ms","start":"2024-09-16T10:25:03.661047Z","end":"2024-09-16T10:25:03.917458Z","steps":["trace[1132617141] 'read index received'  (duration: 256.216658ms)","trace[1132617141] 'applied index is now lower than readState.Index'  (duration: 193.873µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-16T10:25:03.917646Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"256.564415ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshots/\" range_end:\"/registry/snapshot.storage.k8s.io/volumesnapshots0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T10:25:03.917689Z","caller":"traceutil/trace.go:171","msg":"trace[1681803764] range","detail":"{range_begin:/registry/snapshot.storage.k8s.io/volumesnapshots/; range_end:/registry/snapshot.storage.k8s.io/volumesnapshots0; response_count:0; response_revision:1254; }","duration":"256.635309ms","start":"2024-09-16T10:25:03.661043Z","end":"2024-09-16T10:25:03.917678Z","steps":["trace[1681803764] 'agreement among raft nodes before linearized reading'  (duration: 256.524591ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:25:03.917698Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"246.498369ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T10:25:03.917721Z","caller":"traceutil/trace.go:171","msg":"trace[320039730] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1254; }","duration":"246.52737ms","start":"2024-09-16T10:25:03.671187Z","end":"2024-09-16T10:25:03.917715Z","steps":["trace[320039730] 'agreement among raft nodes before linearized reading'  (duration: 246.484981ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:25:03.917824Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"234.395252ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T10:25:03.917834Z","caller":"traceutil/trace.go:171","msg":"trace[699037525] transaction","detail":"{read_only:false; response_revision:1254; number_of_response:1; }","duration":"461.96825ms","start":"2024-09-16T10:25:03.455860Z","end":"2024-09-16T10:25:03.917828Z","steps":["trace[699037525] 'process raft request'  (duration: 461.454179ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:25:03.917838Z","caller":"traceutil/trace.go:171","msg":"trace[618256897] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1254; }","duration":"234.40851ms","start":"2024-09-16T10:25:03.683425Z","end":"2024-09-16T10:25:03.917833Z","steps":["trace[618256897] 'agreement among raft nodes before linearized reading'  (duration: 234.386479ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:25:03.917919Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-16T10:25:03.455845Z","time spent":"462.003063ms","remote":"127.0.0.1:51374","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1251 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	
	
	==> gcp-auth [c0c62d19fc341b10ebf89fe58a47a68881bae991a531961d93c1579a9a1948e7] <==
	2024/09/16 10:25:06 GCP Auth Webhook started!
	2024/09/16 10:25:09 Ready to marshal response ...
	2024/09/16 10:25:09 Ready to write response ...
	2024/09/16 10:25:09 Ready to marshal response ...
	2024/09/16 10:25:09 Ready to write response ...
	2024/09/16 10:25:09 Ready to marshal response ...
	2024/09/16 10:25:09 Ready to write response ...
	
	
	==> kernel <==
	 10:25:20 up 3 min,  0 users,  load average: 1.27, 1.07, 0.47
	Linux addons-001438 5.10.207 #1 SMP Sun Sep 15 20:39:46 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [bfff5b2d379857237c75eff7ae9c54b1eaa410285bebed27cc82511c761eff77] <==
	I0916 10:22:40.795031       1 alloc.go:330] "allocated clusterIPs" service="ingress-nginx/ingress-nginx-controller" clusterIPs={"IPv4":"10.108.13.142"}
	I0916 10:22:40.844880       1 alloc.go:330] "allocated clusterIPs" service="ingress-nginx/ingress-nginx-controller-admission" clusterIPs={"IPv4":"10.102.39.17"}
	I0916 10:22:40.932409       1 controller.go:615] quota admission added evaluator for: jobs.batch
	I0916 10:22:42.426039       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-attacher" clusterIPs={"IPv4":"10.106.146.100"}
	I0916 10:22:42.456409       1 controller.go:615] quota admission added evaluator for: statefulsets.apps
	I0916 10:22:42.660969       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.110.102.193"}
	I0916 10:22:44.945009       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.106.134.141"}
	W0916 10:23:38.948410       1 handler_proxy.go:99] no RequestInfo found in the context
	E0916 10:23:38.948711       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0916 10:23:38.949896       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0916 10:23:38.958493       1 handler_proxy.go:99] no RequestInfo found in the context
	E0916 10:23:38.958543       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0916 10:23:38.959752       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0916 10:24:18.395108       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.30.150:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.99.30.150:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.99.30.150:443: connect: connection refused" logger="UnhandledError"
	W0916 10:24:18.395300       1 handler_proxy.go:99] no RequestInfo found in the context
	E0916 10:24:18.395442       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0916 10:24:18.398244       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.30.150:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.99.30.150:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.99.30.150:443: connect: connection refused" logger="UnhandledError"
	I0916 10:24:18.453414       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0916 10:25:09.633337       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.110.80.80"}
	
	
	==> kube-controller-manager [2d34a4e3596c2106b54d8a84abfb811c307cb2971374422f5c532a60e0cde3a3] <==
	I0916 10:24:05.734327       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0916 10:24:06.991945       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-001438"
	I0916 10:24:07.859857       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-67d98fc6b" duration="53.401µs"
	I0916 10:24:18.387052       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="10.892497ms"
	I0916 10:24:18.387430       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="104.805µs"
	I0916 10:24:30.697430       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-001438"
	I0916 10:24:35.019698       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0916 10:24:35.020136       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0916 10:24:35.086219       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0916 10:24:35.087588       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0916 10:24:53.864819       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-67d98fc6b" duration="57.647µs"
	I0916 10:25:01.430275       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-001438"
	I0916 10:25:04.459017       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="97.149µs"
	I0916 10:25:06.488269       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="13.118642ms"
	I0916 10:25:06.489287       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="42.711µs"
	I0916 10:25:07.863123       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-67d98fc6b" duration="72.138µs"
	I0916 10:25:09.687063       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="25.765664ms"
	E0916 10:25:09.687144       1 replica_set.go:560] "Unhandled Error" err="sync \"headlamp/headlamp-57fb76fcdb\" failed with pods \"headlamp-57fb76fcdb-\" is forbidden: error looking up service account headlamp/headlamp: serviceaccount \"headlamp\" not found" logger="UnhandledError"
	I0916 10:25:09.731163       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="42.235103ms"
	I0916 10:25:09.753608       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="22.282725ms"
	I0916 10:25:09.753862       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="122.927µs"
	I0916 10:25:09.762905       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="42.16µs"
	I0916 10:25:16.878158       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="16.26286ms"
	I0916 10:25:16.878254       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="50.754µs"
	I0916 10:25:19.390322       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="3.132µs"
	
	
	==> kube-proxy [60269ac0552c1882216cfc166f51b2cc05e0c33fccab3cf44f6f6ae889b5377c] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0916 10:22:35.282699       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0916 10:22:35.409784       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.72"]
	E0916 10:22:35.409847       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:22:36.135283       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0916 10:22:36.135476       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0916 10:22:36.135545       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:22:36.146626       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:22:36.146849       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:22:36.146861       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:22:36.156579       1 config.go:199] "Starting service config controller"
	I0916 10:22:36.156604       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:22:36.166809       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:22:36.166838       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:22:36.168180       1 config.go:328] "Starting node config controller"
	I0916 10:22:36.168189       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:22:36.258515       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:22:36.268518       1 shared_informer.go:320] Caches are synced for node config
	I0916 10:22:36.268639       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [5a4816dc33e761b9fdbc5948f09898492ce9a2dc128c281cf5085d61d7f1b237] <==
	W0916 10:22:25.363221       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 10:22:25.363254       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:22:25.363389       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0916 10:22:25.363420       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 10:22:25.363573       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0916 10:22:25.363425       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:22:25.363533       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 10:22:25.363941       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:22:26.174422       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 10:22:26.174473       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:22:26.225213       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 10:22:26.225308       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:22:26.333904       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 10:22:26.333957       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:22:26.350221       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 10:22:26.350326       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:22:26.406843       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 10:22:26.406982       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:22:26.446248       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 10:22:26.446395       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:22:26.547116       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 10:22:26.547206       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:22:26.704254       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 10:22:26.704303       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0916 10:22:28.953769       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 10:25:13 addons-001438 kubelet[1200]: I0916 10:25:13.839564    1200 scope.go:117] "RemoveContainer" containerID="6e189a5a10b1557891725ce2953b87fe642c75e03ad10a0a5f1c13596efe3dd2"
	Sep 16 10:25:15 addons-001438 kubelet[1200]: I0916 10:25:15.669729    1200 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mfpv\" (UniqueName: \"kubernetes.io/projected/83260537-f74d-40a8-bcbc-db785a97aac8-kube-api-access-9mfpv\") pod \"83260537-f74d-40a8-bcbc-db785a97aac8\" (UID: \"83260537-f74d-40a8-bcbc-db785a97aac8\") "
	Sep 16 10:25:15 addons-001438 kubelet[1200]: I0916 10:25:15.669809    1200 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"device-plugin\" (UniqueName: \"kubernetes.io/host-path/83260537-f74d-40a8-bcbc-db785a97aac8-device-plugin\") pod \"83260537-f74d-40a8-bcbc-db785a97aac8\" (UID: \"83260537-f74d-40a8-bcbc-db785a97aac8\") "
	Sep 16 10:25:15 addons-001438 kubelet[1200]: I0916 10:25:15.669931    1200 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83260537-f74d-40a8-bcbc-db785a97aac8-device-plugin" (OuterVolumeSpecName: "device-plugin") pod "83260537-f74d-40a8-bcbc-db785a97aac8" (UID: "83260537-f74d-40a8-bcbc-db785a97aac8"). InnerVolumeSpecName "device-plugin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 16 10:25:15 addons-001438 kubelet[1200]: I0916 10:25:15.675147    1200 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83260537-f74d-40a8-bcbc-db785a97aac8-kube-api-access-9mfpv" (OuterVolumeSpecName: "kube-api-access-9mfpv") pod "83260537-f74d-40a8-bcbc-db785a97aac8" (UID: "83260537-f74d-40a8-bcbc-db785a97aac8"). InnerVolumeSpecName "kube-api-access-9mfpv". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 16 10:25:15 addons-001438 kubelet[1200]: I0916 10:25:15.770897    1200 reconciler_common.go:288] "Volume detached for volume \"device-plugin\" (UniqueName: \"kubernetes.io/host-path/83260537-f74d-40a8-bcbc-db785a97aac8-device-plugin\") on node \"addons-001438\" DevicePath \"\""
	Sep 16 10:25:15 addons-001438 kubelet[1200]: I0916 10:25:15.770930    1200 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-9mfpv\" (UniqueName: \"kubernetes.io/projected/83260537-f74d-40a8-bcbc-db785a97aac8-kube-api-access-9mfpv\") on node \"addons-001438\" DevicePath \"\""
	Sep 16 10:25:16 addons-001438 kubelet[1200]: I0916 10:25:16.536303    1200 scope.go:117] "RemoveContainer" containerID="21539455d03c85dda881bc87a221870000776d83ad4059dce0995011c28d10a2"
	Sep 16 10:25:17 addons-001438 kubelet[1200]: I0916 10:25:17.847129    1200 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83260537-f74d-40a8-bcbc-db785a97aac8" path="/var/lib/kubelet/pods/83260537-f74d-40a8-bcbc-db785a97aac8/volumes"
	Sep 16 10:25:18 addons-001438 kubelet[1200]: E0916 10:25:18.122053    1200 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482318121551354,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:445057,},InodesUsed:&UInt64Value{Value:161,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:25:18 addons-001438 kubelet[1200]: E0916 10:25:18.122095    1200 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482318121551354,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:445057,},InodesUsed:&UInt64Value{Value:161,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:25:19 addons-001438 kubelet[1200]: I0916 10:25:19.904826    1200 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8v8q\" (UniqueName: \"kubernetes.io/projected/04e85c00-e6fb-4eee-96aa-273a4f6f273f-kube-api-access-j8v8q\") pod \"04e85c00-e6fb-4eee-96aa-273a4f6f273f\" (UID: \"04e85c00-e6fb-4eee-96aa-273a4f6f273f\") "
	Sep 16 10:25:19 addons-001438 kubelet[1200]: I0916 10:25:19.909225    1200 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04e85c00-e6fb-4eee-96aa-273a4f6f273f-kube-api-access-j8v8q" (OuterVolumeSpecName: "kube-api-access-j8v8q") pod "04e85c00-e6fb-4eee-96aa-273a4f6f273f" (UID: "04e85c00-e6fb-4eee-96aa-273a4f6f273f"). InnerVolumeSpecName "kube-api-access-j8v8q". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 16 10:25:20 addons-001438 kubelet[1200]: I0916 10:25:20.006210    1200 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8b7f\" (UniqueName: \"kubernetes.io/projected/2f0e1170-c654-4939-91ca-cd5b2bf6ae2a-kube-api-access-l8b7f\") pod \"2f0e1170-c654-4939-91ca-cd5b2bf6ae2a\" (UID: \"2f0e1170-c654-4939-91ca-cd5b2bf6ae2a\") "
	Sep 16 10:25:20 addons-001438 kubelet[1200]: I0916 10:25:20.006419    1200 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-j8v8q\" (UniqueName: \"kubernetes.io/projected/04e85c00-e6fb-4eee-96aa-273a4f6f273f-kube-api-access-j8v8q\") on node \"addons-001438\" DevicePath \"\""
	Sep 16 10:25:20 addons-001438 kubelet[1200]: I0916 10:25:20.008821    1200 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f0e1170-c654-4939-91ca-cd5b2bf6ae2a-kube-api-access-l8b7f" (OuterVolumeSpecName: "kube-api-access-l8b7f") pod "2f0e1170-c654-4939-91ca-cd5b2bf6ae2a" (UID: "2f0e1170-c654-4939-91ca-cd5b2bf6ae2a"). InnerVolumeSpecName "kube-api-access-l8b7f". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 16 10:25:20 addons-001438 kubelet[1200]: I0916 10:25:20.107007    1200 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-l8b7f\" (UniqueName: \"kubernetes.io/projected/2f0e1170-c654-4939-91ca-cd5b2bf6ae2a-kube-api-access-l8b7f\") on node \"addons-001438\" DevicePath \"\""
	Sep 16 10:25:20 addons-001438 kubelet[1200]: I0916 10:25:20.570176    1200 scope.go:117] "RemoveContainer" containerID="a2e39517e0d9a4af2709133426021d76416dd00626ef2655806fc4c8947b7fc0"
	Sep 16 10:25:20 addons-001438 kubelet[1200]: I0916 10:25:20.637620    1200 scope.go:117] "RemoveContainer" containerID="a2e39517e0d9a4af2709133426021d76416dd00626ef2655806fc4c8947b7fc0"
	Sep 16 10:25:20 addons-001438 kubelet[1200]: E0916 10:25:20.638633    1200 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2e39517e0d9a4af2709133426021d76416dd00626ef2655806fc4c8947b7fc0\": container with ID starting with a2e39517e0d9a4af2709133426021d76416dd00626ef2655806fc4c8947b7fc0 not found: ID does not exist" containerID="a2e39517e0d9a4af2709133426021d76416dd00626ef2655806fc4c8947b7fc0"
	Sep 16 10:25:20 addons-001438 kubelet[1200]: I0916 10:25:20.638883    1200 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2e39517e0d9a4af2709133426021d76416dd00626ef2655806fc4c8947b7fc0"} err="failed to get container status \"a2e39517e0d9a4af2709133426021d76416dd00626ef2655806fc4c8947b7fc0\": rpc error: code = NotFound desc = could not find container \"a2e39517e0d9a4af2709133426021d76416dd00626ef2655806fc4c8947b7fc0\": container with ID starting with a2e39517e0d9a4af2709133426021d76416dd00626ef2655806fc4c8947b7fc0 not found: ID does not exist"
	Sep 16 10:25:20 addons-001438 kubelet[1200]: I0916 10:25:20.639327    1200 scope.go:117] "RemoveContainer" containerID="b6ccf4995572b004d68271e7028c464c6597a9c783668d69cc9e3293cc70a00e"
	Sep 16 10:25:20 addons-001438 kubelet[1200]: I0916 10:25:20.657776    1200 scope.go:117] "RemoveContainer" containerID="b6ccf4995572b004d68271e7028c464c6597a9c783668d69cc9e3293cc70a00e"
	Sep 16 10:25:20 addons-001438 kubelet[1200]: E0916 10:25:20.658563    1200 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6ccf4995572b004d68271e7028c464c6597a9c783668d69cc9e3293cc70a00e\": container with ID starting with b6ccf4995572b004d68271e7028c464c6597a9c783668d69cc9e3293cc70a00e not found: ID does not exist" containerID="b6ccf4995572b004d68271e7028c464c6597a9c783668d69cc9e3293cc70a00e"
	Sep 16 10:25:20 addons-001438 kubelet[1200]: I0916 10:25:20.658607    1200 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6ccf4995572b004d68271e7028c464c6597a9c783668d69cc9e3293cc70a00e"} err="failed to get container status \"b6ccf4995572b004d68271e7028c464c6597a9c783668d69cc9e3293cc70a00e\": rpc error: code = NotFound desc = could not find container \"b6ccf4995572b004d68271e7028c464c6597a9c783668d69cc9e3293cc70a00e\": container with ID starting with b6ccf4995572b004d68271e7028c464c6597a9c783668d69cc9e3293cc70a00e not found: ID does not exist"
	
	
	==> storage-provisioner [20d2f3360f32056e9d5320e2e27b14e89f34121e47fe3ef6dc68434fd12cbe4e] <==
	I0916 10:22:41.307950       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 10:22:41.369058       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 10:22:41.369154       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 10:22:41.391597       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 10:22:41.391782       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-001438_7c863089-ab00-4bae-802b-9e04d7461e0b!
	I0916 10:22:41.394290       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"97b3cde4-08a8-47d7-a9cc-7251679ab4d1", APIVersion:"v1", ResourceVersion:"712", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-001438_7c863089-ab00-4bae-802b-9e04d7461e0b became leader
	I0916 10:22:41.492688       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-001438_7c863089-ab00-4bae-802b-9e04d7461e0b!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-001438 -n addons-001438
helpers_test.go:261: (dbg) Run:  kubectl --context addons-001438 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context addons-001438 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (336.816µs)
helpers_test.go:263: kubectl --context addons-001438 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestAddons/parallel/Registry (12.86s)
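Note: both kubectl invocations in this entry die with "fork/exec /usr/local/bin/kubectl: exec format error", which on Linux usually means the kubectl binary on the runner was built for a different architecture than the host (or is truncated), not that the cluster itself is unhealthy. Below is a minimal diagnostic sketch in Go, not part of the minikube test suite, that compares a binary's ELF machine type against the architecture the program runs on; the path and the GOARCH-to-ELF mapping are illustrative assumptions.

package main

import (
	"debug/elf"
	"fmt"
	"os"
	"runtime"
)

func main() {
	// Path taken from the failure message above; adjust as needed.
	path := "/usr/local/bin/kubectl"

	f, err := elf.Open(path)
	if err != nil {
		fmt.Fprintf(os.Stderr, "not a readable ELF binary: %v\n", err)
		os.Exit(1)
	}
	defer f.Close()

	// Expected ELF machine type for the architecture this checker runs on
	// (only a few common values mapped here, purely for illustration).
	want := map[string]elf.Machine{
		"amd64": elf.EM_X86_64,
		"arm64": elf.EM_AARCH64,
		"386":   elf.EM_386,
	}[runtime.GOARCH]

	fmt.Printf("host arch %s, binary machine %s\n", runtime.GOARCH, f.Machine)
	if f.Machine != want {
		fmt.Println("architecture mismatch: exec of this binary would fail with 'exec format error'")
	}
}

On the runner itself, "file /usr/local/bin/kubectl" together with "uname -m" answers the same question without any code.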

                                                
                                    
x
+
TestAddons/parallel/Ingress (2.03s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-001438 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:209: (dbg) Non-zero exit: kubectl --context addons-001438 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: fork/exec /usr/local/bin/kubectl: exec format error (322.395µs)
addons_test.go:210: failed waiting for ingress-nginx-controller : fork/exec /usr/local/bin/kubectl: exec format error
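Note: the wait step never reached the cluster, for the same reason as in the Registry entry above (the kubectl binary could not be exec'd). For reference, what "kubectl wait --for=condition=ready" checks for each selected pod is the PodReady condition; the following is a hedged client-go sketch of that check against the same namespace, label selector, and kubeconfig context as the failing command (a one-shot list rather than the watch kubectl actually uses, purely illustrative and not part of the test suite).

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Resolve the kubeconfig context used by the failing command.
	cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
		clientcmd.NewDefaultClientConfigLoadingRules(),
		&clientcmd.ConfigOverrides{CurrentContext: "addons-001438"},
	).ClientConfig()
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Same namespace and selector as the kubectl wait invocation above.
	pods, err := cs.CoreV1().Pods("ingress-nginx").List(context.Background(),
		metav1.ListOptions{LabelSelector: "app.kubernetes.io/component=controller"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		fmt.Printf("%s ready=%v\n", p.Name, ready)
	}
}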
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-001438 -n addons-001438
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-001438 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-001438 logs -n 25: (1.347785111s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-931581 | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC |                     |
	|         | -p download-only-931581              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC | 16 Sep 24 10:21 UTC |
	| delete  | -p download-only-931581              | download-only-931581 | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC | 16 Sep 24 10:21 UTC |
	| start   | -o=json --download-only              | download-only-573915 | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC |                     |
	|         | -p download-only-573915              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC | 16 Sep 24 10:21 UTC |
	| delete  | -p download-only-573915              | download-only-573915 | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC | 16 Sep 24 10:21 UTC |
	| delete  | -p download-only-931581              | download-only-931581 | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC | 16 Sep 24 10:21 UTC |
	| delete  | -p download-only-573915              | download-only-573915 | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC | 16 Sep 24 10:21 UTC |
	| start   | --download-only -p                   | binary-mirror-928489 | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC |                     |
	|         | binary-mirror-928489                 |                      |         |         |                     |                     |
	|         | --alsologtostderr                    |                      |         |         |                     |                     |
	|         | --binary-mirror                      |                      |         |         |                     |                     |
	|         | http://127.0.0.1:42715               |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-928489              | binary-mirror-928489 | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC | 16 Sep 24 10:21 UTC |
	| addons  | enable dashboard -p                  | addons-001438        | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC |                     |
	|         | addons-001438                        |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-001438        | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC |                     |
	|         | addons-001438                        |                      |         |         |                     |                     |
	| start   | -p addons-001438 --wait=true         | addons-001438        | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC | 16 Sep 24 10:25 UTC |
	|         | --memory=4000 --alsologtostderr      |                      |         |         |                     |                     |
	|         | --addons=registry                    |                      |         |         |                     |                     |
	|         | --addons=metrics-server              |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	|         | --addons=ingress                     |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                 |                      |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-001438        | jenkins | v1.34.0 | 16 Sep 24 10:25 UTC | 16 Sep 24 10:25 UTC |
	|         | -p addons-001438                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin         | addons-001438        | jenkins | v1.34.0 | 16 Sep 24 10:25 UTC | 16 Sep 24 10:25 UTC |
	|         | -p addons-001438                     |                      |         |         |                     |                     |
	| ip      | addons-001438 ip                     | addons-001438        | jenkins | v1.34.0 | 16 Sep 24 10:25 UTC | 16 Sep 24 10:25 UTC |
	| addons  | addons-001438 addons disable         | addons-001438        | jenkins | v1.34.0 | 16 Sep 24 10:25 UTC | 16 Sep 24 10:25 UTC |
	|         | registry --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | addons-001438 addons disable         | addons-001438        | jenkins | v1.34.0 | 16 Sep 24 10:25 UTC | 16 Sep 24 10:25 UTC |
	|         | headlamp --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | addons-001438 addons disable         | addons-001438        | jenkins | v1.34.0 | 16 Sep 24 10:26 UTC | 16 Sep 24 10:27 UTC |
	|         | helm-tiller --alsologtostderr        |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-001438        | jenkins | v1.34.0 | 16 Sep 24 10:27 UTC |                     |
	|         | addons-001438                        |                      |         |         |                     |                     |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:21:42
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:21:42.990297   12265 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:21:42.990427   12265 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:21:42.990438   12265 out.go:358] Setting ErrFile to fd 2...
	I0916 10:21:42.990444   12265 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:21:42.990619   12265 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3851/.minikube/bin
	I0916 10:21:42.991237   12265 out.go:352] Setting JSON to false
	I0916 10:21:42.992075   12265 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":253,"bootTime":1726481850,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:21:42.992165   12265 start.go:139] virtualization: kvm guest
	I0916 10:21:42.994057   12265 out.go:177] * [addons-001438] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:21:42.995363   12265 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:21:42.995366   12265 notify.go:220] Checking for updates...
	I0916 10:21:42.996620   12265 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:21:42.997884   12265 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3851/kubeconfig
	I0916 10:21:42.999244   12265 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 10:21:43.000448   12265 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:21:43.001744   12265 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:21:43.003140   12265 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:21:43.035292   12265 out.go:177] * Using the kvm2 driver based on user configuration
	I0916 10:21:43.036591   12265 start.go:297] selected driver: kvm2
	I0916 10:21:43.036604   12265 start.go:901] validating driver "kvm2" against <nil>
	I0916 10:21:43.036617   12265 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:21:43.037618   12265 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:21:43.037687   12265 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19651-3851/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0916 10:21:43.052612   12265 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0916 10:21:43.052654   12265 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 10:21:43.052880   12265 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:21:43.052910   12265 cni.go:84] Creating CNI manager for ""
	I0916 10:21:43.052948   12265 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0916 10:21:43.052956   12265 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0916 10:21:43.053000   12265 start.go:340] cluster config:
	{Name:addons-001438 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-001438 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:21:43.053089   12265 iso.go:125] acquiring lock: {Name:mk8165b793e44378487d96b0de120a258f46e187 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:21:43.054779   12265 out.go:177] * Starting "addons-001438" primary control-plane node in "addons-001438" cluster
	I0916 10:21:43.056048   12265 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:21:43.056073   12265 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0916 10:21:43.056099   12265 cache.go:56] Caching tarball of preloaded images
	I0916 10:21:43.056171   12265 preload.go:172] Found /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 10:21:43.056181   12265 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 10:21:43.056464   12265 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/config.json ...
	I0916 10:21:43.056479   12265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/config.json: {Name:mke7feffe145119f1110e818375562c2195d4fa4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:21:43.056601   12265 start.go:360] acquireMachinesLock for addons-001438: {Name:mk413037138532b69f77e412c3960796c09316f1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:21:43.056638   12265 start.go:364] duration metric: took 25.099µs to acquireMachinesLock for "addons-001438"
	I0916 10:21:43.056653   12265 start.go:93] Provisioning new machine with config: &{Name:addons-001438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:addons-001438 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 10:21:43.056703   12265 start.go:125] createHost starting for "" (driver="kvm2")
	I0916 10:21:43.058226   12265 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0916 10:21:43.058340   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:21:43.058376   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:21:43.072993   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45045
	I0916 10:21:43.073475   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:21:43.073995   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:21:43.074020   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:21:43.074422   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:21:43.074620   12265 main.go:141] libmachine: (addons-001438) Calling .GetMachineName
	I0916 10:21:43.074787   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:21:43.074946   12265 start.go:159] libmachine.API.Create for "addons-001438" (driver="kvm2")
	I0916 10:21:43.074989   12265 client.go:168] LocalClient.Create starting
	I0916 10:21:43.075021   12265 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem
	I0916 10:21:43.311518   12265 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem
	I0916 10:21:43.475888   12265 main.go:141] libmachine: Running pre-create checks...
	I0916 10:21:43.475917   12265 main.go:141] libmachine: (addons-001438) Calling .PreCreateCheck
	I0916 10:21:43.476396   12265 main.go:141] libmachine: (addons-001438) Calling .GetConfigRaw
	I0916 10:21:43.476796   12265 main.go:141] libmachine: Creating machine...
	I0916 10:21:43.476809   12265 main.go:141] libmachine: (addons-001438) Calling .Create
	I0916 10:21:43.476954   12265 main.go:141] libmachine: (addons-001438) Creating KVM machine...
	I0916 10:21:43.478137   12265 main.go:141] libmachine: (addons-001438) DBG | found existing default KVM network
	I0916 10:21:43.478893   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:43.478751   12287 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001151f0}
	I0916 10:21:43.478937   12265 main.go:141] libmachine: (addons-001438) DBG | created network xml: 
	I0916 10:21:43.478958   12265 main.go:141] libmachine: (addons-001438) DBG | <network>
	I0916 10:21:43.478967   12265 main.go:141] libmachine: (addons-001438) DBG |   <name>mk-addons-001438</name>
	I0916 10:21:43.478974   12265 main.go:141] libmachine: (addons-001438) DBG |   <dns enable='no'/>
	I0916 10:21:43.478986   12265 main.go:141] libmachine: (addons-001438) DBG |   
	I0916 10:21:43.478998   12265 main.go:141] libmachine: (addons-001438) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0916 10:21:43.479006   12265 main.go:141] libmachine: (addons-001438) DBG |     <dhcp>
	I0916 10:21:43.479018   12265 main.go:141] libmachine: (addons-001438) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0916 10:21:43.479026   12265 main.go:141] libmachine: (addons-001438) DBG |     </dhcp>
	I0916 10:21:43.479036   12265 main.go:141] libmachine: (addons-001438) DBG |   </ip>
	I0916 10:21:43.479087   12265 main.go:141] libmachine: (addons-001438) DBG |   
	I0916 10:21:43.479109   12265 main.go:141] libmachine: (addons-001438) DBG | </network>
	I0916 10:21:43.479150   12265 main.go:141] libmachine: (addons-001438) DBG | 
	I0916 10:21:43.484546   12265 main.go:141] libmachine: (addons-001438) DBG | trying to create private KVM network mk-addons-001438 192.168.39.0/24...
	I0916 10:21:43.547822   12265 main.go:141] libmachine: (addons-001438) DBG | private KVM network mk-addons-001438 192.168.39.0/24 created
	I0916 10:21:43.547845   12265 main.go:141] libmachine: (addons-001438) Setting up store path in /home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438 ...
	I0916 10:21:43.547862   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:43.547813   12287 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 10:21:43.547875   12265 main.go:141] libmachine: (addons-001438) Building disk image from file:///home/jenkins/minikube-integration/19651-3851/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso
	I0916 10:21:43.547936   12265 main.go:141] libmachine: (addons-001438) Downloading /home/jenkins/minikube-integration/19651-3851/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19651-3851/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso...
	I0916 10:21:43.797047   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:43.796916   12287 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa...
	I0916 10:21:43.906021   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:43.905909   12287 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/addons-001438.rawdisk...
	I0916 10:21:43.906051   12265 main.go:141] libmachine: (addons-001438) DBG | Writing magic tar header
	I0916 10:21:43.906060   12265 main.go:141] libmachine: (addons-001438) DBG | Writing SSH key tar header
	I0916 10:21:43.906067   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:43.906027   12287 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438 ...
	I0916 10:21:43.906123   12265 main.go:141] libmachine: (addons-001438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438
	I0916 10:21:43.906172   12265 main.go:141] libmachine: (addons-001438) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438 (perms=drwx------)
	I0916 10:21:43.906194   12265 main.go:141] libmachine: (addons-001438) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851/.minikube/machines (perms=drwxr-xr-x)
	I0916 10:21:43.906204   12265 main.go:141] libmachine: (addons-001438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851/.minikube/machines
	I0916 10:21:43.906222   12265 main.go:141] libmachine: (addons-001438) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851/.minikube (perms=drwxr-xr-x)
	I0916 10:21:43.906230   12265 main.go:141] libmachine: (addons-001438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 10:21:43.906236   12265 main.go:141] libmachine: (addons-001438) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851 (perms=drwxrwxr-x)
	I0916 10:21:43.906243   12265 main.go:141] libmachine: (addons-001438) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0916 10:21:43.906248   12265 main.go:141] libmachine: (addons-001438) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0916 10:21:43.906258   12265 main.go:141] libmachine: (addons-001438) Creating domain...
	I0916 10:21:43.906264   12265 main.go:141] libmachine: (addons-001438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851
	I0916 10:21:43.906275   12265 main.go:141] libmachine: (addons-001438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0916 10:21:43.906309   12265 main.go:141] libmachine: (addons-001438) DBG | Checking permissions on dir: /home/jenkins
	I0916 10:21:43.906325   12265 main.go:141] libmachine: (addons-001438) DBG | Checking permissions on dir: /home
	I0916 10:21:43.906338   12265 main.go:141] libmachine: (addons-001438) DBG | Skipping /home - not owner
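Editor's note: the permission pass above walks each parent directory of the machine store, sets the execute bit where the CI user owns the directory, and skips /home because the runner does not own it. The following is a minimal, editor-added Go sketch of that pattern; fixPermissions, the starting path, and the error handling are assumptions for illustration and are not minikube's actual code.

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"syscall"
)

// fixPermissions walks from the machine directory up toward the root and
// ensures each directory owned by the current user is traversable, skipping
// anything not owned by us (as the "Skipping /home - not owner" line shows).
func fixPermissions(start string) error {
	for dir := start; dir != "/" && dir != "."; dir = filepath.Dir(dir) {
		info, err := os.Stat(dir)
		if err != nil {
			return err
		}
		st, ok := info.Sys().(*syscall.Stat_t)
		if !ok || int(st.Uid) != os.Getuid() {
			fmt.Printf("Skipping %s - not owner\n", dir)
			continue
		}
		// Add the owner execute bit, mirroring the drwx------ / drwxr-xr-x
		// modes reported in the log.
		if err := os.Chmod(dir, info.Mode().Perm()|0o100); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	if err := fixPermissions("/home/jenkins/minikube-integration"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}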
	I0916 10:21:43.907204   12265 main.go:141] libmachine: (addons-001438) define libvirt domain using xml: 
	I0916 10:21:43.907223   12265 main.go:141] libmachine: (addons-001438) <domain type='kvm'>
	I0916 10:21:43.907235   12265 main.go:141] libmachine: (addons-001438)   <name>addons-001438</name>
	I0916 10:21:43.907246   12265 main.go:141] libmachine: (addons-001438)   <memory unit='MiB'>4000</memory>
	I0916 10:21:43.907255   12265 main.go:141] libmachine: (addons-001438)   <vcpu>2</vcpu>
	I0916 10:21:43.907265   12265 main.go:141] libmachine: (addons-001438)   <features>
	I0916 10:21:43.907274   12265 main.go:141] libmachine: (addons-001438)     <acpi/>
	I0916 10:21:43.907282   12265 main.go:141] libmachine: (addons-001438)     <apic/>
	I0916 10:21:43.907294   12265 main.go:141] libmachine: (addons-001438)     <pae/>
	I0916 10:21:43.907307   12265 main.go:141] libmachine: (addons-001438)     
	I0916 10:21:43.907318   12265 main.go:141] libmachine: (addons-001438)   </features>
	I0916 10:21:43.907327   12265 main.go:141] libmachine: (addons-001438)   <cpu mode='host-passthrough'>
	I0916 10:21:43.907337   12265 main.go:141] libmachine: (addons-001438)   
	I0916 10:21:43.907349   12265 main.go:141] libmachine: (addons-001438)   </cpu>
	I0916 10:21:43.907364   12265 main.go:141] libmachine: (addons-001438)   <os>
	I0916 10:21:43.907373   12265 main.go:141] libmachine: (addons-001438)     <type>hvm</type>
	I0916 10:21:43.907383   12265 main.go:141] libmachine: (addons-001438)     <boot dev='cdrom'/>
	I0916 10:21:43.907392   12265 main.go:141] libmachine: (addons-001438)     <boot dev='hd'/>
	I0916 10:21:43.907402   12265 main.go:141] libmachine: (addons-001438)     <bootmenu enable='no'/>
	I0916 10:21:43.907415   12265 main.go:141] libmachine: (addons-001438)   </os>
	I0916 10:21:43.907427   12265 main.go:141] libmachine: (addons-001438)   <devices>
	I0916 10:21:43.907435   12265 main.go:141] libmachine: (addons-001438)     <disk type='file' device='cdrom'>
	I0916 10:21:43.907452   12265 main.go:141] libmachine: (addons-001438)       <source file='/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/boot2docker.iso'/>
	I0916 10:21:43.907463   12265 main.go:141] libmachine: (addons-001438)       <target dev='hdc' bus='scsi'/>
	I0916 10:21:43.907489   12265 main.go:141] libmachine: (addons-001438)       <readonly/>
	I0916 10:21:43.907508   12265 main.go:141] libmachine: (addons-001438)     </disk>
	I0916 10:21:43.907518   12265 main.go:141] libmachine: (addons-001438)     <disk type='file' device='disk'>
	I0916 10:21:43.907531   12265 main.go:141] libmachine: (addons-001438)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0916 10:21:43.907547   12265 main.go:141] libmachine: (addons-001438)       <source file='/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/addons-001438.rawdisk'/>
	I0916 10:21:43.907558   12265 main.go:141] libmachine: (addons-001438)       <target dev='hda' bus='virtio'/>
	I0916 10:21:43.907568   12265 main.go:141] libmachine: (addons-001438)     </disk>
	I0916 10:21:43.907583   12265 main.go:141] libmachine: (addons-001438)     <interface type='network'>
	I0916 10:21:43.907595   12265 main.go:141] libmachine: (addons-001438)       <source network='mk-addons-001438'/>
	I0916 10:21:43.907606   12265 main.go:141] libmachine: (addons-001438)       <model type='virtio'/>
	I0916 10:21:43.907616   12265 main.go:141] libmachine: (addons-001438)     </interface>
	I0916 10:21:43.907624   12265 main.go:141] libmachine: (addons-001438)     <interface type='network'>
	I0916 10:21:43.907634   12265 main.go:141] libmachine: (addons-001438)       <source network='default'/>
	I0916 10:21:43.907645   12265 main.go:141] libmachine: (addons-001438)       <model type='virtio'/>
	I0916 10:21:43.907667   12265 main.go:141] libmachine: (addons-001438)     </interface>
	I0916 10:21:43.907687   12265 main.go:141] libmachine: (addons-001438)     <serial type='pty'>
	I0916 10:21:43.907697   12265 main.go:141] libmachine: (addons-001438)       <target port='0'/>
	I0916 10:21:43.907706   12265 main.go:141] libmachine: (addons-001438)     </serial>
	I0916 10:21:43.907717   12265 main.go:141] libmachine: (addons-001438)     <console type='pty'>
	I0916 10:21:43.907735   12265 main.go:141] libmachine: (addons-001438)       <target type='serial' port='0'/>
	I0916 10:21:43.907745   12265 main.go:141] libmachine: (addons-001438)     </console>
	I0916 10:21:43.907758   12265 main.go:141] libmachine: (addons-001438)     <rng model='virtio'>
	I0916 10:21:43.907772   12265 main.go:141] libmachine: (addons-001438)       <backend model='random'>/dev/random</backend>
	I0916 10:21:43.907777   12265 main.go:141] libmachine: (addons-001438)     </rng>
	I0916 10:21:43.907785   12265 main.go:141] libmachine: (addons-001438)     
	I0916 10:21:43.907794   12265 main.go:141] libmachine: (addons-001438)     
	I0916 10:21:43.907804   12265 main.go:141] libmachine: (addons-001438)   </devices>
	I0916 10:21:43.907814   12265 main.go:141] libmachine: (addons-001438) </domain>
	I0916 10:21:43.907826   12265 main.go:141] libmachine: (addons-001438) 
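Editor's note: the block above is the libvirt domain definition the kvm2 driver hands to libvirt. Below is a small, editor-added sketch of rendering such a definition with Go's text/template; the struct fields, the template text, and the placeholder paths are illustrative assumptions, not minikube's actual template.

package main

import (
	"os"
	"text/template"
)

type domainSpec struct {
	Name     string
	MemoryMB int
	VCPUs    int
	ISOPath  string
	DiskPath string
	Network  string
}

const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMB}}</memory>
  <vcpu>{{.VCPUs}}</vcpu>
  <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
  <devices>
    <disk type='file' device='cdrom'><source file='{{.ISOPath}}'/><target dev='hdc' bus='scsi'/><readonly/></disk>
    <disk type='file' device='disk'><driver name='qemu' type='raw'/><source file='{{.DiskPath}}'/><target dev='hda' bus='virtio'/></disk>
    <interface type='network'><source network='{{.Network}}'/><model type='virtio'/></interface>
  </devices>
</domain>
`

func main() {
	spec := domainSpec{
		Name:     "addons-001438",
		MemoryMB: 4000,
		VCPUs:    2,
		ISOPath:  "/path/to/boot2docker.iso",       // placeholder path
		DiskPath: "/path/to/addons-001438.rawdisk", // placeholder path
		Network:  "mk-addons-001438",
	}
	// Render the XML to stdout; a real driver would pass the rendered string
	// to libvirt's domain-define call instead of printing it.
	tmpl := template.Must(template.New("domain").Parse(domainTmpl))
	if err := tmpl.Execute(os.Stdout, spec); err != nil {
		panic(err)
	}
}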
	I0916 10:21:43.913322   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:98:e7:17 in network default
	I0916 10:21:43.913924   12265 main.go:141] libmachine: (addons-001438) Ensuring networks are active...
	I0916 10:21:43.913942   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:43.914588   12265 main.go:141] libmachine: (addons-001438) Ensuring network default is active
	I0916 10:21:43.914879   12265 main.go:141] libmachine: (addons-001438) Ensuring network mk-addons-001438 is active
	I0916 10:21:43.915337   12265 main.go:141] libmachine: (addons-001438) Getting domain xml...
	I0916 10:21:43.915987   12265 main.go:141] libmachine: (addons-001438) Creating domain...
	I0916 10:21:45.289678   12265 main.go:141] libmachine: (addons-001438) Waiting to get IP...
	I0916 10:21:45.290387   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:45.290811   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:45.290836   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:45.290776   12287 retry.go:31] will retry after 253.823507ms: waiting for machine to come up
	I0916 10:21:45.546308   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:45.546737   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:45.546757   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:45.546713   12287 retry.go:31] will retry after 316.98215ms: waiting for machine to come up
	I0916 10:21:45.865275   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:45.865712   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:45.865742   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:45.865673   12287 retry.go:31] will retry after 438.875906ms: waiting for machine to come up
	I0916 10:21:46.306361   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:46.306829   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:46.306854   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:46.306787   12287 retry.go:31] will retry after 378.922529ms: waiting for machine to come up
	I0916 10:21:46.687272   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:46.687683   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:46.687718   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:46.687648   12287 retry.go:31] will retry after 695.664658ms: waiting for machine to come up
	I0916 10:21:47.384623   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:47.385017   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:47.385044   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:47.384985   12287 retry.go:31] will retry after 669.1436ms: waiting for machine to come up
	I0916 10:21:48.056603   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:48.057159   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:48.057183   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:48.057099   12287 retry.go:31] will retry after 739.217064ms: waiting for machine to come up
	I0916 10:21:48.798348   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:48.798788   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:48.798824   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:48.798748   12287 retry.go:31] will retry after 963.828739ms: waiting for machine to come up
	I0916 10:21:49.763677   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:49.764095   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:49.764120   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:49.764043   12287 retry.go:31] will retry after 1.625531991s: waiting for machine to come up
	I0916 10:21:51.391980   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:51.392322   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:51.392343   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:51.392285   12287 retry.go:31] will retry after 1.960554167s: waiting for machine to come up
	I0916 10:21:53.354469   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:53.354989   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:53.355016   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:53.354937   12287 retry.go:31] will retry after 2.035806393s: waiting for machine to come up
	I0916 10:21:55.393065   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:55.393432   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:55.393451   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:55.393400   12287 retry.go:31] will retry after 3.028756428s: waiting for machine to come up
	I0916 10:21:58.424174   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:58.424544   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:58.424577   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:58.424517   12287 retry.go:31] will retry after 3.769682763s: waiting for machine to come up
	I0916 10:22:02.198084   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:02.198470   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:22:02.198492   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:22:02.198430   12287 retry.go:31] will retry after 5.547519077s: waiting for machine to come up
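Editor's note: the retry.go lines above show a bounded wait for the guest's DHCP lease, with delays that grow between attempts (253ms, 316ms, 438ms, ... up to several seconds). A minimal sketch of that pattern follows; lookupIP, waitForIP, the jitter, and the growth factor are assumptions for illustration rather than minikube's retry implementation.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a hypothetical stand-in for querying the libvirt network's DHCP
// leases; it fails until the guest has an address.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP polls lookupIP with an increasing, jittered delay until either an
// address appears or the deadline passes.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
	return "", fmt.Errorf("timed out after %s waiting for an IP", timeout)
}

func main() {
	if ip, err := waitForIP(2 * time.Second); err != nil {
		fmt.Println("error:", err)
	} else {
		fmt.Println("found IP:", ip)
	}
}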
	I0916 10:22:07.750830   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:07.751191   12265 main.go:141] libmachine: (addons-001438) Found IP for machine: 192.168.39.72
	I0916 10:22:07.751209   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has current primary IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:07.751215   12265 main.go:141] libmachine: (addons-001438) Reserving static IP address...
	I0916 10:22:07.751548   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find host DHCP lease matching {name: "addons-001438", mac: "52:54:00:9c:55:19", ip: "192.168.39.72"} in network mk-addons-001438
	I0916 10:22:07.821469   12265 main.go:141] libmachine: (addons-001438) DBG | Getting to WaitForSSH function...
	I0916 10:22:07.821506   12265 main.go:141] libmachine: (addons-001438) Reserved static IP address: 192.168.39.72
	I0916 10:22:07.821523   12265 main.go:141] libmachine: (addons-001438) Waiting for SSH to be available...
	I0916 10:22:07.823797   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:07.824029   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438
	I0916 10:22:07.824057   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find defined IP address of network mk-addons-001438 interface with MAC address 52:54:00:9c:55:19
	I0916 10:22:07.824199   12265 main.go:141] libmachine: (addons-001438) DBG | Using SSH client type: external
	I0916 10:22:07.824226   12265 main.go:141] libmachine: (addons-001438) DBG | Using SSH private key: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa (-rw-------)
	I0916 10:22:07.824261   12265 main.go:141] libmachine: (addons-001438) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0916 10:22:07.824273   12265 main.go:141] libmachine: (addons-001438) DBG | About to run SSH command:
	I0916 10:22:07.824297   12265 main.go:141] libmachine: (addons-001438) DBG | exit 0
	I0916 10:22:07.835394   12265 main.go:141] libmachine: (addons-001438) DBG | SSH cmd err, output: exit status 255: 
	I0916 10:22:07.835415   12265 main.go:141] libmachine: (addons-001438) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0916 10:22:07.835421   12265 main.go:141] libmachine: (addons-001438) DBG | command : exit 0
	I0916 10:22:07.835428   12265 main.go:141] libmachine: (addons-001438) DBG | err     : exit status 255
	I0916 10:22:07.835435   12265 main.go:141] libmachine: (addons-001438) DBG | output  : 
	I0916 10:22:10.838181   12265 main.go:141] libmachine: (addons-001438) DBG | Getting to WaitForSSH function...
	I0916 10:22:10.840410   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:10.840805   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:10.840830   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:10.840953   12265 main.go:141] libmachine: (addons-001438) DBG | Using SSH client type: external
	I0916 10:22:10.840980   12265 main.go:141] libmachine: (addons-001438) DBG | Using SSH private key: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa (-rw-------)
	I0916 10:22:10.841012   12265 main.go:141] libmachine: (addons-001438) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.72 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0916 10:22:10.841026   12265 main.go:141] libmachine: (addons-001438) DBG | About to run SSH command:
	I0916 10:22:10.841039   12265 main.go:141] libmachine: (addons-001438) DBG | exit 0
	I0916 10:22:10.969218   12265 main.go:141] libmachine: (addons-001438) DBG | SSH cmd err, output: <nil>: 
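Editor's note: WaitForSSH above shells out to the system ssh client and runs `exit 0` until it returns status 0; the first attempt fails with status 255 because no lease was visible yet. A hedged Go sketch of that probe follows; sshReady and the placeholder key path are assumptions, while the ssh flags mirror the log.

package main

import (
	"fmt"
	"os/exec"
)

// sshReady runs `ssh ... docker@<ip> exit 0` with the generated key and
// treats a zero exit status as "SSH available".
func sshReady(ip, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + ip,
		"exit", "0",
	}
	out, err := exec.Command("/usr/bin/ssh", args...).CombinedOutput()
	if err != nil {
		// Exit status 255 (as in the first attempt above) means the ssh
		// connection itself failed; the caller retries in that case.
		return fmt.Errorf("ssh not ready: %v, output: %s", err, out)
	}
	return nil
}

func main() {
	if err := sshReady("192.168.39.72", "/path/to/id_rsa"); err != nil { // placeholder key path
		fmt.Println(err)
	}
}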
	I0916 10:22:10.969498   12265 main.go:141] libmachine: (addons-001438) KVM machine creation complete!
	I0916 10:22:10.969791   12265 main.go:141] libmachine: (addons-001438) Calling .GetConfigRaw
	I0916 10:22:10.970351   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:10.970568   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:10.970704   12265 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0916 10:22:10.970716   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:10.971844   12265 main.go:141] libmachine: Detecting operating system of created instance...
	I0916 10:22:10.971857   12265 main.go:141] libmachine: Waiting for SSH to be available...
	I0916 10:22:10.971863   12265 main.go:141] libmachine: Getting to WaitForSSH function...
	I0916 10:22:10.971871   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:10.973963   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:10.974287   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:10.974322   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:10.974443   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:10.974600   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:10.974766   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:10.974897   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:10.975056   12265 main.go:141] libmachine: Using SSH client type: native
	I0916 10:22:10.975258   12265 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.72 22 <nil> <nil>}
	I0916 10:22:10.975270   12265 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0916 10:22:11.084303   12265 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 10:22:11.084322   12265 main.go:141] libmachine: Detecting the provisioner...
	I0916 10:22:11.084329   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:11.086985   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.087399   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:11.087449   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.087637   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:11.087805   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:11.087957   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:11.088052   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:11.088212   12265 main.go:141] libmachine: Using SSH client type: native
	I0916 10:22:11.088404   12265 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.72 22 <nil> <nil>}
	I0916 10:22:11.088420   12265 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0916 10:22:11.197622   12265 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0916 10:22:11.197666   12265 main.go:141] libmachine: found compatible host: buildroot
	I0916 10:22:11.197674   12265 main.go:141] libmachine: Provisioning with buildroot...
	I0916 10:22:11.197683   12265 main.go:141] libmachine: (addons-001438) Calling .GetMachineName
	I0916 10:22:11.197922   12265 buildroot.go:166] provisioning hostname "addons-001438"
	I0916 10:22:11.197936   12265 main.go:141] libmachine: (addons-001438) Calling .GetMachineName
	I0916 10:22:11.198131   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:11.200614   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.200955   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:11.200988   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.201100   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:11.201269   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:11.201396   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:11.201536   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:11.201681   12265 main.go:141] libmachine: Using SSH client type: native
	I0916 10:22:11.201878   12265 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.72 22 <nil> <nil>}
	I0916 10:22:11.201891   12265 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-001438 && echo "addons-001438" | sudo tee /etc/hostname
	I0916 10:22:11.329393   12265 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-001438
	
	I0916 10:22:11.329423   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:11.332085   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.332370   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:11.332397   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.332557   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:11.332746   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:11.332868   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:11.332999   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:11.333118   12265 main.go:141] libmachine: Using SSH client type: native
	I0916 10:22:11.333336   12265 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.72 22 <nil> <nil>}
	I0916 10:22:11.333353   12265 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-001438' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-001438/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-001438' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:22:11.454462   12265 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 10:22:11.454486   12265 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3851/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3851/.minikube}
	I0916 10:22:11.454539   12265 buildroot.go:174] setting up certificates
	I0916 10:22:11.454553   12265 provision.go:84] configureAuth start
	I0916 10:22:11.454562   12265 main.go:141] libmachine: (addons-001438) Calling .GetMachineName
	I0916 10:22:11.454823   12265 main.go:141] libmachine: (addons-001438) Calling .GetIP
	I0916 10:22:11.457458   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.457872   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:11.457902   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.458065   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:11.460166   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.460456   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:11.460484   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.460579   12265 provision.go:143] copyHostCerts
	I0916 10:22:11.460674   12265 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem (1123 bytes)
	I0916 10:22:11.460835   12265 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem (1679 bytes)
	I0916 10:22:11.460925   12265 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem (1082 bytes)
	I0916 10:22:11.460997   12265 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem org=jenkins.addons-001438 san=[127.0.0.1 192.168.39.72 addons-001438 localhost minikube]
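Editor's note: configureAuth above generates a server certificate whose SANs cover 127.0.0.1, the node IP, the hostname, localhost and minikube. The sketch below shows roughly how such a certificate can be produced with Go's crypto/x509; it is self-signed for brevity, whereas the real flow signs with the ca.pem/ca-key.pem shown earlier, and the names and validity period are illustrative.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Generate a key and a certificate carrying the same SANs the log lists.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-001438"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"addons-001438", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.72")},
	}
	// Self-signed here; minikube instead signs the server cert with its CA.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}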
	I0916 10:22:11.639072   12265 provision.go:177] copyRemoteCerts
	I0916 10:22:11.639141   12265 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:22:11.639169   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:11.641767   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.642050   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:11.642076   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.642240   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:11.642415   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:11.642519   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:11.642635   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:11.727509   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 10:22:11.752436   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 10:22:11.776436   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 10:22:11.799597   12265 provision.go:87] duration metric: took 345.032702ms to configureAuth
	I0916 10:22:11.799626   12265 buildroot.go:189] setting minikube options for container-runtime
	I0916 10:22:11.799813   12265 config.go:182] Loaded profile config "addons-001438": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:22:11.799904   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:11.802386   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.802675   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:11.802700   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.802854   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:11.803047   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:11.803187   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:11.803323   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:11.803504   12265 main.go:141] libmachine: Using SSH client type: native
	I0916 10:22:11.803689   12265 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.72 22 <nil> <nil>}
	I0916 10:22:11.803704   12265 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 10:22:12.030350   12265 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 10:22:12.030374   12265 main.go:141] libmachine: Checking connection to Docker...
	I0916 10:22:12.030382   12265 main.go:141] libmachine: (addons-001438) Calling .GetURL
	I0916 10:22:12.031607   12265 main.go:141] libmachine: (addons-001438) DBG | Using libvirt version 6000000
	I0916 10:22:12.034008   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.034296   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:12.034325   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.034451   12265 main.go:141] libmachine: Docker is up and running!
	I0916 10:22:12.034463   12265 main.go:141] libmachine: Reticulating splines...
	I0916 10:22:12.034470   12265 client.go:171] duration metric: took 28.959474569s to LocalClient.Create
	I0916 10:22:12.034491   12265 start.go:167] duration metric: took 28.959547297s to libmachine.API.Create "addons-001438"
	I0916 10:22:12.034500   12265 start.go:293] postStartSetup for "addons-001438" (driver="kvm2")
	I0916 10:22:12.034509   12265 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:22:12.034535   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:12.034731   12265 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:22:12.034762   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:12.036747   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.037041   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:12.037068   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.037200   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:12.037344   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:12.037486   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:12.037623   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:12.123403   12265 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:22:12.127815   12265 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 10:22:12.127838   12265 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3851/.minikube/addons for local assets ...
	I0916 10:22:12.127904   12265 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3851/.minikube/files for local assets ...
	I0916 10:22:12.127926   12265 start.go:296] duration metric: took 93.420957ms for postStartSetup
	I0916 10:22:12.127955   12265 main.go:141] libmachine: (addons-001438) Calling .GetConfigRaw
	I0916 10:22:12.128519   12265 main.go:141] libmachine: (addons-001438) Calling .GetIP
	I0916 10:22:12.131232   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.131510   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:12.131547   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.131776   12265 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/config.json ...
	I0916 10:22:12.131949   12265 start.go:128] duration metric: took 29.075237515s to createHost
	I0916 10:22:12.131975   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:12.133967   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.134281   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:12.134305   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.134418   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:12.134606   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:12.134753   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:12.134877   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:12.135036   12265 main.go:141] libmachine: Using SSH client type: native
	I0916 10:22:12.135185   12265 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.72 22 <nil> <nil>}
	I0916 10:22:12.135202   12265 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 10:22:12.245734   12265 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726482132.226578519
	
	I0916 10:22:12.245757   12265 fix.go:216] guest clock: 1726482132.226578519
	I0916 10:22:12.245764   12265 fix.go:229] Guest: 2024-09-16 10:22:12.226578519 +0000 UTC Remote: 2024-09-16 10:22:12.131960304 +0000 UTC m=+29.174301435 (delta=94.618215ms)
	I0916 10:22:12.245784   12265 fix.go:200] guest clock delta is within tolerance: 94.618215ms
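Editor's note: the fix.go lines above read the guest clock with `date +%s.%N` and accept the machine once the delta to the host clock is small (here 94.6ms). Below is a small, editor-added sketch of parsing that output and computing the delta; parseGuestClock and the 1-second tolerance are assumptions.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns "seconds.nanoseconds" output from `date +%s.%N`
// into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Pad or trim the fractional part to exactly nine digits.
		frac := (parts[1] + "000000000")[:9]
		nsec, err = strconv.ParseInt(frac, 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1726482132.226578519\n")
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock delta: %s (within 1s tolerance: %v)\n", delta, delta < time.Second)
}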
	I0916 10:22:12.245790   12265 start.go:83] releasing machines lock for "addons-001438", held for 29.189143417s
	I0916 10:22:12.245809   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:12.246014   12265 main.go:141] libmachine: (addons-001438) Calling .GetIP
	I0916 10:22:12.248419   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.248678   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:12.248704   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.248832   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:12.249314   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:12.249485   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:12.249586   12265 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:22:12.249653   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:12.249707   12265 ssh_runner.go:195] Run: cat /version.json
	I0916 10:22:12.249728   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:12.252249   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.252497   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.252634   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:12.252657   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.252757   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:12.252904   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:12.252922   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:12.252925   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.253038   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:12.253093   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:12.253241   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:12.253258   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:12.253386   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:12.253515   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:12.362639   12265 ssh_runner.go:195] Run: systemctl --version
	I0916 10:22:12.368512   12265 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 10:22:12.527002   12265 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0916 10:22:12.532733   12265 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 10:22:12.532791   12265 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:22:12.548743   12265 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0916 10:22:12.548773   12265 start.go:495] detecting cgroup driver to use...
	I0916 10:22:12.548843   12265 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 10:22:12.564219   12265 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 10:22:12.578224   12265 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:22:12.578276   12265 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:22:12.591434   12265 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:22:12.604674   12265 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:22:12.712713   12265 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:22:12.868881   12265 docker.go:233] disabling docker service ...
	I0916 10:22:12.868945   12265 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:22:12.883262   12265 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:22:12.896034   12265 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:22:13.009183   12265 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:22:13.123591   12265 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
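Editor's note: the service shuffle above stops, disables and masks the cri-docker and docker units so that CRI-O is the only runtime on the guest, tolerating failures for units that are absent. A rough, editor-added Go sketch of that sequence follows; the run helper and its error handling are assumptions.

package main

import (
	"fmt"
	"os/exec"
)

// run executes a systemctl action via sudo and tolerates failure, since a
// unit may simply not exist on the host.
func run(args ...string) {
	if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
		fmt.Printf("ignored: sudo %v: %v (%s)\n", args, err, out)
	}
}

func main() {
	// Mirror the order in the log: cri-docker first, then docker itself.
	run("systemctl", "stop", "-f", "cri-docker.socket")
	run("systemctl", "stop", "-f", "cri-docker.service")
	run("systemctl", "disable", "cri-docker.socket")
	run("systemctl", "mask", "cri-docker.service")
	run("systemctl", "stop", "-f", "docker.socket")
	run("systemctl", "stop", "-f", "docker.service")
	run("systemctl", "disable", "docker.socket")
	run("systemctl", "mask", "docker.service")
}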
	I0916 10:22:13.137411   12265 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:22:13.155768   12265 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 10:22:13.155832   12265 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:22:13.166378   12265 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 10:22:13.166436   12265 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:22:13.177199   12265 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:22:13.187753   12265 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:22:13.198460   12265 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:22:13.209356   12265 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:22:13.220222   12265 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:22:13.237721   12265 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
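Editor's note: the sed invocations above patch /etc/crio/crio.conf.d/02-crio.conf to pin the pause image and switch the cgroup manager to cgroupfs. Below is an illustrative Go equivalent of the first two edits; rewriteCrioConf and its regular expressions are assumptions that only approximate the sed behaviour.

package main

import (
	"fmt"
	"os"
	"regexp"
)

// rewriteCrioConf rewrites the pause_image and cgroup_manager settings in a
// CRI-O drop-in, the same way the sed commands in the log do.
func rewriteCrioConf(path, pauseImage, cgroupManager string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	// Equivalent to: sed -i 's|^.*pause_image = .*$|pause_image = "..."|'
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", pauseImage)))
	// Equivalent to: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "..."|'
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(fmt.Sprintf("cgroup_manager = %q", cgroupManager)))
	return os.WriteFile(path, data, 0o644)
}

func main() {
	err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
		"registry.k8s.io/pause:3.10", "cgroupfs")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}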
	I0916 10:22:13.247992   12265 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:22:13.257214   12265 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0916 10:22:13.257274   12265 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0916 10:22:13.269843   12265 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:22:13.279361   12265 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:22:13.392424   12265 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 10:22:13.489919   12265 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 10:22:13.490002   12265 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 10:22:13.495269   12265 start.go:563] Will wait 60s for crictl version
	I0916 10:22:13.495342   12265 ssh_runner.go:195] Run: which crictl
	I0916 10:22:13.499375   12265 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:22:13.543037   12265 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
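Editor's note: both 60-second waits above (for the CRI socket to appear and for `crictl version` to answer) follow the same poll-until-deadline shape. A compact, editor-added sketch of that shape is below; waitFor and the 500ms polling interval are assumptions.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// waitFor repeatedly runs check until it succeeds or the timeout elapses.
func waitFor(timeout time.Duration, check func() error) error {
	deadline := time.Now().Add(timeout)
	var err error
	for time.Now().Before(deadline) {
		if err = check(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s: %v", timeout, err)
}

func main() {
	sock := "/var/run/crio/crio.sock"
	// Wait for the socket path, then for the CRI runtime to answer crictl.
	if err := waitFor(60*time.Second, func() error {
		_, err := os.Stat(sock)
		return err
	}); err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	if err := waitFor(60*time.Second, func() error {
		return exec.Command("sudo", "/usr/bin/crictl", "version").Run()
	}); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}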
	I0916 10:22:13.543161   12265 ssh_runner.go:195] Run: crio --version
	I0916 10:22:13.571422   12265 ssh_runner.go:195] Run: crio --version
	I0916 10:22:13.600892   12265 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0916 10:22:13.602164   12265 main.go:141] libmachine: (addons-001438) Calling .GetIP
	I0916 10:22:13.604725   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:13.605053   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:13.605090   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:13.605239   12265 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0916 10:22:13.609153   12265 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:22:13.621451   12265 kubeadm.go:883] updating cluster {Name:addons-001438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-001438 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.72 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 10:22:13.621560   12265 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:22:13.621616   12265 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:22:13.653616   12265 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0916 10:22:13.653695   12265 ssh_runner.go:195] Run: which lz4
	I0916 10:22:13.657722   12265 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0916 10:22:13.661843   12265 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0916 10:22:13.661873   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0916 10:22:14.968986   12265 crio.go:462] duration metric: took 1.311298771s to copy over tarball
	I0916 10:22:14.969053   12265 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0916 10:22:17.073836   12265 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.104757919s)
	I0916 10:22:17.073872   12265 crio.go:469] duration metric: took 2.104858266s to extract the tarball
	I0916 10:22:17.073881   12265 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0916 10:22:17.110316   12265 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:22:17.150207   12265 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 10:22:17.150233   12265 cache_images.go:84] Images are preloaded, skipping loading
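The preload step above copies the cached tarball to /preloaded.tar.lz4, extracts it under /var, removes it, and re-lists images so the runtime check passes. A hedged spot-check of the same result from the host, reusing the crictl invocation the log itself runs (the grep filter is only illustrative):

    # Ask CRI-O inside the VM whether the preloaded kube-apiserver image is now visible.
    minikube ssh -p addons-001438 -- sudo crictl images --output json \
        | grep -o '"registry.k8s.io/kube-apiserver[^"]*"'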
	I0916 10:22:17.150241   12265 kubeadm.go:934] updating node { 192.168.39.72 8443 v1.31.1 crio true true} ...
	I0916 10:22:17.150343   12265 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-001438 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.72
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-001438 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 10:22:17.150424   12265 ssh_runner.go:195] Run: crio config
	I0916 10:22:17.195725   12265 cni.go:84] Creating CNI manager for ""
	I0916 10:22:17.195746   12265 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0916 10:22:17.195756   12265 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 10:22:17.195774   12265 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.72 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-001438 NodeName:addons-001438 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.72"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.72 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 10:22:17.195915   12265 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.72
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-001438"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.72
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.72"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
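The generated config above stacks InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one file; kubeadm later warns (the W0916 lines further down) that the kubeadm.k8s.io/v1beta3 documents are deprecated. A sketch of the migration command kubeadm itself suggests, assuming the file has been copied off the node as old.yaml:

    # Rewrite the deprecated v1beta3 kubeadm documents using a newer API version;
    # the kubelet and kube-proxy sections are carried over unchanged.
    kubeadm config migrate --old-config old.yaml --new-config new.yaml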
	I0916 10:22:17.195969   12265 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:22:17.206079   12265 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:22:17.206139   12265 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 10:22:17.215719   12265 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0916 10:22:17.232125   12265 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:22:17.248126   12265 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0916 10:22:17.264165   12265 ssh_runner.go:195] Run: grep 192.168.39.72	control-plane.minikube.internal$ /etc/hosts
	I0916 10:22:17.267727   12265 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.72	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:22:17.279787   12265 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:22:17.393283   12265 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:22:17.410756   12265 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438 for IP: 192.168.39.72
	I0916 10:22:17.410774   12265 certs.go:194] generating shared ca certs ...
	I0916 10:22:17.410794   12265 certs.go:226] acquiring lock for ca certs: {Name:mkc5eb18b90c7d501da61b378999fd65b785fd5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:17.410949   12265 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key
	I0916 10:22:17.480758   12265 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt ...
	I0916 10:22:17.480787   12265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt: {Name:mkc291c3a986acc7f4de9183c4ef6d249d8de5a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:17.480965   12265 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key ...
	I0916 10:22:17.480980   12265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key: {Name:mk56bc8b146d891ba5f741ad0bd339fffdb85989 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:17.481075   12265 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key
	I0916 10:22:17.673219   12265 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.crt ...
	I0916 10:22:17.673250   12265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.crt: {Name:mk8d6878492eab0d99f630fc495324e3b843781a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:17.673403   12265 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key ...
	I0916 10:22:17.673414   12265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key: {Name:mk082b50320d253da8f01ad2454b69492e000fb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:17.673482   12265 certs.go:256] generating profile certs ...
	I0916 10:22:17.673531   12265 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/client.key
	I0916 10:22:17.673544   12265 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/client.crt with IP's: []
	I0916 10:22:17.921779   12265 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/client.crt ...
	I0916 10:22:17.921811   12265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/client.crt: {Name:mk9172b9e8f20da0dd399e583d4f0391784c25bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:17.921970   12265 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/client.key ...
	I0916 10:22:17.921981   12265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/client.key: {Name:mk65d84f1710f9ab616402324cb2a91f749aa3d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:17.922048   12265 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/apiserver.key.b670da03
	I0916 10:22:17.922066   12265 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/apiserver.crt.b670da03 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.72]
	I0916 10:22:17.984449   12265 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/apiserver.crt.b670da03 ...
	I0916 10:22:17.984473   12265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/apiserver.crt.b670da03: {Name:mk697c0092db030ad4df50333f6d1db035d298e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:17.984627   12265 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/apiserver.key.b670da03 ...
	I0916 10:22:17.984638   12265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/apiserver.key.b670da03: {Name:mkf74035add612ea1883fde9b662a919a8d7c5c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:17.984705   12265 certs.go:381] copying /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/apiserver.crt.b670da03 -> /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/apiserver.crt
	I0916 10:22:17.984774   12265 certs.go:385] copying /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/apiserver.key.b670da03 -> /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/apiserver.key
	I0916 10:22:17.984818   12265 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/proxy-client.key
	I0916 10:22:17.984834   12265 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/proxy-client.crt with IP's: []
	I0916 10:22:18.105094   12265 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/proxy-client.crt ...
	I0916 10:22:18.105122   12265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/proxy-client.crt: {Name:mk12379583893d02aa599284bf7c2e673e4a585f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:18.105290   12265 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/proxy-client.key ...
	I0916 10:22:18.105300   12265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/proxy-client.key: {Name:mkddc10c89aa36609a41c940a83606fa36ac69df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:18.105453   12265 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:22:18.105484   12265 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem (1082 bytes)
	I0916 10:22:18.105509   12265 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:22:18.105531   12265 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem (1679 bytes)
	I0916 10:22:18.106125   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:22:18.132592   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:22:18.173674   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:22:18.200455   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:22:18.223366   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0916 10:22:18.246242   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 10:22:18.269411   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:22:18.292157   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 10:22:18.314508   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:22:18.337365   12265 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 10:22:18.353286   12265 ssh_runner.go:195] Run: openssl version
	I0916 10:22:18.358942   12265 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:22:18.369103   12265 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:22:18.373299   12265 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:22:18.373346   12265 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:22:18.378948   12265 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
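The cert install above links minikubeCA.pem into /etc/ssl/certs and then creates /etc/ssl/certs/b5213941.0; the 8-hex-digit name is OpenSSL's subject-name hash of the CA, which is what the openssl x509 -hash run computes. A small sketch of doing the same by hand, using the paths from this log (illustrative, not minikube's code):

    # Compute the OpenSSL subject-name hash of the CA certificate...
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    # ...and expose it in the hash-named layout that TLS clients scan under /etc/ssl/certs.
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"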
	I0916 10:22:18.389436   12265 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:22:18.393342   12265 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 10:22:18.393387   12265 kubeadm.go:392] StartCluster: {Name:addons-001438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-001438 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.72 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:22:18.393452   12265 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 10:22:18.393509   12265 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 10:22:18.429056   12265 cri.go:89] found id: ""
	I0916 10:22:18.429118   12265 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 10:22:18.439123   12265 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 10:22:18.448797   12265 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 10:22:18.458281   12265 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 10:22:18.458303   12265 kubeadm.go:157] found existing configuration files:
	
	I0916 10:22:18.458357   12265 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 10:22:18.467304   12265 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 10:22:18.467373   12265 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 10:22:18.476476   12265 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 10:22:18.485402   12265 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 10:22:18.485467   12265 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 10:22:18.494643   12265 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 10:22:18.503578   12265 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 10:22:18.503657   12265 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 10:22:18.512633   12265 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 10:22:18.521391   12265 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 10:22:18.521454   12265 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
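The four grep/rm pairs above implement one idiom: keep an existing kubeconfig only if it already points at https://control-plane.minikube.internal:8443, and otherwise remove it so kubeadm regenerates it. A condensed sketch of that loop over the same four files (not the actual Go implementation):

    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        # A missing file, or one pointing at a different endpoint, is treated as stale.
        sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done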
	I0916 10:22:18.530381   12265 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0916 10:22:18.584992   12265 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 10:22:18.585058   12265 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 10:22:18.700906   12265 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 10:22:18.701050   12265 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 10:22:18.701195   12265 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 10:22:18.712665   12265 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 10:22:18.808124   12265 out.go:235]   - Generating certificates and keys ...
	I0916 10:22:18.808238   12265 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 10:22:18.808308   12265 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 10:22:18.808390   12265 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 10:22:18.884612   12265 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 10:22:19.103481   12265 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 10:22:19.230175   12265 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 10:22:19.422850   12265 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 10:22:19.423077   12265 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-001438 localhost] and IPs [192.168.39.72 127.0.0.1 ::1]
	I0916 10:22:19.499430   12265 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 10:22:19.499746   12265 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-001438 localhost] and IPs [192.168.39.72 127.0.0.1 ::1]
	I0916 10:22:19.689533   12265 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 10:22:19.770560   12265 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 10:22:20.159783   12265 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 10:22:20.160053   12265 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 10:22:20.575897   12265 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 10:22:20.728566   12265 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 10:22:21.092038   12265 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 10:22:21.382957   12265 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 10:22:21.446452   12265 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 10:22:21.447068   12265 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 10:22:21.451577   12265 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 10:22:21.454426   12265 out.go:235]   - Booting up control plane ...
	I0916 10:22:21.454540   12265 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 10:22:21.454614   12265 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 10:22:21.454722   12265 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 10:22:21.468531   12265 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 10:22:21.475700   12265 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 10:22:21.475767   12265 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 10:22:21.606009   12265 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 10:22:21.606143   12265 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 10:22:22.124369   12265 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 517.881759ms
	I0916 10:22:22.124492   12265 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 10:22:27.123389   12265 kubeadm.go:310] [api-check] The API server is healthy after 5.002163965s
	I0916 10:22:27.138636   12265 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 10:22:27.154171   12265 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 10:22:27.185604   12265 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 10:22:27.185839   12265 kubeadm.go:310] [mark-control-plane] Marking the node addons-001438 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 10:22:27.198602   12265 kubeadm.go:310] [bootstrap-token] Using token: os1o8m.q16efzg2rjnkpln8
	I0916 10:22:27.199966   12265 out.go:235]   - Configuring RBAC rules ...
	I0916 10:22:27.200085   12265 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 10:22:27.209733   12265 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 10:22:27.218630   12265 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 10:22:27.222473   12265 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 10:22:27.226151   12265 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 10:22:27.230516   12265 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 10:22:27.529586   12265 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 10:22:27.967178   12265 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 10:22:28.529936   12265 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 10:22:28.529960   12265 kubeadm.go:310] 
	I0916 10:22:28.530028   12265 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 10:22:28.530044   12265 kubeadm.go:310] 
	I0916 10:22:28.530137   12265 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 10:22:28.530173   12265 kubeadm.go:310] 
	I0916 10:22:28.530227   12265 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 10:22:28.530307   12265 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 10:22:28.530390   12265 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 10:22:28.530397   12265 kubeadm.go:310] 
	I0916 10:22:28.530463   12265 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 10:22:28.530472   12265 kubeadm.go:310] 
	I0916 10:22:28.530525   12265 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 10:22:28.530537   12265 kubeadm.go:310] 
	I0916 10:22:28.530609   12265 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 10:22:28.530728   12265 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 10:22:28.530832   12265 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 10:22:28.530868   12265 kubeadm.go:310] 
	I0916 10:22:28.530981   12265 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 10:22:28.531080   12265 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 10:22:28.531091   12265 kubeadm.go:310] 
	I0916 10:22:28.531215   12265 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token os1o8m.q16efzg2rjnkpln8 \
	I0916 10:22:28.531358   12265 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:18e2e9ba1c06caae6264fad234e64aee68efc10986a3ab0ed9e448768962daf7 \
	I0916 10:22:28.531389   12265 kubeadm.go:310] 	--control-plane 
	I0916 10:22:28.531397   12265 kubeadm.go:310] 
	I0916 10:22:28.531518   12265 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 10:22:28.531528   12265 kubeadm.go:310] 
	I0916 10:22:28.531639   12265 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token os1o8m.q16efzg2rjnkpln8 \
	I0916 10:22:28.531783   12265 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:18e2e9ba1c06caae6264fad234e64aee68efc10986a3ab0ed9e448768962daf7 
	I0916 10:22:28.532220   12265 kubeadm.go:310] W0916 10:22:18.568727     816 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:22:28.532498   12265 kubeadm.go:310] W0916 10:22:18.569597     816 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:22:28.532623   12265 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
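The join commands printed above embed a --discovery-token-ca-cert-hash; if that output is lost, the hash can be recomputed from the cluster CA using the upstream-documented openssl pipeline. Shown here against minikube's certificatesDir from this run (/var/lib/minikube/certs, per the ClusterConfiguration above); the path is an assumption about where the CA lives on this node:

    # Recompute the discovery-token CA cert hash: sha256 of the CA public key in DER form.
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
        | openssl rsa -pubin -outform der 2>/dev/null \
        | openssl dgst -sha256 -hex | sed 's/^.* //'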
	I0916 10:22:28.532635   12265 cni.go:84] Creating CNI manager for ""
	I0916 10:22:28.532642   12265 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0916 10:22:28.534239   12265 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0916 10:22:28.535682   12265 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0916 10:22:28.547306   12265 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0916 10:22:28.567029   12265 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 10:22:28.567083   12265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:22:28.567116   12265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-001438 minikube.k8s.io/updated_at=2024_09_16T10_22_28_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=addons-001438 minikube.k8s.io/primary=true
	I0916 10:22:28.599898   12265 ops.go:34] apiserver oom_adj: -16
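The two kubectl invocations above grant cluster-admin to kube-system's default service account via the minikube-rbac ClusterRoleBinding and stamp the node with minikube.k8s.io labels. A hedged way to verify both from inside the VM (e.g. via minikube ssh -p addons-001438), reusing the same in-VM kubectl and kubeconfig paths the log uses:

    # Confirm the RBAC binding and the node labels applied above.
    sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
        get clusterrolebinding minikube-rbac -o wide
    sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
        get node addons-001438 --show-labels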
	I0916 10:22:28.718193   12265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:22:29.219097   12265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:22:29.718331   12265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:22:30.219213   12265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:22:30.718728   12265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:22:31.218997   12265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:22:31.719218   12265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:22:32.218543   12265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:22:32.335651   12265 kubeadm.go:1113] duration metric: took 3.768632423s to wait for elevateKubeSystemPrivileges
	I0916 10:22:32.335685   12265 kubeadm.go:394] duration metric: took 13.942299744s to StartCluster
	I0916 10:22:32.335709   12265 settings.go:142] acquiring lock: {Name:mka3093e7066e81538e0a2685a28fdafde8d951b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:32.335851   12265 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3851/kubeconfig
	I0916 10:22:32.336274   12265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/kubeconfig: {Name:mkd6460249badead51f07eabf604da45528f6a24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:32.336491   12265 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 10:22:32.336522   12265 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.72 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 10:22:32.336653   12265 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
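The toEnable map above lists every addon minikube will reconcile for this profile. The same set can be inspected or toggled from the CLI; a brief example using standard minikube subcommands and the profile name from this run:

    # Show addon status for the profile, then enable one of the addons named in the map.
    minikube -p addons-001438 addons list
    minikube -p addons-001438 addons enable metrics-server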
	I0916 10:22:32.336724   12265 config.go:182] Loaded profile config "addons-001438": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:22:32.336769   12265 addons.go:69] Setting default-storageclass=true in profile "addons-001438"
	I0916 10:22:32.336779   12265 addons.go:69] Setting ingress-dns=true in profile "addons-001438"
	I0916 10:22:32.336787   12265 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-001438"
	I0916 10:22:32.336780   12265 addons.go:69] Setting ingress=true in profile "addons-001438"
	I0916 10:22:32.336793   12265 addons.go:69] Setting cloud-spanner=true in profile "addons-001438"
	I0916 10:22:32.336813   12265 addons.go:69] Setting inspektor-gadget=true in profile "addons-001438"
	I0916 10:22:32.336820   12265 addons.go:69] Setting gcp-auth=true in profile "addons-001438"
	I0916 10:22:32.336832   12265 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-001438"
	I0916 10:22:32.336835   12265 addons.go:234] Setting addon cloud-spanner=true in "addons-001438"
	I0916 10:22:32.336828   12265 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-001438"
	I0916 10:22:32.336844   12265 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-001438"
	I0916 10:22:32.336825   12265 addons.go:234] Setting addon inspektor-gadget=true in "addons-001438"
	I0916 10:22:32.336853   12265 addons.go:69] Setting registry=true in profile "addons-001438"
	I0916 10:22:32.336867   12265 addons.go:234] Setting addon registry=true in "addons-001438"
	I0916 10:22:32.336883   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.336888   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.336896   12265 addons.go:69] Setting helm-tiller=true in profile "addons-001438"
	I0916 10:22:32.336908   12265 addons.go:234] Setting addon helm-tiller=true in "addons-001438"
	I0916 10:22:32.336937   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.336940   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.336844   12265 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-001438"
	I0916 10:22:32.337250   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.337262   12265 addons.go:69] Setting volcano=true in profile "addons-001438"
	I0916 10:22:32.337273   12265 addons.go:234] Setting addon volcano=true in "addons-001438"
	I0916 10:22:32.337281   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.337295   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.337315   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.337328   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.336808   12265 addons.go:234] Setting addon ingress=true in "addons-001438"
	I0916 10:22:32.337347   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.337348   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.337365   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.337367   12265 addons.go:69] Setting volumesnapshots=true in profile "addons-001438"
	I0916 10:22:32.337379   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.337381   12265 addons.go:234] Setting addon volumesnapshots=true in "addons-001438"
	I0916 10:22:32.337412   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.336888   12265 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-001438"
	I0916 10:22:32.337442   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.336769   12265 addons.go:69] Setting yakd=true in profile "addons-001438"
	I0916 10:22:32.337489   12265 addons.go:234] Setting addon yakd=true in "addons-001438"
	I0916 10:22:32.337633   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.337660   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.336835   12265 addons.go:69] Setting metrics-server=true in profile "addons-001438"
	I0916 10:22:32.337353   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.337714   12265 addons.go:234] Setting addon metrics-server=true in "addons-001438"
	I0916 10:22:32.337741   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.337700   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.337795   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.336844   12265 mustload.go:65] Loading cluster: addons-001438
	I0916 10:22:32.336824   12265 addons.go:69] Setting storage-provisioner=true in profile "addons-001438"
	I0916 10:22:32.337840   12265 addons.go:234] Setting addon storage-provisioner=true in "addons-001438"
	I0916 10:22:32.337328   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.337881   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.336804   12265 addons.go:234] Setting addon ingress-dns=true in "addons-001438"
	I0916 10:22:32.337251   12265 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-001438"
	I0916 10:22:32.337944   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.338072   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.338099   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.338127   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.338301   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.338331   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.338413   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.338421   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.338448   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.338455   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.338446   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.338765   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.338792   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.338818   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.338845   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.338995   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.339304   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.339363   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.342405   12265 out.go:177] * Verifying Kubernetes components...
	I0916 10:22:32.343665   12265 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:22:32.357106   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34833
	I0916 10:22:32.357444   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35947
	I0916 10:22:32.357655   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37677
	I0916 10:22:32.357797   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.357897   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.358211   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.358403   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.358419   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.358562   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.358574   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.358633   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37893
	I0916 10:22:32.358790   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.358949   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.358960   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.359007   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44299
	I0916 10:22:32.369699   12265 config.go:182] Loaded profile config "addons-001438": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:22:32.369748   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.369818   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.370020   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.370060   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.370069   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.370101   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.370194   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.370269   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.370379   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.370390   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.370789   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.370827   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.370908   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.370969   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.371094   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.371111   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.371475   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.371508   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.371573   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.371638   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.371663   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.371731   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.386697   12265 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-001438"
	I0916 10:22:32.386747   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.386763   12265 addons.go:234] Setting addon default-storageclass=true in "addons-001438"
	I0916 10:22:32.386810   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.387114   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.387173   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.387252   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.387291   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.408433   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37283
	I0916 10:22:32.409200   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.409836   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.409856   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.410249   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.410830   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.410872   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.411145   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42803
	I0916 10:22:32.411578   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.413298   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.413319   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.414168   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34147
	I0916 10:22:32.414190   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45747
	I0916 10:22:32.414292   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36809
	I0916 10:22:32.414570   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.414671   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.415178   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.415195   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.415681   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.416214   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.416252   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.416442   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42029
	I0916 10:22:32.416592   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.417197   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.417231   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.417415   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40175
	I0916 10:22:32.417454   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.417595   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.417608   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.417843   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.417917   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.418037   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.418050   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.418410   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.418443   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.418409   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.418501   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.419031   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.419065   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.419266   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.419281   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.419404   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.419414   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.419702   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.419847   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.420545   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.421091   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.421133   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.421574   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.421979   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40845
	I0916 10:22:32.422963   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.423382   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.423399   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.423697   12265 out.go:177]   - Using image docker.io/registry:2.8.3
	I0916 10:22:32.423813   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.424320   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.424354   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.425846   12265 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0916 10:22:32.425941   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45967
	I0916 10:22:32.426062   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42039
	I0916 10:22:32.426213   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40085
	I0916 10:22:32.426381   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.426757   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.426931   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.426942   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.426976   12265 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0916 10:22:32.426992   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0916 10:22:32.427011   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.427391   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.427470   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.427489   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.427946   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.428354   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.428385   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.428598   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.428889   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.428924   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.429188   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.429202   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.429517   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.431934   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33361
	I0916 10:22:32.431987   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.432541   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.432563   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.432751   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.432883   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.432998   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.433120   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:32.433712   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.435531   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.435730   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:32.435742   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:32.435888   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:32.435899   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:32.435907   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:32.435913   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:32.436070   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:32.436085   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	W0916 10:22:32.436166   12265 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0916 10:22:32.440699   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35285
	I0916 10:22:32.441072   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.441617   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.441644   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.441979   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.442497   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.442531   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.450769   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36009
	I0916 10:22:32.451259   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.451700   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.451718   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.452549   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.453092   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.453146   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.454430   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42107
	I0916 10:22:32.454743   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.455061   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.455149   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45965
	I0916 10:22:32.455842   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.455847   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.455860   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.455871   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.455922   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.456243   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.456542   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.456622   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.456639   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.456747   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.457901   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34395
	I0916 10:22:32.458037   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.458209   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.458254   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.458704   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.458721   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.459089   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.459296   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.459533   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.460121   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.460511   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.460545   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.460978   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41771
	I0916 10:22:32.461180   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.461244   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.461735   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.461753   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.461805   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.462195   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46479
	I0916 10:22:32.462331   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.462809   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.464034   12265 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0916 10:22:32.464150   12265 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0916 10:22:32.464278   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.464668   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.464696   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.465237   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.466010   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.465566   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42201
	I0916 10:22:32.466246   12265 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0916 10:22:32.466259   12265 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0916 10:22:32.466276   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.467014   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.467145   12265 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 10:22:32.467235   12265 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0916 10:22:32.467359   12265 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0916 10:22:32.467370   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0916 10:22:32.467385   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.467696   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.467711   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.468100   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.468152   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.468326   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.468710   12265 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0916 10:22:32.468725   12265 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0916 10:22:32.468742   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.468966   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39539
	I0916 10:22:32.469146   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.469463   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.469917   12265 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 10:22:32.469918   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.470000   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.470971   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43697
	I0916 10:22:32.471473   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.471695   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.472001   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.472015   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.472269   12265 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0916 10:22:32.472471   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.472523   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43687
	I0916 10:22:32.472664   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.472783   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.472993   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.473106   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.473134   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.473329   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.473377   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.473597   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.473743   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.473790   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.473851   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.474147   12265 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0916 10:22:32.474163   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0916 10:22:32.474178   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.474793   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.474941   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.474955   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.475234   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:32.475510   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.475619   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.475650   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.475665   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.475824   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.476100   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.476264   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.476604   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.476644   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.476828   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:32.476940   12265 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0916 10:22:32.477612   12265 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 10:22:32.478260   12265 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 10:22:32.478276   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0916 10:22:32.478291   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.478585   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.478604   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.478880   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.479035   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.479168   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.479299   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:32.479921   12265 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:22:32.479937   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 10:22:32.479951   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.480259   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.480742   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.481958   12265 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0916 10:22:32.482834   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43787
	I0916 10:22:32.482998   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.483118   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.483310   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.483473   12265 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0916 10:22:32.483494   12265 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0916 10:22:32.483512   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.483802   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.483828   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.483888   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.483903   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.483899   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.483930   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.484092   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.484159   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.484194   12265 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0916 10:22:32.484411   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.484581   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.484636   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.484681   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:32.484892   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.484958   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.485096   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.485218   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.485248   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.485262   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.485372   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:32.485494   12265 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0916 10:22:32.485505   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0916 10:22:32.485519   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.485781   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.486028   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.486181   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.486318   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:32.487186   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.487422   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.487675   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.487695   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.487742   12265 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 10:22:32.487752   12265 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 10:22:32.487764   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.487810   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.487995   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.488225   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.488378   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:32.489702   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.490168   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.490188   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.490394   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.490571   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.490713   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.490823   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:32.492068   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.492458   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.492479   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.492686   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.492906   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.492915   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39577
	I0916 10:22:32.493044   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.493239   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:32.493450   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.493933   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.493950   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.494562   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.494891   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.496932   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.498147   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46021
	I0916 10:22:32.498828   12265 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0916 10:22:32.499232   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.499608   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.499634   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.499936   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.500124   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.500215   12265 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0916 10:22:32.500241   12265 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0916 10:22:32.500262   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.501763   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.503323   12265 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0916 10:22:32.503738   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.504260   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.504287   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.504422   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.504578   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.504721   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.504800   12265 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0916 10:22:32.504813   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0916 10:22:32.504828   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.504844   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:32.507073   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35053
	I0916 10:22:32.507489   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.507971   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.507994   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.508014   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37447
	I0916 10:22:32.508383   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.508455   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44827
	I0916 10:22:32.508996   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.509012   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.509054   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.509082   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.509517   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.509552   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.509551   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.509573   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.509882   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.510086   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.510151   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.510169   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.510570   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.510576   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.510696   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.510739   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.510822   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.510947   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	W0916 10:22:32.511685   12265 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:43352->192.168.39.72:22: read: connection reset by peer
	I0916 10:22:32.511711   12265 retry.go:31] will retry after 323.390168ms: ssh: handshake failed: read tcp 192.168.39.1:43352->192.168.39.72:22: read: connection reset by peer
	I0916 10:22:32.513110   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.513548   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.515216   12265 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0916 10:22:32.516467   12265 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0916 10:22:32.517228   12265 out.go:177]   - Using image docker.io/busybox:stable
	I0916 10:22:32.518463   12265 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0916 10:22:32.519691   12265 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0916 10:22:32.521193   12265 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0916 10:22:32.521287   12265 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 10:22:32.521309   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0916 10:22:32.521330   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.523957   12265 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0916 10:22:32.524563   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.524915   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.524939   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.525078   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.525271   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.525408   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.525548   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	W0916 10:22:32.526174   12265 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:43362->192.168.39.72:22: read: connection reset by peer
	I0916 10:22:32.526199   12265 retry.go:31] will retry after 208.869548ms: ssh: handshake failed: read tcp 192.168.39.1:43362->192.168.39.72:22: read: connection reset by peer
	I0916 10:22:32.526327   12265 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0916 10:22:32.527568   12265 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0916 10:22:32.528811   12265 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0916 10:22:32.530140   12265 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0916 10:22:32.530154   12265 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0916 10:22:32.530169   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.533281   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.533666   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.533688   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.533886   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.534072   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.534227   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.534367   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:32.700911   12265 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:22:32.700984   12265 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 10:22:32.785482   12265 node_ready.go:35] waiting up to 6m0s for node "addons-001438" to be "Ready" ...
	I0916 10:22:32.822842   12265 node_ready.go:49] node "addons-001438" has status "Ready":"True"
	I0916 10:22:32.822881   12265 node_ready.go:38] duration metric: took 37.361645ms for node "addons-001438" to be "Ready" ...
	I0916 10:22:32.822895   12265 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:22:32.861506   12265 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0916 10:22:32.861543   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0916 10:22:32.862634   12265 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-001438" in "kube-system" namespace to be "Ready" ...
	I0916 10:22:32.929832   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 10:22:32.943014   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:22:32.952437   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 10:22:32.991347   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0916 10:22:32.995067   12265 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0916 10:22:32.995096   12265 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0916 10:22:33.036627   12265 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0916 10:22:33.036657   12265 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0916 10:22:33.036890   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0916 10:22:33.060821   12265 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0916 10:22:33.060843   12265 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0916 10:22:33.069120   12265 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0916 10:22:33.069156   12265 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0916 10:22:33.070018   12265 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0916 10:22:33.070038   12265 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0916 10:22:33.073512   12265 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0916 10:22:33.073535   12265 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0916 10:22:33.137058   12265 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0916 10:22:33.137088   12265 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0916 10:22:33.226855   12265 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0916 10:22:33.226884   12265 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0916 10:22:33.270492   12265 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0916 10:22:33.270513   12265 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0916 10:22:33.316169   12265 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0916 10:22:33.316195   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0916 10:22:33.316355   12265 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0916 10:22:33.316373   12265 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0916 10:22:33.316509   12265 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0916 10:22:33.316522   12265 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0916 10:22:33.327110   12265 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0916 10:22:33.327126   12265 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0916 10:22:33.354597   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0916 10:22:33.420390   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 10:22:33.435680   12265 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0916 10:22:33.435717   12265 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0916 10:22:33.439954   12265 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0916 10:22:33.439978   12265 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0916 10:22:33.444981   12265 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 10:22:33.445002   12265 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0916 10:22:33.522524   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0916 10:22:33.536060   12265 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0916 10:22:33.536089   12265 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0916 10:22:33.569830   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 10:22:33.590335   12265 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0916 10:22:33.590366   12265 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0916 10:22:33.601121   12265 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0916 10:22:33.601154   12265 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0916 10:22:33.623197   12265 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:22:33.623219   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0916 10:22:33.629904   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0916 10:22:33.693404   12265 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0916 10:22:33.693424   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0916 10:22:33.747802   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:22:33.761431   12265 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0916 10:22:33.761461   12265 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0916 10:22:33.774811   12265 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0916 10:22:33.774845   12265 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0916 10:22:33.825893   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0916 10:22:33.895859   12265 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0916 10:22:33.895893   12265 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0916 10:22:34.018321   12265 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0916 10:22:34.018345   12265 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0916 10:22:34.260751   12265 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0916 10:22:34.260776   12265 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0916 10:22:34.288705   12265 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0916 10:22:34.288733   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0916 10:22:34.575904   12265 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0916 10:22:34.575932   12265 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0916 10:22:34.578707   12265 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0916 10:22:34.578728   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0916 10:22:34.872174   12265 pod_ready.go:103] pod "etcd-addons-001438" in "kube-system" namespace has status "Ready":"False"
	I0916 10:22:35.002110   12265 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0916 10:22:35.002133   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0916 10:22:35.053333   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0916 10:22:35.173148   12265 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.47211504s)
	I0916 10:22:35.173178   12265 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0916 10:22:35.173148   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.243289168s)
	I0916 10:22:35.173338   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:35.173355   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:35.173706   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:35.173723   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:35.173737   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:35.173751   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:35.173762   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:35.174037   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:35.174053   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:35.219712   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:35.219745   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:35.220033   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:35.220084   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:35.326225   12265 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0916 10:22:35.326245   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0916 10:22:35.667079   12265 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 10:22:35.667102   12265 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0916 10:22:35.677467   12265 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-001438" context rescaled to 1 replicas
	I0916 10:22:36.005922   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 10:22:36.880549   12265 pod_ready.go:103] pod "etcd-addons-001438" in "kube-system" namespace has status "Ready":"False"
	I0916 10:22:37.248962   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.296492058s)
	I0916 10:22:37.249022   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:37.249036   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:37.249050   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.306004364s)
	I0916 10:22:37.249050   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.257675255s)
	I0916 10:22:37.249138   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:37.249160   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:37.249084   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:37.249221   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:37.249330   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:37.249355   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:37.249374   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:37.249434   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:37.249460   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:37.249476   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:37.249440   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:37.249499   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:37.249529   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:37.249541   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:37.249485   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:37.249593   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:37.249655   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:37.249676   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:37.251028   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:37.251216   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:37.251214   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:37.251232   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:37.251278   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:37.251288   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:38.978538   12265 pod_ready.go:93] pod "etcd-addons-001438" in "kube-system" namespace has status "Ready":"True"
	I0916 10:22:38.978561   12265 pod_ready.go:82] duration metric: took 6.115904528s for pod "etcd-addons-001438" in "kube-system" namespace to be "Ready" ...
	I0916 10:22:38.978572   12265 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-001438" in "kube-system" namespace to be "Ready" ...
	I0916 10:22:39.179661   12265 pod_ready.go:93] pod "kube-apiserver-addons-001438" in "kube-system" namespace has status "Ready":"True"
	I0916 10:22:39.179691   12265 pod_ready.go:82] duration metric: took 201.112317ms for pod "kube-apiserver-addons-001438" in "kube-system" namespace to be "Ready" ...
	I0916 10:22:39.179705   12265 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-001438" in "kube-system" namespace to be "Ready" ...
	I0916 10:22:39.377607   12265 pod_ready.go:93] pod "kube-controller-manager-addons-001438" in "kube-system" namespace has status "Ready":"True"
	I0916 10:22:39.377640   12265 pod_ready.go:82] duration metric: took 197.926831ms for pod "kube-controller-manager-addons-001438" in "kube-system" namespace to be "Ready" ...
	I0916 10:22:39.377656   12265 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-66flj" in "kube-system" namespace to be "Ready" ...
	I0916 10:22:39.509747   12265 pod_ready.go:93] pod "kube-proxy-66flj" in "kube-system" namespace has status "Ready":"True"
	I0916 10:22:39.509775   12265 pod_ready.go:82] duration metric: took 132.110984ms for pod "kube-proxy-66flj" in "kube-system" namespace to be "Ready" ...
	I0916 10:22:39.509789   12265 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-001438" in "kube-system" namespace to be "Ready" ...
	I0916 10:22:39.633441   12265 pod_ready.go:93] pod "kube-scheduler-addons-001438" in "kube-system" namespace has status "Ready":"True"
	I0916 10:22:39.633475   12265 pod_ready.go:82] duration metric: took 123.676997ms for pod "kube-scheduler-addons-001438" in "kube-system" namespace to be "Ready" ...
	I0916 10:22:39.633487   12265 pod_ready.go:39] duration metric: took 6.810577473s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
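
The pod_ready.go waits above amount to polling each control-plane pod until its Ready condition turns True. Below is a minimal client-go sketch of that check, assuming a clientset built from the /var/lib/minikube/kubeconfig path shown in the log; it is illustrative only, not minikube's actual pod_ready.go.

// Sketch: poll a named pod until its Ready condition is True or a timeout expires.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				// Ready flips to True once all containers pass their readiness checks.
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second) // keep polling on NotReady or transient errors
	}
	return fmt.Errorf("pod %s/%s not Ready after %s", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitPodReady(context.Background(), cs, "kube-system", "etcd-addons-001438", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}
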
	I0916 10:22:39.633508   12265 api_server.go:52] waiting for apiserver process to appear ...
	I0916 10:22:39.633572   12265 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:22:39.633966   12265 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0916 10:22:39.634003   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:39.637511   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:39.638022   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:39.638050   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:39.638265   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:39.638449   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:39.638594   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:39.638741   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:40.248183   12265 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0916 10:22:40.342621   12265 addons.go:234] Setting addon gcp-auth=true in "addons-001438"
	I0916 10:22:40.342682   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:40.343054   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:40.343105   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:40.358807   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35001
	I0916 10:22:40.359276   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:40.359793   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:40.359818   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:40.360152   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:40.360750   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:40.360794   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:40.375531   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39519
	I0916 10:22:40.375999   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:40.376410   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:40.376431   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:40.376712   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:40.376880   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:40.378466   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:40.378706   12265 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0916 10:22:40.378736   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:40.381488   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:40.381978   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:40.381997   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:40.382162   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:40.382374   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:40.382527   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:40.382728   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:41.185716   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.148787276s)
	I0916 10:22:41.185775   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.185787   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.185792   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.831162948s)
	I0916 10:22:41.185821   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.185842   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.185899   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.76548291s)
	I0916 10:22:41.185927   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.185929   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.663383888s)
	I0916 10:22:41.185940   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.185948   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.185957   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.186031   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.616165984s)
	I0916 10:22:41.186072   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.186084   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.186162   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.55623363s)
	I0916 10:22:41.186179   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.186188   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.186223   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:41.186233   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.186246   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.186249   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.186259   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.186272   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.186279   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.186259   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.186321   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.438489786s)
	W0916 10:22:41.186349   12265 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0916 10:22:41.186370   12265 retry.go:31] will retry after 282.502814ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
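
The apply above failed only because the VolumeSnapshotClass CRD was not yet established when the snapshot class itself was submitted, so the retry.go entry schedules a re-run after a short backoff; a later attempt succeeds once the API server starts serving the new kind. A hedged sketch of that retry-with-backoff pattern follows (generic, not minikube's retry.go; it assumes kubectl is on PATH).

// Sketch: re-run a kubectl apply with doubling backoff until it succeeds.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func retryApply(args []string, attempts int, initialDelay time.Duration) error {
	delay := initialDelay
	var lastErr error
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			return nil
		}
		// CRD registration is eventually consistent; wait and try again.
		lastErr = fmt.Errorf("attempt %d failed: %v\n%s", i+1, err, out)
		time.Sleep(delay)
		delay *= 2
	}
	return lastErr
}

func main() {
	err := retryApply([]string{"apply", "-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"}, 5, 300*time.Millisecond)
	if err != nil {
		fmt.Println(err)
	}
}
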
	I0916 10:22:41.186323   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.186452   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.360528333s)
	I0916 10:22:41.186474   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.186483   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.186530   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:41.186552   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:41.186580   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:41.186592   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.133220852s)
	I0916 10:22:41.186602   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.186608   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.186609   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.186627   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.186684   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.186691   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.186698   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.186704   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.186797   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:41.186819   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.186826   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.186833   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.186851   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:41.186871   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.186884   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.186893   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.186901   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.186907   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.186936   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.186943   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.186990   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.186999   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.187006   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.187013   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.187860   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:41.187892   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.187899   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.187906   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.187912   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.188173   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:41.188191   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.188200   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.188204   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.188209   12265 addons.go:475] Verifying addon metrics-server=true in "addons-001438"
	I0916 10:22:41.188211   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.188241   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.188250   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.188259   12265 addons.go:475] Verifying addon ingress=true in "addons-001438"
	I0916 10:22:41.190004   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:41.190036   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.190042   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.190099   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:41.190137   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:41.190141   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.190152   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.190155   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.190159   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.190162   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.190167   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.190170   12265 addons.go:475] Verifying addon registry=true in "addons-001438"
	I0916 10:22:41.190534   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:41.190570   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.190579   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.191944   12265 out.go:177] * Verifying registry addon...
	I0916 10:22:41.191953   12265 out.go:177] * Verifying ingress addon...
	I0916 10:22:41.192858   12265 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-001438 service yakd-dashboard -n yakd-dashboard
	
	I0916 10:22:41.193752   12265 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0916 10:22:41.193752   12265 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0916 10:22:41.245022   12265 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0916 10:22:41.245042   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:41.245048   12265 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0916 10:22:41.245062   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
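
Each kapi.go:96 line above and below is one iteration of a loop that lists pods by label selector and checks whether any has left Pending. An illustrative client-go version of a single iteration, reusing the selector and kubeconfig path shown in the log (a sketch, not kapi.go itself):

// Sketch: list pods matching a label selector and report each pod's phase.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
		metav1.ListOptions{LabelSelector: "kubernetes.io/minikube-addons=registry"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		// Phase moves Pending -> Running once the pod is scheduled and its containers start.
		fmt.Printf("%s: %s\n", p.Name, p.Status.Phase)
	}
}
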
	I0916 10:22:41.270906   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.270924   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.271190   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.271210   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.469044   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:22:41.699366   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:41.699576   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:42.200823   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:42.201220   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:42.707853   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:42.708238   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:43.062276   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.056308906s)
	I0916 10:22:43.062328   12265 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.428733709s)
	I0916 10:22:43.062359   12265 api_server.go:72] duration metric: took 10.72580389s to wait for apiserver process to appear ...
	I0916 10:22:43.062372   12265 api_server.go:88] waiting for apiserver healthz status ...
	I0916 10:22:43.062397   12265 api_server.go:253] Checking apiserver healthz at https://192.168.39.72:8443/healthz ...
	I0916 10:22:43.062411   12265 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.683683571s)
	I0916 10:22:43.062334   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:43.062455   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:43.062799   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:43.062819   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:43.062830   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:43.062838   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:43.062846   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:43.063060   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:43.063085   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:43.063094   12265 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-001438"
	I0916 10:22:43.064955   12265 out.go:177] * Verifying csi-hostpath-driver addon...
	I0916 10:22:43.065015   12265 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 10:22:43.066605   12265 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0916 10:22:43.067509   12265 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0916 10:22:43.067847   12265 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0916 10:22:43.067859   12265 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0916 10:22:43.093271   12265 api_server.go:279] https://192.168.39.72:8443/healthz returned 200:
	ok
	I0916 10:22:43.093820   12265 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0916 10:22:43.093839   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:43.095011   12265 api_server.go:141] control plane version: v1.31.1
	I0916 10:22:43.095033   12265 api_server.go:131] duration metric: took 32.653755ms to wait for apiserver health ...
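
The healthz wait above polls https://192.168.39.72:8443/healthz until it answers 200 with the body "ok". A small Go sketch of such a probe; TLS verification is skipped here only because this sketch does not load the cluster CA (an assumption made for brevity).

// Sketch: probe the apiserver healthz endpoint and report whether it is healthy.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func healthz(url string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver returns HTTP 200 with the literal body "ok".
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
	ok, err := healthz("https://192.168.39.72:8443/healthz")
	fmt.Println(ok, err)
}
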
	I0916 10:22:43.095043   12265 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 10:22:43.123828   12265 system_pods.go:59] 19 kube-system pods found
	I0916 10:22:43.123858   12265 system_pods.go:61] "coredns-7c65d6cfc9-j5ndn" [207f35d6-991e-4f00-8881-a877648e3c38] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0916 10:22:43.123864   12265 system_pods.go:61] "coredns-7c65d6cfc9-pzm59" [f910982f-9f91-4da6-ba1d-d7eb1a992baa] Running
	I0916 10:22:43.123871   12265 system_pods.go:61] "csi-hostpath-attacher-0" [15e8a432-87ee-461f-96ce-576b2587d960] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0916 10:22:43.123876   12265 system_pods.go:61] "csi-hostpath-resizer-0" [db26d555-4e0f-4738-bd80-a27dc57d7534] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0916 10:22:43.123883   12265 system_pods.go:61] "csi-hostpathplugin-xgk62" [dd216434-c2ed-4884-92ea-f80bec8e2fcc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0916 10:22:43.123886   12265 system_pods.go:61] "etcd-addons-001438" [5c7e7021-4329-43f8-90cc-196afcb3b9f5] Running
	I0916 10:22:43.123903   12265 system_pods.go:61] "kube-apiserver-addons-001438" [b8c3f368-41ad-4840-aa92-014d25030925] Running
	I0916 10:22:43.123906   12265 system_pods.go:61] "kube-controller-manager-addons-001438" [9606f8aa-be05-4d1e-b5c9-9e625663d5de] Running
	I0916 10:22:43.123913   12265 system_pods.go:61] "kube-ingress-dns-minikube" [10ccbaa1-333f-4586-a1d5-dc73421e2bd1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0916 10:22:43.123917   12265 system_pods.go:61] "kube-proxy-66flj" [56e16daa-1626-4b83-a183-7b9ad90ea2d6] Running
	I0916 10:22:43.123923   12265 system_pods.go:61] "kube-scheduler-addons-001438" [a9909fcc-06cd-4e4e-b6be-d0a54a31df94] Running
	I0916 10:22:43.123928   12265 system_pods.go:61] "metrics-server-84c5f94fbc-9hj9f" [76382ab7-9b7a-4bd6-b19c-7a77ba051f1d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 10:22:43.123935   12265 system_pods.go:61] "nvidia-device-plugin-daemonset-j6n9b" [83260537-f74d-40a8-bcbc-db785a97aac8] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0916 10:22:43.123943   12265 system_pods.go:61] "registry-66c9cd494c-jq22w" [04e85c00-e6fb-4eee-96aa-273a4f6f273f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 10:22:43.123948   12265 system_pods.go:61] "registry-proxy-kk7lc" [2f0e1170-c654-4939-91ca-cd5b2bf6ae2a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 10:22:43.123955   12265 system_pods.go:61] "snapshot-controller-56fcc65765-8nq94" [7b65ff07-8e47-4c5a-883c-f6470e930f61] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 10:22:43.123960   12265 system_pods.go:61] "snapshot-controller-56fcc65765-pv2sr" [85f5bbdb-96af-4f7d-aef3-644db7166242] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 10:22:43.123967   12265 system_pods.go:61] "storage-provisioner" [c435c6db-b60d-4298-9687-bb885202e358] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0916 10:22:43.123972   12265 system_pods.go:61] "tiller-deploy-b48cc5f79-b76fb" [a96b112c-4171-4416-9e14-ac1f69fd033e] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0916 10:22:43.123980   12265 system_pods.go:74] duration metric: took 28.931422ms to wait for pod list to return data ...
	I0916 10:22:43.123988   12265 default_sa.go:34] waiting for default service account to be created ...
	I0916 10:22:43.137057   12265 default_sa.go:45] found service account: "default"
	I0916 10:22:43.137084   12265 default_sa.go:55] duration metric: took 13.088793ms for default service account to be created ...
	I0916 10:22:43.137095   12265 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 10:22:43.166020   12265 system_pods.go:86] 19 kube-system pods found
	I0916 10:22:43.166054   12265 system_pods.go:89] "coredns-7c65d6cfc9-j5ndn" [207f35d6-991e-4f00-8881-a877648e3c38] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0916 10:22:43.166063   12265 system_pods.go:89] "coredns-7c65d6cfc9-pzm59" [f910982f-9f91-4da6-ba1d-d7eb1a992baa] Running
	I0916 10:22:43.166075   12265 system_pods.go:89] "csi-hostpath-attacher-0" [15e8a432-87ee-461f-96ce-576b2587d960] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0916 10:22:43.166088   12265 system_pods.go:89] "csi-hostpath-resizer-0" [db26d555-4e0f-4738-bd80-a27dc57d7534] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0916 10:22:43.166100   12265 system_pods.go:89] "csi-hostpathplugin-xgk62" [dd216434-c2ed-4884-92ea-f80bec8e2fcc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0916 10:22:43.166108   12265 system_pods.go:89] "etcd-addons-001438" [5c7e7021-4329-43f8-90cc-196afcb3b9f5] Running
	I0916 10:22:43.166118   12265 system_pods.go:89] "kube-apiserver-addons-001438" [b8c3f368-41ad-4840-aa92-014d25030925] Running
	I0916 10:22:43.166126   12265 system_pods.go:89] "kube-controller-manager-addons-001438" [9606f8aa-be05-4d1e-b5c9-9e625663d5de] Running
	I0916 10:22:43.166136   12265 system_pods.go:89] "kube-ingress-dns-minikube" [10ccbaa1-333f-4586-a1d5-dc73421e2bd1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0916 10:22:43.166145   12265 system_pods.go:89] "kube-proxy-66flj" [56e16daa-1626-4b83-a183-7b9ad90ea2d6] Running
	I0916 10:22:43.166154   12265 system_pods.go:89] "kube-scheduler-addons-001438" [a9909fcc-06cd-4e4e-b6be-d0a54a31df94] Running
	I0916 10:22:43.166164   12265 system_pods.go:89] "metrics-server-84c5f94fbc-9hj9f" [76382ab7-9b7a-4bd6-b19c-7a77ba051f1d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 10:22:43.166171   12265 system_pods.go:89] "nvidia-device-plugin-daemonset-j6n9b" [83260537-f74d-40a8-bcbc-db785a97aac8] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0916 10:22:43.166178   12265 system_pods.go:89] "registry-66c9cd494c-jq22w" [04e85c00-e6fb-4eee-96aa-273a4f6f273f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 10:22:43.166183   12265 system_pods.go:89] "registry-proxy-kk7lc" [2f0e1170-c654-4939-91ca-cd5b2bf6ae2a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 10:22:43.166199   12265 system_pods.go:89] "snapshot-controller-56fcc65765-8nq94" [7b65ff07-8e47-4c5a-883c-f6470e930f61] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 10:22:43.166207   12265 system_pods.go:89] "snapshot-controller-56fcc65765-pv2sr" [85f5bbdb-96af-4f7d-aef3-644db7166242] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 10:22:43.166217   12265 system_pods.go:89] "storage-provisioner" [c435c6db-b60d-4298-9687-bb885202e358] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0916 10:22:43.166224   12265 system_pods.go:89] "tiller-deploy-b48cc5f79-b76fb" [a96b112c-4171-4416-9e14-ac1f69fd033e] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0916 10:22:43.166231   12265 system_pods.go:126] duration metric: took 29.130167ms to wait for k8s-apps to be running ...
	I0916 10:22:43.166241   12265 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 10:22:43.166284   12265 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:22:43.202957   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:43.204822   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:43.205240   12265 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0916 10:22:43.205259   12265 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0916 10:22:43.339484   12265 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 10:22:43.339511   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0916 10:22:43.533725   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 10:22:43.574829   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:43.701096   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:43.702516   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:44.074326   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:44.199962   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:44.201086   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:44.420432   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.951340242s)
	I0916 10:22:44.420484   12265 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.25416987s)
	I0916 10:22:44.420496   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:44.420512   12265 system_svc.go:56] duration metric: took 1.254267923s WaitForService to wait for kubelet
	I0916 10:22:44.420530   12265 kubeadm.go:582] duration metric: took 12.083973387s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:22:44.420555   12265 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:22:44.420516   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:44.420960   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:44.420998   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:44.421011   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:44.421019   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:44.421041   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:44.421242   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:44.421289   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:44.421306   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:44.432407   12265 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0916 10:22:44.432433   12265 node_conditions.go:123] node cpu capacity is 2
	I0916 10:22:44.432443   12265 node_conditions.go:105] duration metric: took 11.883273ms to run NodePressure ...
	I0916 10:22:44.432454   12265 start.go:241] waiting for startup goroutines ...
	I0916 10:22:44.573423   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:44.701968   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:44.702167   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:45.087788   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:45.175284   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.64151941s)
	I0916 10:22:45.175340   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:45.175356   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:45.175638   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:45.175658   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:45.175667   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:45.175675   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:45.175907   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:45.175959   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:45.175966   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:45.176874   12265 addons.go:475] Verifying addon gcp-auth=true in "addons-001438"
	I0916 10:22:45.179151   12265 out.go:177] * Verifying gcp-auth addon...
	I0916 10:22:45.181042   12265 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0916 10:22:45.204765   12265 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0916 10:22:45.204788   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:45.240576   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:45.244884   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:45.572763   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:45.684678   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:45.699294   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:45.700332   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:46.071926   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:46.184345   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:46.198555   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:46.198584   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:46.572691   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:46.686213   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:46.698404   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:46.699290   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:47.073014   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:47.184892   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:47.199176   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:47.199412   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:47.573319   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:47.685117   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:47.698854   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:47.699042   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:48.080702   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:48.186042   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:48.198652   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:48.198985   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:48.572136   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:48.684922   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:48.698643   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:48.698805   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:49.072263   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:49.186126   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:49.198845   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:49.201291   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:49.571909   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:49.686134   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:49.699485   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:49.699837   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:50.072013   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:50.185475   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:50.198803   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:50.198988   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:50.572410   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:50.684716   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:50.698643   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:50.698842   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:51.072735   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:51.185327   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:51.198402   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:51.198563   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:51.574099   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:51.684301   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:51.698582   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:51.699135   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:52.073280   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:52.184410   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:52.197628   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:52.197951   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:52.573111   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:52.685463   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:52.698350   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:52.698445   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:53.073318   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:53.185032   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:53.198371   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:53.198982   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:53.572652   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:53.684593   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:53.698434   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:53.699099   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:54.071466   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:54.184978   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:54.199125   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:54.199475   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:54.571905   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:54.684904   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:54.699578   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:54.700868   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:55.072026   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:55.186696   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:55.199421   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:55.200454   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:55.811368   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:55.811883   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:55.811882   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:55.812044   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:56.073000   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:56.184284   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:56.197552   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:56.199279   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:56.571945   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:56.684725   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:56.698164   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:56.698871   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:57.078099   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:57.187093   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:57.198266   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:57.198788   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:57.572608   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:57.685182   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:57.698112   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:57.698451   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:58.072438   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:58.184226   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:58.197871   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:58.199176   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:58.573655   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:58.688012   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:58.698890   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:58.699498   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:59.072908   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:59.184255   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:59.197825   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:59.198094   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:59.572578   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:59.685886   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:59.699165   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:59.699539   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:00.072677   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:00.185334   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:00.198436   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:00.199279   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:00.572620   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:00.684676   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:00.698184   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:00.698937   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:01.368315   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:01.368647   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:01.368662   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:01.369057   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:01.577610   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:01.685792   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:01.699073   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:01.700679   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:02.073297   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:02.184780   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:02.198423   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:02.198632   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:02.573860   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:02.688317   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:02.699137   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:02.699189   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:03.073268   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:03.185286   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:03.197706   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:03.199446   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:03.575016   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:03.688681   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:03.697852   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:03.699284   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:04.072561   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:04.184550   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:04.198183   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:04.198692   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:04.573058   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:04.684410   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:04.698448   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:04.699101   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:05.073082   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:05.184610   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:05.198422   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:05.199510   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:05.572901   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:05.685013   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:05.698419   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:05.699052   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:06.072680   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:06.184899   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:06.199400   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:06.199960   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:06.573550   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:06.685328   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:06.698176   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:06.698429   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:07.386744   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:07.389015   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:07.389529   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:07.391740   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:07.572440   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:07.685517   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:07.699276   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:07.699495   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:08.073598   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:08.185305   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:08.198307   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:08.198701   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:08.572936   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:08.685042   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:08.697898   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:08.699045   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:09.073524   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:09.185170   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:09.197444   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:09.198282   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:09.571947   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:09.685269   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:09.700263   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:09.700289   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:10.072367   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:10.184140   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:10.198279   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:10.198501   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:10.571995   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:10.684443   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:10.698621   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:10.699212   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:11.072647   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:11.184997   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:11.198336   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:11.199743   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:11.572138   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:11.684642   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:11.697735   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:11.698012   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:12.072087   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:12.184730   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:12.198825   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:12.199117   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:12.574471   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:12.685221   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:12.697610   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:12.697875   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:13.074276   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:13.184610   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:13.200283   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:13.200511   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:13.572643   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:13.687229   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:13.700375   12265 kapi.go:107] duration metric: took 32.506622173s to wait for kubernetes.io/minikube-addons=registry ...
	I0916 10:23:13.700476   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:14.073345   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:14.185359   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:14.197920   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:14.572573   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:14.714386   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:14.714848   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:15.072480   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:15.184006   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:15.198907   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:15.571536   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:15.686990   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:15.698314   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:16.072850   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:16.397705   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:16.398059   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:16.571699   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:16.687893   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:16.701822   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:17.073078   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:17.185433   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:17.202339   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:17.572915   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:17.684909   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:17.698215   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:18.071875   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:18.185548   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:18.198104   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:18.572180   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:18.684990   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:18.698912   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:19.072105   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:19.184341   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:19.197977   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:19.571740   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:19.685205   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:19.698214   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:20.071811   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:20.184927   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:20.198225   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:20.572184   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:20.684471   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:20.697550   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:21.072526   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:21.185439   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:21.198086   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:21.573843   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:21.684530   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:21.699027   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:22.071583   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:22.185751   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:22.201330   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:22.574078   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:22.688728   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:22.700516   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:23.072848   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:23.184719   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:23.197893   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:23.571975   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:23.684741   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:23.697845   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:24.071885   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:24.199755   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:24.209742   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:24.572903   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:24.684095   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:24.697255   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:25.072405   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:25.185096   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:25.197451   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:25.572250   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:25.685603   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:25.699421   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:26.072277   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:26.184610   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:26.197948   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:26.572954   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:26.684305   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:26.698018   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:27.072121   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:27.186632   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:27.198260   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:27.571710   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:27.685260   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:27.697569   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:28.072712   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:28.185404   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:28.197839   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:28.572506   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:28.685719   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:28.698390   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:29.073440   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:29.185211   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:29.198135   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:29.572871   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:29.684795   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:29.698442   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:30.074307   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:30.184391   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:30.198195   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:30.571684   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:30.686595   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:30.697829   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:31.072882   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:31.184355   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:31.197913   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:31.572796   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:31.685340   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:31.697838   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:32.072358   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:32.185072   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:32.198841   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:32.572260   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:32.685619   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:32.697923   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:33.072329   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:33.184923   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:33.198461   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:33.572531   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:33.684886   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:33.698221   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:34.071922   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:34.184896   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:34.198347   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:34.572508   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:34.685674   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:34.698172   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:35.072040   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:35.184401   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:35.198192   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:35.571685   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:35.684934   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:35.699442   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:36.072917   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:36.184575   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:36.197989   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:36.572782   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:36.685224   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:36.697515   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:37.073347   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:37.184633   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:37.198109   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:37.572239   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:37.684842   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:37.698412   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:38.072639   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:38.184377   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:38.197723   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:38.572964   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:38.684944   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:38.698216   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:39.071865   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:39.184322   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:39.197583   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:39.572728   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:39.685221   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:39.697663   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:40.073346   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:40.184763   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:40.198338   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:40.572748   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:40.688546   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:40.698337   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:41.072528   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:41.184742   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:41.197991   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:41.572832   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:41.685275   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:41.697957   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:42.072948   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:42.185237   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:42.198222   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:42.572150   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:42.685770   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:42.698107   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:43.072508   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:43.184255   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:43.198122   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:43.571791   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:43.685476   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:43.698021   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:44.072455   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:44.184970   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:44.198450   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:44.572653   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:44.685519   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:44.698088   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:45.073394   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:45.184852   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:45.198932   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:45.572905   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:45.685024   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:45.699000   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:46.072804   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:46.185568   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:46.198040   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:46.571961   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:46.684879   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:46.698104   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:47.071779   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:47.184794   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:47.198431   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:47.572786   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:47.685048   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:47.701841   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:48.072550   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:48.184915   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:48.198725   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:48.572850   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:48.684405   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:48.697953   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:49.075719   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:49.185584   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:49.198034   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:49.572642   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:49.685074   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:49.697421   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:50.072216   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:50.184736   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:50.198614   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:50.572675   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:50.685508   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:50.697632   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:51.072878   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:51.185267   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:51.197508   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:51.572653   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:51.684680   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:51.698038   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:52.072225   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:52.184256   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:52.197802   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:52.572573   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:52.685760   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:52.699050   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:53.072698   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:53.185139   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:53.197417   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:53.572526   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:53.684976   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:53.698186   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:54.071987   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:54.184373   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:54.197898   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:54.573326   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:54.685154   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:54.699596   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:55.071975   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:55.184301   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:55.197532   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:55.573068   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:55.684535   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:55.698262   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:56.071830   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:56.185558   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:56.198149   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:56.571905   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:56.684135   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:56.697614   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:57.109030   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:57.216004   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:57.216775   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:57.572732   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:57.684811   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:57.697899   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:58.071691   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:58.184970   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:58.198291   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:58.572185   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:58.685478   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:58.698240   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:59.072727   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:59.185578   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:59.207485   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:59.572098   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:59.684402   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:59.698565   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:00.072447   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:00.192764   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:00.206954   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:00.573224   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:00.685091   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:00.697997   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:01.071906   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:01.184428   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:01.197550   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:01.572498   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:01.685525   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:01.702647   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:02.072504   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:02.185219   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:02.197512   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:02.573858   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:02.685938   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:02.699556   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:03.080160   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:03.188056   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:03.197615   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:03.575213   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:03.685187   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:03.697887   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:04.072585   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:04.185321   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:04.197777   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:04.577876   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:04.685259   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:04.698763   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:05.073356   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:05.184332   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:05.197676   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:05.574632   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:05.705119   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:05.705797   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:06.073702   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:06.190460   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:06.199492   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:06.573521   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:06.685468   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:06.697671   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:07.074427   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:07.211989   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:07.214167   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:07.573479   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:07.684919   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:07.698441   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:08.072769   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:08.184827   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:08.198132   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:08.573401   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:08.685277   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:08.698457   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:09.072421   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:09.184959   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:09.198365   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:09.572446   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:09.685036   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:09.697443   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:10.072489   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:10.185143   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:10.197711   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:10.572704   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:10.685206   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:10.697839   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:11.073656   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:11.185083   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:11.197443   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:11.572739   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:11.685343   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:11.697853   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:12.072697   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:12.185630   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:12.197928   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:12.572344   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:12.684814   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:12.698225   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:13.073324   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:13.185254   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:13.198404   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:13.571987   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:13.684709   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:13.698073   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:14.072174   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:14.184688   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:14.198078   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:14.571798   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:14.685576   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:14.698188   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:15.072810   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:15.184683   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:15.198053   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:15.574408   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:15.684741   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:15.698415   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:16.072047   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:16.185423   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:16.198010   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:16.572968   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:16.684217   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:16.697876   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:17.073276   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:17.185372   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:17.197865   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:17.572327   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:17.684929   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:17.698146   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:18.073068   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:18.185261   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:18.197596   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:18.572959   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:18.684379   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:18.697450   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:19.072646   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:19.184810   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:19.198157   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:19.572098   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:19.684635   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:19.698108   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:20.073055   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:20.185325   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:20.197893   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:20.572951   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:20.684268   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:20.697542   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:21.073300   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:21.184458   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:21.198058   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:21.571882   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:21.684389   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:21.698491   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:22.072769   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:22.185150   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:22.198444   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:22.572557   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:22.686730   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:22.697987   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:23.072389   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:23.184902   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:23.198815   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:23.572090   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:23.684279   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:23.698304   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:24.072655   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:24.185118   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:24.197515   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:24.573029   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:24.684503   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:24.697942   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:25.073161   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:25.185394   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:25.197824   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:25.572789   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:25.685536   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:25.698429   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:26.072248   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:26.184713   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:26.198206   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:26.572681   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:26.685404   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:26.697732   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:27.073033   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:27.186532   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:27.197932   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:27.573166   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:27.684900   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:27.698494   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:28.072840   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:28.185112   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:28.199554   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:28.573533   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:28.685513   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:28.698631   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:29.073563   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:29.184668   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:29.198960   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:29.573373   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:29.684371   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:29.698226   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:30.072380   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:30.184889   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:30.198132   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:30.572427   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:30.685015   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:30.699053   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:31.073225   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:31.185241   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:31.198172   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:31.572019   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:31.685328   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:31.697511   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:32.072382   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:32.185154   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:32.198590   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:32.572333   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:32.688804   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:32.699195   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:33.072971   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:33.184395   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:33.197840   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:33.572457   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:33.684935   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:33.698247   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:34.072201   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:34.184817   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:34.198299   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:34.572603   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:34.684807   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:34.698764   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:35.079460   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:35.184783   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:35.198219   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:35.572155   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:35.684462   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:35.698249   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:36.071889   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:36.185035   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:36.198639   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:36.572607   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:36.684993   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:36.698317   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:37.073167   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:37.187630   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:37.197861   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:37.572959   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:37.684449   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:37.698084   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:38.072598   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:38.184553   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:38.198241   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:38.572543   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:38.685061   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:38.698066   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:39.072888   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:39.184279   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:39.198475   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:39.572908   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:39.684166   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:39.699214   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:40.071396   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:40.185054   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:40.197274   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:40.571831   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:40.683617   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:40.698304   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:41.073753   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:41.184818   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:41.198303   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:41.572754   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:41.685078   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:41.697801   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:42.074144   12265 kapi.go:107] duration metric: took 1m59.00663205s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0916 10:24:42.185287   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:42.197975   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:42.685826   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:42.698484   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:43.185521   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:43.197894   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:43.684695   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:43.698444   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:44.184270   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:44.198072   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:44.686127   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:44.697760   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:45.184583   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:45.197892   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:45.685284   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:45.698273   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:46.184284   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:46.197597   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:46.684852   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:46.698234   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:47.185674   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:47.197778   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:47.684803   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:47.698286   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:48.185195   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:48.197536   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:48.684936   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:48.698202   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:49.185940   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:49.198354   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:49.685628   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:49.698017   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:50.184172   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:50.197513   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:50.684563   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:50.699121   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:51.185458   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:51.197627   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:51.684548   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:51.697728   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:52.184587   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:52.198088   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:52.687284   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:52.697762   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:53.185441   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:53.197777   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:53.684856   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:53.698392   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:54.184966   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:54.198309   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:54.685059   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:54.697818   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:55.184799   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:55.199146   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:55.685287   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:55.697823   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:56.184982   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:56.198778   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:56.684629   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:56.698010   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:57.185306   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:57.197805   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:57.686354   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:57.697789   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:58.184048   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:58.198685   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:58.685283   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:58.697967   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:59.185357   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:59.198462   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:59.685857   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:59.698582   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:00.185027   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:00.199070   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:00.685248   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:00.697584   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:01.444242   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:01.542180   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:01.684941   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:01.698345   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:02.184494   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:02.199673   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:02.686844   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:02.701197   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:03.186108   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:03.200286   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:03.935418   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:03.936940   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:04.185837   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:04.198343   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:04.685229   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:04.697687   12265 kapi.go:107] duration metric: took 2m23.503933898s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0916 10:25:05.184162   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:05.686162   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:06.184784   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:06.685596   12265 kapi.go:107] duration metric: took 2m21.504550895s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0916 10:25:06.687290   12265 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-001438 cluster.
	I0916 10:25:06.688726   12265 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0916 10:25:06.689940   12265 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0916 10:25:06.691195   12265 out.go:177] * Enabled addons: default-storageclass, nvidia-device-plugin, ingress-dns, storage-provisioner, cloud-spanner, metrics-server, inspektor-gadget, helm-tiller, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0916 10:25:06.692654   12265 addons.go:510] duration metric: took 2m34.356008246s for enable addons: enabled=[default-storageclass nvidia-device-plugin ingress-dns storage-provisioner cloud-spanner metrics-server inspektor-gadget helm-tiller yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0916 10:25:06.692692   12265 start.go:246] waiting for cluster config update ...
	I0916 10:25:06.692714   12265 start.go:255] writing updated cluster config ...
	I0916 10:25:06.692960   12265 ssh_runner.go:195] Run: rm -f paused
	I0916 10:25:06.701459   12265 out.go:177] * Done! kubectl is now configured to use "addons-001438" cluster and "default" namespace by default
	E0916 10:25:06.702711   12265 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
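
	(For reference, the gcp-auth hint a few lines above refers to a pod-level label. A minimal sketch of a pod manifest that opts out of credential mounting might look like the following; the label key comes from the log message itself, while the "true" value, pod name, and image are illustrative assumptions, not taken from this run.)

	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-auth-example        # hypothetical pod name
	  labels:
	    gcp-auth-skip-secret: "true"   # assumed value; label key is from the log hint above
	spec:
	  containers:
	  - name: app
	    image: busybox                 # illustrative image
	    command: ["sleep", "3600"]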
	
	
	==> CRI-O <==
	Sep 16 10:27:11 addons-001438 crio[662]: time="2024-09-16 10:27:11.933062702Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482431933035960,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:458879,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d5a532b7-6a1c-444f-bdf5-6e6bc7140085 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:27:11 addons-001438 crio[662]: time="2024-09-16 10:27:11.933611481Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f7a13902-8f54-48b1-937d-2a319ce6e3c2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:27:11 addons-001438 crio[662]: time="2024-09-16 10:27:11.933689982Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f7a13902-8f54-48b1-937d-2a319ce6e3c2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:27:11 addons-001438 crio[662]: time="2024-09-16 10:27:11.934644031Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c0c62d19fc341b10ebf89fe58a47a68881bae991a531961d93c1579a9a1948e7,PodSandboxId:81638f0641649ec0787ef703694bd258aa844caa3af58541d00d9715f3de0d35,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726482306176330528,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-jg5wz,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: dbfd65d1-83cc-4b88-b475-c822c7d77c41,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"con
tainerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d9f00ee52087036fff363066f2b9b7bd54f75155cc73bd306533e31b8e233b0,PodSandboxId:f0a70a6b5b4fa0e23e9c8885ae050483438fd567740a20c3b0d6ee2a4c749755,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1726482304086493301,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-jhd4w,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 59d22048-5f6a-4373-a1c6-05
c111450f4a,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a4ff4f2e6c350478e5755d91b20d919662c0ffa06fb39e2bd3bf8bb0f703a1d2,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1726482280953449774,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa45fa1d889cdca320c55e35a415c7211e36ba0837572343c71988deaa5cd3ca,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef0019
58d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1726482248718684378,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:112e37da6f1b0a97970a925ea58a6c98930b8e2763205c618d9b497089866b3f,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc4
16abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1726482247152202557,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcd9404de3e14cb23faa22a3b3e25edf2e86172ce56a3bd91b341e6072351d08,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256
:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1726482246122565989,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26165c7625a62b7701736de7efb40460b0b4376f8e96e3bb63d744179813ade0,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metad
ata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1726482244231428730,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35e24c1abefe708668b04ab175e1ce63dcba533760564485612dc5c532078e4c,PodSandboxId:bf02d50932f1
47d4c08135b62337a685daad58e7ee619fe769c71cf464997dc9,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1726482242446910373,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db26d555-4e0f-4738-bd80-a27dc57d7534,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5edaf3e2dd3d6ea0a0beb93da0237f66936603e62e6799eb9d40f4bfced4fdc,Pod
SandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1726482240462952497,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:b8ebd2f050729c02f7f87eeb785883cafeb3d9537560352ff7a58f50b5630b1c,PodSandboxId:f375334740e2f500ee463bae436969a7464e2ab3cd1aa52a70e3a8f54f2ea408,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1726482238605647796,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15e8a432-87ee-461f-96ce-576b2587d960,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d52d2269e100287dd000b3139d6f09e70acad7cbe081d5ba345ffadc774eb64,PodSandboxId:6fe91ac2288fee7b74e48dab9ebacc7e6f4a0864d1442788fa1bb356cd32d99c,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726482237161655062,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-rls9n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 239fb0bb-898b-4d40-a29f-b0f8f4f52620,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termi
nationGracePeriod: 30,},},&Container{Id:54c4347a1fc2bc58f92b11d3bd442850d2707411d3f5b974124a86f641439b9c,PodSandboxId:d66b1317412a725710e3a74d876e0614ef523665d5d35e69bed8cd20d3e4c267,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726482236992762987,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-dk6l8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 117d6d47-b187-4394-8213-f294b01f8f2d,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0bde3324c47da870cb9ea5d5e391100b947a5f5628087ca822f0b968d5d7d10,PodSandboxId:0eef20d1c681337d85c6cbf7dbb64029a954a6306f78cde4d651b31122cd7f87,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726482204191942200,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-pv2sr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85f5bbdb-96af-4f7d-aef3-644db7166242,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f786c20ceffe35012b49cdfbc150466ebf04c05bab8d9736cd3977b8a655c898,PodSandboxId:ec33782f42717a00b375787866eb14bf4c1a21698df7f4343ab08f7eaff4773a,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726482204058066688,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-8nq94,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b65ff07-8e47-4c5a-883c-f6470e930f61,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,i
o.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d997d75b48ee4660710828d3ee8c19e7e3e7c5b64f4d69cba04ce949ed38433e,PodSandboxId:173b48ab2ab7ffb543810df5a983063619ab4a5c57ad2582cb89cca04eabcc03,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1726482196486082416,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-rj67m,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 18c201dd-1709-499b-83f1-7f76075b6c19,},A
nnotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0024bbca27aaca342e5d8f82c4da128027c7976c1e62ed95ae9cf694c9f92bba,PodSandboxId:8bcb0a4a20a5a867b58a2f64346327b7ee4bbc21423c8ef47418372635518a90,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726482194268567519,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-9hj9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 76382ab7-9b7a-4bd6-b19c-7a77ba051f1d,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e13f898473193beaaa81c09bb22096af279dabe70c03270874a90b0b9cc83f62,PodSandboxId:c90a44c7edea8c5d35e974be23b2851515f7b830d58597d0ada22367c338e1ab,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5d78bb8f226e8d943746243233f733db4e80a8d6794f6d193b12b811bcb6cd34,State:CONTAINER_RUNNING,CreatedAt:1726482187766689704,Labels:map[string]string{io.kubernetes.contai
ner.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-769b77f747-58ll2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 505d8619-5fc1-4247-af75-f797558c3d45,},Annotations:map[string]string{io.kubernetes.container.hash: 6472789b,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8193aad1beb5ba639149913d04736ec496c655637790c2e682d4920170661edc,PodSandboxId:f1a3772ce5f7d59b92df6e29b5a34b5ae5a2567c67f359cff00143e415177028,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]str
ing{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1726482169760510821,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10ccbaa1-333f-4586-a1d5-dc73421e2bd1,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20d2f3360f32056e9d5320e2e27b14e89f34121e47fe3ef6dc68434fd12cbe4e,PodSandboxId:748d363148f6690bdd4347da94f06b750dcd9dfc4c9051ba1fd2b1168589dce8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e3
8f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726482159684718183,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c435c6db-b60d-4298-9687-bb885202e358,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63d270cbed8d9138d7dfb6c94fecf8a928065d08296ec4b210284e9c80e343ce,PodSandboxId:42b8586a7b29a8b8056b5615562d2ec92c1cdfbb2b00da19cfca2dc059f9128a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e95
6d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726482158120421871,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-j5ndn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 207f35d6-991e-4f00-8881-a877648e3c38,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60269ac0552c1882216cfc166f51b2cc05e0c33fccab3cf44f6
f6ae889b5377c,PodSandboxId:2bf9dc368debd2d2b877480619135ca9093c981fce759082ec24f6592fb1de92,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726482154187294521,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-66flj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56e16daa-1626-4b83-a183-7b9ad90ea2d6,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aabe5cb48f97ae6d5205b827466f0ed18fcf88945331a69a0f69c36fdc69b84,PodSandboxId:f7aeaa
11c7f4c7baee6c9befcf5929af3394ee8b7430c47ca172d58476ae75a5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726482142846546597,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-001438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be5914cb158890ae06c5bfa7a9a1647e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d34a4e3596c2106b54d8a84abfb811c307cb2971374422f5c532a60e0cde3a3,PodSandboxId:8a68216be6dee09413e6f31b3a7e5d5ca7545ffd899368560ace1
f4086222cc9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726482142845031916,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-001438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6b0151eec9d570509290399a84fe5e0,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a4816dc33e761b9fdbc5948f09898492ce9a2dc128c281cf5085d61d7f1b237,PodSandboxId:ec134844260ab148c1de608f261069c5f8c5a6b0d90
9d7bbb7bff552dad3e112,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726482142832730800,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-001438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72c370391c8ab866a62dac47d3033ef8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfff5b2d379857237c75eff7ae9c54b1eaa410285bebed27cc82511c761eff77,PodSandboxId:81f095a38dae111be7d7787f448562b27561a6d73c96775e60b52e2cb77e
1d9f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726482142844185577,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-001438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2bdc5cbd50bfb60b2aad0cb9a55828e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f7a13902-8f54-48b1-937d-2a319ce6e3c2 name=/runtime.v1.RuntimeService/ListContainers
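The entries above are CRI-O's debug log of incoming CRI gRPC calls (RuntimeService/Version, ImageService/ImageFsInfo, RuntimeService/ListContainers with no filter) captured while the report was gathering node state; the same ListContainers dump repeats below as the poll recurs. A minimal sketch of issuing the same queries by hand from the node, assuming the default CRI-O socket path /var/run/crio/crio.sock and that crictl is available in the guest (both are assumptions, not shown in this log):

    # open a shell on the node for this profile
    minikube ssh -p addons-001438
    # query the same CRI endpoints the log records
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version        # RuntimeService/Version
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo    # ImageService/ImageFsInfo
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a          # RuntimeService/ListContainers, unfiltered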
	Sep 16 10:27:11 addons-001438 crio[662]: time="2024-09-16 10:27:11.969997842Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b6fc328e-72b6-4130-babc-830889584e97 name=/runtime.v1.RuntimeService/Version
	Sep 16 10:27:11 addons-001438 crio[662]: time="2024-09-16 10:27:11.970085038Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b6fc328e-72b6-4130-babc-830889584e97 name=/runtime.v1.RuntimeService/Version
	Sep 16 10:27:11 addons-001438 crio[662]: time="2024-09-16 10:27:11.971412175Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=811aa725-bd87-48fc-839e-471881d6bf4f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:27:11 addons-001438 crio[662]: time="2024-09-16 10:27:11.972569669Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482431972543737,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:458879,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=811aa725-bd87-48fc-839e-471881d6bf4f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:27:11 addons-001438 crio[662]: time="2024-09-16 10:27:11.973151923Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=da81c83b-b79f-4fc2-ab2e-68d654d07b0e name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:27:11 addons-001438 crio[662]: time="2024-09-16 10:27:11.973223009Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=da81c83b-b79f-4fc2-ab2e-68d654d07b0e name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:27:11 addons-001438 crio[662]: time="2024-09-16 10:27:11.974774472Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c0c62d19fc341b10ebf89fe58a47a68881bae991a531961d93c1579a9a1948e7,PodSandboxId:81638f0641649ec0787ef703694bd258aa844caa3af58541d00d9715f3de0d35,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726482306176330528,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-jg5wz,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: dbfd65d1-83cc-4b88-b475-c822c7d77c41,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"con
tainerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d9f00ee52087036fff363066f2b9b7bd54f75155cc73bd306533e31b8e233b0,PodSandboxId:f0a70a6b5b4fa0e23e9c8885ae050483438fd567740a20c3b0d6ee2a4c749755,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1726482304086493301,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-jhd4w,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 59d22048-5f6a-4373-a1c6-05
c111450f4a,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a4ff4f2e6c350478e5755d91b20d919662c0ffa06fb39e2bd3bf8bb0f703a1d2,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1726482280953449774,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa45fa1d889cdca320c55e35a415c7211e36ba0837572343c71988deaa5cd3ca,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef0019
58d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1726482248718684378,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:112e37da6f1b0a97970a925ea58a6c98930b8e2763205c618d9b497089866b3f,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc4
16abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1726482247152202557,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcd9404de3e14cb23faa22a3b3e25edf2e86172ce56a3bd91b341e6072351d08,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256
:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1726482246122565989,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26165c7625a62b7701736de7efb40460b0b4376f8e96e3bb63d744179813ade0,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metad
ata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1726482244231428730,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35e24c1abefe708668b04ab175e1ce63dcba533760564485612dc5c532078e4c,PodSandboxId:bf02d50932f1
47d4c08135b62337a685daad58e7ee619fe769c71cf464997dc9,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1726482242446910373,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db26d555-4e0f-4738-bd80-a27dc57d7534,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5edaf3e2dd3d6ea0a0beb93da0237f66936603e62e6799eb9d40f4bfced4fdc,Pod
SandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1726482240462952497,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:b8ebd2f050729c02f7f87eeb785883cafeb3d9537560352ff7a58f50b5630b1c,PodSandboxId:f375334740e2f500ee463bae436969a7464e2ab3cd1aa52a70e3a8f54f2ea408,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1726482238605647796,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15e8a432-87ee-461f-96ce-576b2587d960,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d52d2269e100287dd000b3139d6f09e70acad7cbe081d5ba345ffadc774eb64,PodSandboxId:6fe91ac2288fee7b74e48dab9ebacc7e6f4a0864d1442788fa1bb356cd32d99c,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726482237161655062,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-rls9n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 239fb0bb-898b-4d40-a29f-b0f8f4f52620,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termi
nationGracePeriod: 30,},},&Container{Id:54c4347a1fc2bc58f92b11d3bd442850d2707411d3f5b974124a86f641439b9c,PodSandboxId:d66b1317412a725710e3a74d876e0614ef523665d5d35e69bed8cd20d3e4c267,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726482236992762987,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-dk6l8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 117d6d47-b187-4394-8213-f294b01f8f2d,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0bde3324c47da870cb9ea5d5e391100b947a5f5628087ca822f0b968d5d7d10,PodSandboxId:0eef20d1c681337d85c6cbf7dbb64029a954a6306f78cde4d651b31122cd7f87,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726482204191942200,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-pv2sr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85f5bbdb-96af-4f7d-aef3-644db7166242,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f786c20ceffe35012b49cdfbc150466ebf04c05bab8d9736cd3977b8a655c898,PodSandboxId:ec33782f42717a00b375787866eb14bf4c1a21698df7f4343ab08f7eaff4773a,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726482204058066688,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-8nq94,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b65ff07-8e47-4c5a-883c-f6470e930f61,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,i
o.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d997d75b48ee4660710828d3ee8c19e7e3e7c5b64f4d69cba04ce949ed38433e,PodSandboxId:173b48ab2ab7ffb543810df5a983063619ab4a5c57ad2582cb89cca04eabcc03,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1726482196486082416,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-rj67m,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 18c201dd-1709-499b-83f1-7f76075b6c19,},A
nnotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0024bbca27aaca342e5d8f82c4da128027c7976c1e62ed95ae9cf694c9f92bba,PodSandboxId:8bcb0a4a20a5a867b58a2f64346327b7ee4bbc21423c8ef47418372635518a90,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726482194268567519,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-9hj9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 76382ab7-9b7a-4bd6-b19c-7a77ba051f1d,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e13f898473193beaaa81c09bb22096af279dabe70c03270874a90b0b9cc83f62,PodSandboxId:c90a44c7edea8c5d35e974be23b2851515f7b830d58597d0ada22367c338e1ab,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5d78bb8f226e8d943746243233f733db4e80a8d6794f6d193b12b811bcb6cd34,State:CONTAINER_RUNNING,CreatedAt:1726482187766689704,Labels:map[string]string{io.kubernetes.contai
ner.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-769b77f747-58ll2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 505d8619-5fc1-4247-af75-f797558c3d45,},Annotations:map[string]string{io.kubernetes.container.hash: 6472789b,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8193aad1beb5ba639149913d04736ec496c655637790c2e682d4920170661edc,PodSandboxId:f1a3772ce5f7d59b92df6e29b5a34b5ae5a2567c67f359cff00143e415177028,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]str
ing{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1726482169760510821,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10ccbaa1-333f-4586-a1d5-dc73421e2bd1,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20d2f3360f32056e9d5320e2e27b14e89f34121e47fe3ef6dc68434fd12cbe4e,PodSandboxId:748d363148f6690bdd4347da94f06b750dcd9dfc4c9051ba1fd2b1168589dce8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e3
8f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726482159684718183,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c435c6db-b60d-4298-9687-bb885202e358,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63d270cbed8d9138d7dfb6c94fecf8a928065d08296ec4b210284e9c80e343ce,PodSandboxId:42b8586a7b29a8b8056b5615562d2ec92c1cdfbb2b00da19cfca2dc059f9128a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e95
6d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726482158120421871,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-j5ndn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 207f35d6-991e-4f00-8881-a877648e3c38,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60269ac0552c1882216cfc166f51b2cc05e0c33fccab3cf44f6
f6ae889b5377c,PodSandboxId:2bf9dc368debd2d2b877480619135ca9093c981fce759082ec24f6592fb1de92,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726482154187294521,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-66flj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56e16daa-1626-4b83-a183-7b9ad90ea2d6,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aabe5cb48f97ae6d5205b827466f0ed18fcf88945331a69a0f69c36fdc69b84,PodSandboxId:f7aeaa
11c7f4c7baee6c9befcf5929af3394ee8b7430c47ca172d58476ae75a5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726482142846546597,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-001438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be5914cb158890ae06c5bfa7a9a1647e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d34a4e3596c2106b54d8a84abfb811c307cb2971374422f5c532a60e0cde3a3,PodSandboxId:8a68216be6dee09413e6f31b3a7e5d5ca7545ffd899368560ace1
f4086222cc9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726482142845031916,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-001438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6b0151eec9d570509290399a84fe5e0,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a4816dc33e761b9fdbc5948f09898492ce9a2dc128c281cf5085d61d7f1b237,PodSandboxId:ec134844260ab148c1de608f261069c5f8c5a6b0d90
9d7bbb7bff552dad3e112,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726482142832730800,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-001438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72c370391c8ab866a62dac47d3033ef8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfff5b2d379857237c75eff7ae9c54b1eaa410285bebed27cc82511c761eff77,PodSandboxId:81f095a38dae111be7d7787f448562b27561a6d73c96775e60b52e2cb77e
1d9f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726482142844185577,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-001438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2bdc5cbd50bfb60b2aad0cb9a55828e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=da81c83b-b79f-4fc2-ab2e-68d654d07b0e name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:27:12 addons-001438 crio[662]: time="2024-09-16 10:27:12.018727752Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0a7382a3-0061-4667-b49a-762a1dddb5d1 name=/runtime.v1.RuntimeService/Version
	Sep 16 10:27:12 addons-001438 crio[662]: time="2024-09-16 10:27:12.018802370Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0a7382a3-0061-4667-b49a-762a1dddb5d1 name=/runtime.v1.RuntimeService/Version
	Sep 16 10:27:12 addons-001438 crio[662]: time="2024-09-16 10:27:12.019766704Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6f5d2e68-1898-4cde-aa48-390e65ec2709 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:27:12 addons-001438 crio[662]: time="2024-09-16 10:27:12.020738226Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482432020711535,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:458879,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6f5d2e68-1898-4cde-aa48-390e65ec2709 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:27:12 addons-001438 crio[662]: time="2024-09-16 10:27:12.021645288Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8bfb2c95-15cd-4d55-b0ce-609e2aba89fb name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:27:12 addons-001438 crio[662]: time="2024-09-16 10:27:12.021829529Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8bfb2c95-15cd-4d55-b0ce-609e2aba89fb name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:27:12 addons-001438 crio[662]: time="2024-09-16 10:27:12.022329336Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c0c62d19fc341b10ebf89fe58a47a68881bae991a531961d93c1579a9a1948e7,PodSandboxId:81638f0641649ec0787ef703694bd258aa844caa3af58541d00d9715f3de0d35,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726482306176330528,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-jg5wz,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: dbfd65d1-83cc-4b88-b475-c822c7d77c41,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"con
tainerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d9f00ee52087036fff363066f2b9b7bd54f75155cc73bd306533e31b8e233b0,PodSandboxId:f0a70a6b5b4fa0e23e9c8885ae050483438fd567740a20c3b0d6ee2a4c749755,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1726482304086493301,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-jhd4w,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 59d22048-5f6a-4373-a1c6-05
c111450f4a,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a4ff4f2e6c350478e5755d91b20d919662c0ffa06fb39e2bd3bf8bb0f703a1d2,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1726482280953449774,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa45fa1d889cdca320c55e35a415c7211e36ba0837572343c71988deaa5cd3ca,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef0019
58d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1726482248718684378,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:112e37da6f1b0a97970a925ea58a6c98930b8e2763205c618d9b497089866b3f,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc4
16abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1726482247152202557,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcd9404de3e14cb23faa22a3b3e25edf2e86172ce56a3bd91b341e6072351d08,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256
:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1726482246122565989,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26165c7625a62b7701736de7efb40460b0b4376f8e96e3bb63d744179813ade0,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metad
ata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1726482244231428730,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35e24c1abefe708668b04ab175e1ce63dcba533760564485612dc5c532078e4c,PodSandboxId:bf02d50932f1
47d4c08135b62337a685daad58e7ee619fe769c71cf464997dc9,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1726482242446910373,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db26d555-4e0f-4738-bd80-a27dc57d7534,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5edaf3e2dd3d6ea0a0beb93da0237f66936603e62e6799eb9d40f4bfced4fdc,Pod
SandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1726482240462952497,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:b8ebd2f050729c02f7f87eeb785883cafeb3d9537560352ff7a58f50b5630b1c,PodSandboxId:f375334740e2f500ee463bae436969a7464e2ab3cd1aa52a70e3a8f54f2ea408,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1726482238605647796,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15e8a432-87ee-461f-96ce-576b2587d960,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d52d2269e100287dd000b3139d6f09e70acad7cbe081d5ba345ffadc774eb64,PodSandboxId:6fe91ac2288fee7b74e48dab9ebacc7e6f4a0864d1442788fa1bb356cd32d99c,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726482237161655062,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-rls9n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 239fb0bb-898b-4d40-a29f-b0f8f4f52620,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termi
nationGracePeriod: 30,},},&Container{Id:54c4347a1fc2bc58f92b11d3bd442850d2707411d3f5b974124a86f641439b9c,PodSandboxId:d66b1317412a725710e3a74d876e0614ef523665d5d35e69bed8cd20d3e4c267,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726482236992762987,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-dk6l8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 117d6d47-b187-4394-8213-f294b01f8f2d,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0bde3324c47da870cb9ea5d5e391100b947a5f5628087ca822f0b968d5d7d10,PodSandboxId:0eef20d1c681337d85c6cbf7dbb64029a954a6306f78cde4d651b31122cd7f87,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726482204191942200,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-pv2sr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85f5bbdb-96af-4f7d-aef3-644db7166242,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f786c20ceffe35012b49cdfbc150466ebf04c05bab8d9736cd3977b8a655c898,PodSandboxId:ec33782f42717a00b375787866eb14bf4c1a21698df7f4343ab08f7eaff4773a,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726482204058066688,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-8nq94,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b65ff07-8e47-4c5a-883c-f6470e930f61,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,i
o.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d997d75b48ee4660710828d3ee8c19e7e3e7c5b64f4d69cba04ce949ed38433e,PodSandboxId:173b48ab2ab7ffb543810df5a983063619ab4a5c57ad2582cb89cca04eabcc03,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1726482196486082416,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-rj67m,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 18c201dd-1709-499b-83f1-7f76075b6c19,},A
nnotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0024bbca27aaca342e5d8f82c4da128027c7976c1e62ed95ae9cf694c9f92bba,PodSandboxId:8bcb0a4a20a5a867b58a2f64346327b7ee4bbc21423c8ef47418372635518a90,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726482194268567519,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-9hj9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 76382ab7-9b7a-4bd6-b19c-7a77ba051f1d,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e13f898473193beaaa81c09bb22096af279dabe70c03270874a90b0b9cc83f62,PodSandboxId:c90a44c7edea8c5d35e974be23b2851515f7b830d58597d0ada22367c338e1ab,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5d78bb8f226e8d943746243233f733db4e80a8d6794f6d193b12b811bcb6cd34,State:CONTAINER_RUNNING,CreatedAt:1726482187766689704,Labels:map[string]string{io.kubernetes.contai
ner.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-769b77f747-58ll2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 505d8619-5fc1-4247-af75-f797558c3d45,},Annotations:map[string]string{io.kubernetes.container.hash: 6472789b,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8193aad1beb5ba639149913d04736ec496c655637790c2e682d4920170661edc,PodSandboxId:f1a3772ce5f7d59b92df6e29b5a34b5ae5a2567c67f359cff00143e415177028,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]str
ing{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1726482169760510821,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10ccbaa1-333f-4586-a1d5-dc73421e2bd1,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20d2f3360f32056e9d5320e2e27b14e89f34121e47fe3ef6dc68434fd12cbe4e,PodSandboxId:748d363148f6690bdd4347da94f06b750dcd9dfc4c9051ba1fd2b1168589dce8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e3
8f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726482159684718183,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c435c6db-b60d-4298-9687-bb885202e358,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63d270cbed8d9138d7dfb6c94fecf8a928065d08296ec4b210284e9c80e343ce,PodSandboxId:42b8586a7b29a8b8056b5615562d2ec92c1cdfbb2b00da19cfca2dc059f9128a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e95
6d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726482158120421871,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-j5ndn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 207f35d6-991e-4f00-8881-a877648e3c38,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60269ac0552c1882216cfc166f51b2cc05e0c33fccab3cf44f6
f6ae889b5377c,PodSandboxId:2bf9dc368debd2d2b877480619135ca9093c981fce759082ec24f6592fb1de92,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726482154187294521,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-66flj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56e16daa-1626-4b83-a183-7b9ad90ea2d6,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aabe5cb48f97ae6d5205b827466f0ed18fcf88945331a69a0f69c36fdc69b84,PodSandboxId:f7aeaa
11c7f4c7baee6c9befcf5929af3394ee8b7430c47ca172d58476ae75a5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726482142846546597,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-001438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be5914cb158890ae06c5bfa7a9a1647e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d34a4e3596c2106b54d8a84abfb811c307cb2971374422f5c532a60e0cde3a3,PodSandboxId:8a68216be6dee09413e6f31b3a7e5d5ca7545ffd899368560ace1
f4086222cc9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726482142845031916,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-001438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6b0151eec9d570509290399a84fe5e0,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a4816dc33e761b9fdbc5948f09898492ce9a2dc128c281cf5085d61d7f1b237,PodSandboxId:ec134844260ab148c1de608f261069c5f8c5a6b0d90
9d7bbb7bff552dad3e112,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726482142832730800,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-001438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72c370391c8ab866a62dac47d3033ef8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfff5b2d379857237c75eff7ae9c54b1eaa410285bebed27cc82511c761eff77,PodSandboxId:81f095a38dae111be7d7787f448562b27561a6d73c96775e60b52e2cb77e
1d9f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726482142844185577,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-001438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2bdc5cbd50bfb60b2aad0cb9a55828e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8bfb2c95-15cd-4d55-b0ce-609e2aba89fb name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:27:12 addons-001438 crio[662]: time="2024-09-16 10:27:12.055462860Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ef5997bd-7f8c-4f38-a44a-1ce84a35168b name=/runtime.v1.RuntimeService/Version
	Sep 16 10:27:12 addons-001438 crio[662]: time="2024-09-16 10:27:12.055555632Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ef5997bd-7f8c-4f38-a44a-1ce84a35168b name=/runtime.v1.RuntimeService/Version
	Sep 16 10:27:12 addons-001438 crio[662]: time="2024-09-16 10:27:12.056983515Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0815e705-65af-495a-b697-876708e41e8d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:27:12 addons-001438 crio[662]: time="2024-09-16 10:27:12.057973303Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482432057947654,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:458879,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0815e705-65af-495a-b697-876708e41e8d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:27:12 addons-001438 crio[662]: time="2024-09-16 10:27:12.058667338Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e97b8ef1-c1d8-45bc-ac20-49dacc812b30 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:27:12 addons-001438 crio[662]: time="2024-09-16 10:27:12.058729564Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e97b8ef1-c1d8-45bc-ac20-49dacc812b30 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:27:12 addons-001438 crio[662]: time="2024-09-16 10:27:12.059192254Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c0c62d19fc341b10ebf89fe58a47a68881bae991a531961d93c1579a9a1948e7,PodSandboxId:81638f0641649ec0787ef703694bd258aa844caa3af58541d00d9715f3de0d35,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726482306176330528,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-jg5wz,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: dbfd65d1-83cc-4b88-b475-c822c7d77c41,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"con
tainerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d9f00ee52087036fff363066f2b9b7bd54f75155cc73bd306533e31b8e233b0,PodSandboxId:f0a70a6b5b4fa0e23e9c8885ae050483438fd567740a20c3b0d6ee2a4c749755,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1726482304086493301,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-jhd4w,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 59d22048-5f6a-4373-a1c6-05
c111450f4a,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a4ff4f2e6c350478e5755d91b20d919662c0ffa06fb39e2bd3bf8bb0f703a1d2,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1726482280953449774,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa45fa1d889cdca320c55e35a415c7211e36ba0837572343c71988deaa5cd3ca,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef0019
58d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1726482248718684378,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:112e37da6f1b0a97970a925ea58a6c98930b8e2763205c618d9b497089866b3f,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc4
16abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1726482247152202557,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcd9404de3e14cb23faa22a3b3e25edf2e86172ce56a3bd91b341e6072351d08,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256
:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1726482246122565989,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26165c7625a62b7701736de7efb40460b0b4376f8e96e3bb63d744179813ade0,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metad
ata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1726482244231428730,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35e24c1abefe708668b04ab175e1ce63dcba533760564485612dc5c532078e4c,PodSandboxId:bf02d50932f1
47d4c08135b62337a685daad58e7ee619fe769c71cf464997dc9,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1726482242446910373,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db26d555-4e0f-4738-bd80-a27dc57d7534,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5edaf3e2dd3d6ea0a0beb93da0237f66936603e62e6799eb9d40f4bfced4fdc,Pod
SandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1726482240462952497,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:b8ebd2f050729c02f7f87eeb785883cafeb3d9537560352ff7a58f50b5630b1c,PodSandboxId:f375334740e2f500ee463bae436969a7464e2ab3cd1aa52a70e3a8f54f2ea408,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1726482238605647796,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15e8a432-87ee-461f-96ce-576b2587d960,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d52d2269e100287dd000b3139d6f09e70acad7cbe081d5ba345ffadc774eb64,PodSandboxId:6fe91ac2288fee7b74e48dab9ebacc7e6f4a0864d1442788fa1bb356cd32d99c,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726482237161655062,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-rls9n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 239fb0bb-898b-4d40-a29f-b0f8f4f52620,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termi
nationGracePeriod: 30,},},&Container{Id:54c4347a1fc2bc58f92b11d3bd442850d2707411d3f5b974124a86f641439b9c,PodSandboxId:d66b1317412a725710e3a74d876e0614ef523665d5d35e69bed8cd20d3e4c267,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726482236992762987,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-dk6l8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 117d6d47-b187-4394-8213-f294b01f8f2d,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0bde3324c47da870cb9ea5d5e391100b947a5f5628087ca822f0b968d5d7d10,PodSandboxId:0eef20d1c681337d85c6cbf7dbb64029a954a6306f78cde4d651b31122cd7f87,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726482204191942200,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-pv2sr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85f5bbdb-96af-4f7d-aef3-644db7166242,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f786c20ceffe35012b49cdfbc150466ebf04c05bab8d9736cd3977b8a655c898,PodSandboxId:ec33782f42717a00b375787866eb14bf4c1a21698df7f4343ab08f7eaff4773a,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726482204058066688,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-8nq94,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b65ff07-8e47-4c5a-883c-f6470e930f61,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,i
o.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d997d75b48ee4660710828d3ee8c19e7e3e7c5b64f4d69cba04ce949ed38433e,PodSandboxId:173b48ab2ab7ffb543810df5a983063619ab4a5c57ad2582cb89cca04eabcc03,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1726482196486082416,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-rj67m,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 18c201dd-1709-499b-83f1-7f76075b6c19,},A
nnotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0024bbca27aaca342e5d8f82c4da128027c7976c1e62ed95ae9cf694c9f92bba,PodSandboxId:8bcb0a4a20a5a867b58a2f64346327b7ee4bbc21423c8ef47418372635518a90,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726482194268567519,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-9hj9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 76382ab7-9b7a-4bd6-b19c-7a77ba051f1d,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e13f898473193beaaa81c09bb22096af279dabe70c03270874a90b0b9cc83f62,PodSandboxId:c90a44c7edea8c5d35e974be23b2851515f7b830d58597d0ada22367c338e1ab,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5d78bb8f226e8d943746243233f733db4e80a8d6794f6d193b12b811bcb6cd34,State:CONTAINER_RUNNING,CreatedAt:1726482187766689704,Labels:map[string]string{io.kubernetes.contai
ner.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-769b77f747-58ll2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 505d8619-5fc1-4247-af75-f797558c3d45,},Annotations:map[string]string{io.kubernetes.container.hash: 6472789b,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8193aad1beb5ba639149913d04736ec496c655637790c2e682d4920170661edc,PodSandboxId:f1a3772ce5f7d59b92df6e29b5a34b5ae5a2567c67f359cff00143e415177028,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]str
ing{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1726482169760510821,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10ccbaa1-333f-4586-a1d5-dc73421e2bd1,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20d2f3360f32056e9d5320e2e27b14e89f34121e47fe3ef6dc68434fd12cbe4e,PodSandboxId:748d363148f6690bdd4347da94f06b750dcd9dfc4c9051ba1fd2b1168589dce8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e3
8f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726482159684718183,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c435c6db-b60d-4298-9687-bb885202e358,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63d270cbed8d9138d7dfb6c94fecf8a928065d08296ec4b210284e9c80e343ce,PodSandboxId:42b8586a7b29a8b8056b5615562d2ec92c1cdfbb2b00da19cfca2dc059f9128a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e95
6d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726482158120421871,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-j5ndn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 207f35d6-991e-4f00-8881-a877648e3c38,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60269ac0552c1882216cfc166f51b2cc05e0c33fccab3cf44f6
f6ae889b5377c,PodSandboxId:2bf9dc368debd2d2b877480619135ca9093c981fce759082ec24f6592fb1de92,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726482154187294521,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-66flj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56e16daa-1626-4b83-a183-7b9ad90ea2d6,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aabe5cb48f97ae6d5205b827466f0ed18fcf88945331a69a0f69c36fdc69b84,PodSandboxId:f7aeaa
11c7f4c7baee6c9befcf5929af3394ee8b7430c47ca172d58476ae75a5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726482142846546597,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-001438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be5914cb158890ae06c5bfa7a9a1647e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d34a4e3596c2106b54d8a84abfb811c307cb2971374422f5c532a60e0cde3a3,PodSandboxId:8a68216be6dee09413e6f31b3a7e5d5ca7545ffd899368560ace1
f4086222cc9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726482142845031916,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-001438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6b0151eec9d570509290399a84fe5e0,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a4816dc33e761b9fdbc5948f09898492ce9a2dc128c281cf5085d61d7f1b237,PodSandboxId:ec134844260ab148c1de608f261069c5f8c5a6b0d90
9d7bbb7bff552dad3e112,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726482142832730800,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-001438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72c370391c8ab866a62dac47d3033ef8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfff5b2d379857237c75eff7ae9c54b1eaa410285bebed27cc82511c761eff77,PodSandboxId:81f095a38dae111be7d7787f448562b27561a6d73c96775e60b52e2cb77e
1d9f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726482142844185577,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-001438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2bdc5cbd50bfb60b2aad0cb9a55828e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e97b8ef1-c1d8-45bc-ac20-49dacc812b30 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	c0c62d19fc341       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                                 2 minutes ago       Running             gcp-auth                                 0                   81638f0641649       gcp-auth-89d5ffd79-jg5wz
	4d9f00ee52087       registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6                             2 minutes ago       Running             controller                               0                   f0a70a6b5b4fa       ingress-nginx-controller-bc57996ff-jhd4w
	a4ff4f2e6c350       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          2 minutes ago       Running             csi-snapshotter                          0                   2a34d4424d09e       csi-hostpathplugin-xgk62
	fa45fa1d889cd       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          3 minutes ago       Running             csi-provisioner                          0                   2a34d4424d09e       csi-hostpathplugin-xgk62
	112e37da6f1b0       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            3 minutes ago       Running             liveness-probe                           0                   2a34d4424d09e       csi-hostpathplugin-xgk62
	bcd9404de3e14       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           3 minutes ago       Running             hostpath                                 0                   2a34d4424d09e       csi-hostpathplugin-xgk62
	26165c7625a62       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                3 minutes ago       Running             node-driver-registrar                    0                   2a34d4424d09e       csi-hostpathplugin-xgk62
	35e24c1abefe7       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              3 minutes ago       Running             csi-resizer                              0                   bf02d50932f14       csi-hostpath-resizer-0
	a5edaf3e2dd3d       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   3 minutes ago       Running             csi-external-health-monitor-controller   0                   2a34d4424d09e       csi-hostpathplugin-xgk62
	b8ebd2f050729       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             3 minutes ago       Running             csi-attacher                             0                   f375334740e2f       csi-hostpath-attacher-0
	0d52d2269e100       ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242                                                                             3 minutes ago       Exited              patch                                    1                   6fe91ac2288fe       ingress-nginx-admission-patch-rls9n
	54c4347a1fc2b       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012                   3 minutes ago       Exited              create                                   0                   d66b1317412a7       ingress-nginx-admission-create-dk6l8
	f0bde3324c47d       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago       Running             volume-snapshot-controller               0                   0eef20d1c6813       snapshot-controller-56fcc65765-pv2sr
	f786c20ceffe3       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago       Running             volume-snapshot-controller               0                   ec33782f42717       snapshot-controller-56fcc65765-8nq94
	d997d75b48ee4       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             3 minutes ago       Running             local-path-provisioner                   0                   173b48ab2ab7f       local-path-provisioner-86d989889c-rj67m
	0024bbca27aac       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a                        3 minutes ago       Running             metrics-server                           0                   8bcb0a4a20a5a       metrics-server-84c5f94fbc-9hj9f
	e13f898473193       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc                               4 minutes ago       Running             cloud-spanner-emulator                   0                   c90a44c7edea8       cloud-spanner-emulator-769b77f747-58ll2
	8193aad1beb5b       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab                             4 minutes ago       Running             minikube-ingress-dns                     0                   f1a3772ce5f7d       kube-ingress-dns-minikube
	20d2f3360f320       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             4 minutes ago       Running             storage-provisioner                      0                   748d363148f66       storage-provisioner
	63d270cbed8d9       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                                             4 minutes ago       Running             coredns                                  0                   42b8586a7b29a       coredns-7c65d6cfc9-j5ndn
	60269ac0552c1       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                                             4 minutes ago       Running             kube-proxy                               0                   2bf9dc368debd       kube-proxy-66flj
	1aabe5cb48f97       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                                             4 minutes ago       Running             etcd                                     0                   f7aeaa11c7f4c       etcd-addons-001438
	2d34a4e3596c2       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                                             4 minutes ago       Running             kube-controller-manager                  0                   8a68216be6dee       kube-controller-manager-addons-001438
	bfff5b2d37985       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                                             4 minutes ago       Running             kube-apiserver                           0                   81f095a38dae1       kube-apiserver-addons-001438
	5a4816dc33e76       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                                             4 minutes ago       Running             kube-scheduler                           0                   ec134844260ab       kube-scheduler-addons-001438
	
	
	==> coredns [63d270cbed8d9138d7dfb6c94fecf8a928065d08296ec4b210284e9c80e343ce] <==
	[INFO] 127.0.0.1:32820 - 49588 "HINFO IN 5683833228926934535.5808779734602365342. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.027869673s
	[INFO] 10.244.0.7:47242 - 15842 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000350783s
	[INFO] 10.244.0.7:47242 - 29412 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000155576s
	[INFO] 10.244.0.7:51495 - 23321 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000115255s
	[INFO] 10.244.0.7:51495 - 47135 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000085371s
	[INFO] 10.244.0.7:40689 - 10301 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000114089s
	[INFO] 10.244.0.7:40689 - 30779 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00011843s
	[INFO] 10.244.0.7:53526 - 19539 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000127604s
	[INFO] 10.244.0.7:53526 - 34381 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000109337s
	[INFO] 10.244.0.7:39182 - 43658 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000075802s
	[INFO] 10.244.0.7:39182 - 55433 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000031766s
	[INFO] 10.244.0.7:52628 - 35000 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000037386s
	[INFO] 10.244.0.7:52628 - 44218 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000027585s
	[INFO] 10.244.0.7:47656 - 61837 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000028204s
	[INFO] 10.244.0.7:47656 - 39571 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000027731s
	[INFO] 10.244.0.7:53964 - 36235 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000098663s
	[INFO] 10.244.0.7:53964 - 55690 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000045022s
	[INFO] 10.244.0.22:49146 - 11336 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000543634s
	[INFO] 10.244.0.22:44900 - 51750 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000125434s
	[INFO] 10.244.0.22:47266 - 27362 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000158517s
	[INFO] 10.244.0.22:53077 - 63050 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000068888s
	[INFO] 10.244.0.22:52796 - 34381 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000101059s
	[INFO] 10.244.0.22:52167 - 15594 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000126468s
	[INFO] 10.244.0.22:42107 - 54869 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.004149176s
	[INFO] 10.244.0.22:60865 - 20616 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.006078154s
	
	
	==> describe nodes <==
	Name:               addons-001438
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-001438
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=addons-001438
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_22_28_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-001438
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-001438"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:22:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-001438
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:27:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:26:02 +0000   Mon, 16 Sep 2024 10:22:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:26:02 +0000   Mon, 16 Sep 2024 10:22:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:26:02 +0000   Mon, 16 Sep 2024 10:22:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:26:02 +0000   Mon, 16 Sep 2024 10:22:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.72
	  Hostname:    addons-001438
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 8b69a913a20a4259950d0bf801229c28
	  System UUID:                8b69a913-a20a-4259-950d-0bf801229c28
	  Boot ID:                    7d21de27-dd4e-4002-9fc0-df14a0ff761f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (19 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-769b77f747-58ll2     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m36s
	  gcp-auth                    gcp-auth-89d5ffd79-jg5wz                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-jhd4w    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m32s
	  kube-system                 coredns-7c65d6cfc9-j5ndn                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m39s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m29s
	  kube-system                 csi-hostpathplugin-xgk62                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 etcd-addons-001438                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m45s
	  kube-system                 kube-apiserver-addons-001438                250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 kube-controller-manager-addons-001438       200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m36s
	  kube-system                 kube-proxy-66flj                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 kube-scheduler-addons-001438                100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 metrics-server-84c5f94fbc-9hj9f             100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         4m34s
	  kube-system                 snapshot-controller-56fcc65765-8nq94        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m33s
	  kube-system                 snapshot-controller-56fcc65765-pv2sr        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m33s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m35s
	  local-path-storage          local-path-provisioner-86d989889c-rj67m     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m33s
	  yakd-dashboard              yakd-dashboard-67d98fc6b-jnpkm              0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     4m33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             588Mi (15%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m36s  kube-proxy       
	  Normal  Starting                 4m45s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m45s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m44s  kubelet          Node addons-001438 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m44s  kubelet          Node addons-001438 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m44s  kubelet          Node addons-001438 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m43s  kubelet          Node addons-001438 status is now: NodeReady
	  Normal  RegisteredNode           4m40s  node-controller  Node addons-001438 event: Registered Node addons-001438 in Controller
	
	
	==> dmesg <==
	[  +0.270363] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +4.002627] systemd-fstab-generator[744]: Ignoring "noauto" option for root device
	[  +4.196359] systemd-fstab-generator[862]: Ignoring "noauto" option for root device
	[  +0.061696] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.999876] systemd-fstab-generator[1193]: Ignoring "noauto" option for root device
	[  +0.091472] kauditd_printk_skb: 69 callbacks suppressed
	[  +4.774952] systemd-fstab-generator[1325]: Ignoring "noauto" option for root device
	[  +1.497885] kauditd_printk_skb: 43 callbacks suppressed
	[  +5.466780] kauditd_printk_skb: 125 callbacks suppressed
	[  +5.018877] kauditd_printk_skb: 136 callbacks suppressed
	[  +5.254117] kauditd_printk_skb: 38 callbacks suppressed
	[Sep16 10:23] kauditd_printk_skb: 9 callbacks suppressed
	[ +17.876932] kauditd_printk_skb: 7 callbacks suppressed
	[ +33.888489] kauditd_printk_skb: 37 callbacks suppressed
	[Sep16 10:24] kauditd_printk_skb: 34 callbacks suppressed
	[  +6.263650] kauditd_printk_skb: 76 callbacks suppressed
	[ +48.109785] kauditd_printk_skb: 33 callbacks suppressed
	[Sep16 10:25] kauditd_printk_skb: 3 callbacks suppressed
	[  +7.297596] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.818881] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.121137] kauditd_printk_skb: 19 callbacks suppressed
	[ +29.616490] kauditd_printk_skb: 37 callbacks suppressed
	[Sep16 10:26] kauditd_printk_skb: 2 callbacks suppressed
	[  +8.276540] kauditd_printk_skb: 28 callbacks suppressed
	[Sep16 10:27] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [1aabe5cb48f97ae6d5205b827466f0ed18fcf88945331a69a0f69c36fdc69b84] <==
	{"level":"info","ts":"2024-09-16T10:25:01.423722Z","caller":"traceutil/trace.go:171","msg":"trace[1526018823] transaction","detail":"{read_only:false; response_revision:1249; number_of_response:1; }","duration":"284.258855ms","start":"2024-09-16T10:25:01.139452Z","end":"2024-09-16T10:25:01.423711Z","steps":["trace[1526018823] 'process raft request'  (duration: 284.165558ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:25:01.424593Z","caller":"traceutil/trace.go:171","msg":"trace[1620023283] linearizableReadLoop","detail":"{readStateIndex:1296; appliedIndex:1296; }","duration":"253.838283ms","start":"2024-09-16T10:25:01.170745Z","end":"2024-09-16T10:25:01.424583Z","steps":["trace[1620023283] 'read index received'  (duration: 253.835456ms)","trace[1620023283] 'applied index is now lower than readState.Index'  (duration: 2.263µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-16T10:25:01.424681Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"253.948565ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T10:25:01.424719Z","caller":"traceutil/trace.go:171","msg":"trace[1658095100] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1249; }","duration":"253.992891ms","start":"2024-09-16T10:25:01.170719Z","end":"2024-09-16T10:25:01.424712Z","steps":["trace[1658095100] 'agreement among raft nodes before linearized reading'  (duration: 253.933158ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:25:01.430878Z","caller":"traceutil/trace.go:171","msg":"trace[196824448] transaction","detail":"{read_only:false; response_revision:1250; number_of_response:1; }","duration":"219.615242ms","start":"2024-09-16T10:25:01.211190Z","end":"2024-09-16T10:25:01.430805Z","steps":["trace[196824448] 'process raft request'  (duration: 217.799649ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:25:01.432286Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"248.218738ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T10:25:01.432549Z","caller":"traceutil/trace.go:171","msg":"trace[1250515915] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1250; }","duration":"248.433899ms","start":"2024-09-16T10:25:01.183901Z","end":"2024-09-16T10:25:01.432335Z","steps":["trace[1250515915] 'agreement among raft nodes before linearized reading'  (duration: 246.789324ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:25:03.917472Z","caller":"traceutil/trace.go:171","msg":"trace[1132617141] linearizableReadLoop","detail":"{readStateIndex:1302; appliedIndex:1301; }","duration":"256.411132ms","start":"2024-09-16T10:25:03.661047Z","end":"2024-09-16T10:25:03.917458Z","steps":["trace[1132617141] 'read index received'  (duration: 256.216658ms)","trace[1132617141] 'applied index is now lower than readState.Index'  (duration: 193.873µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-16T10:25:03.917646Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"256.564415ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshots/\" range_end:\"/registry/snapshot.storage.k8s.io/volumesnapshots0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T10:25:03.917689Z","caller":"traceutil/trace.go:171","msg":"trace[1681803764] range","detail":"{range_begin:/registry/snapshot.storage.k8s.io/volumesnapshots/; range_end:/registry/snapshot.storage.k8s.io/volumesnapshots0; response_count:0; response_revision:1254; }","duration":"256.635309ms","start":"2024-09-16T10:25:03.661043Z","end":"2024-09-16T10:25:03.917678Z","steps":["trace[1681803764] 'agreement among raft nodes before linearized reading'  (duration: 256.524591ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:25:03.917698Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"246.498369ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T10:25:03.917721Z","caller":"traceutil/trace.go:171","msg":"trace[320039730] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1254; }","duration":"246.52737ms","start":"2024-09-16T10:25:03.671187Z","end":"2024-09-16T10:25:03.917715Z","steps":["trace[320039730] 'agreement among raft nodes before linearized reading'  (duration: 246.484981ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:25:03.917824Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"234.395252ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T10:25:03.917834Z","caller":"traceutil/trace.go:171","msg":"trace[699037525] transaction","detail":"{read_only:false; response_revision:1254; number_of_response:1; }","duration":"461.96825ms","start":"2024-09-16T10:25:03.455860Z","end":"2024-09-16T10:25:03.917828Z","steps":["trace[699037525] 'process raft request'  (duration: 461.454179ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:25:03.917838Z","caller":"traceutil/trace.go:171","msg":"trace[618256897] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1254; }","duration":"234.40851ms","start":"2024-09-16T10:25:03.683425Z","end":"2024-09-16T10:25:03.917833Z","steps":["trace[618256897] 'agreement among raft nodes before linearized reading'  (duration: 234.386479ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:25:03.917919Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-16T10:25:03.455845Z","time spent":"462.003063ms","remote":"127.0.0.1:51374","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1251 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-09-16T10:25:42.523876Z","caller":"traceutil/trace.go:171","msg":"trace[565706559] transaction","detail":"{read_only:false; response_revision:1399; number_of_response:1; }","duration":"393.956218ms","start":"2024-09-16T10:25:42.129887Z","end":"2024-09-16T10:25:42.523844Z","steps":["trace[565706559] 'process raft request'  (duration: 393.821788ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:25:42.524080Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-16T10:25:42.129864Z","time spent":"394.119545ms","remote":"127.0.0.1:51374","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1398 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-09-16T10:25:42.533976Z","caller":"traceutil/trace.go:171","msg":"trace[668376333] linearizableReadLoop","detail":"{readStateIndex:1459; appliedIndex:1458; }","duration":"302.69985ms","start":"2024-09-16T10:25:42.231262Z","end":"2024-09-16T10:25:42.533962Z","steps":["trace[668376333] 'read index received'  (duration: 293.491454ms)","trace[668376333] 'applied index is now lower than readState.Index'  (duration: 9.207628ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-16T10:25:42.535969Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"205.605451ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" ","response":"range_response_count:1 size:553"}
	{"level":"info","ts":"2024-09-16T10:25:42.536065Z","caller":"traceutil/trace.go:171","msg":"trace[19888550] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1400; }","duration":"205.726154ms","start":"2024-09-16T10:25:42.330329Z","end":"2024-09-16T10:25:42.536056Z","steps":["trace[19888550] 'agreement among raft nodes before linearized reading'  (duration: 205.527055ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:25:42.536191Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"304.924785ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T10:25:42.536244Z","caller":"traceutil/trace.go:171","msg":"trace[1740705082] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1400; }","duration":"304.971706ms","start":"2024-09-16T10:25:42.231257Z","end":"2024-09-16T10:25:42.536228Z","steps":["trace[1740705082] 'agreement among raft nodes before linearized reading'  (duration: 304.915956ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:25:42.537030Z","caller":"traceutil/trace.go:171","msg":"trace[778126279] transaction","detail":"{read_only:false; response_revision:1400; number_of_response:1; }","duration":"337.225123ms","start":"2024-09-16T10:25:42.199749Z","end":"2024-09-16T10:25:42.536974Z","steps":["trace[778126279] 'process raft request'  (duration: 333.931313ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:25:42.537228Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-16T10:25:42.199733Z","time spent":"337.391985ms","remote":"127.0.0.1:51498","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":540,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/addons-001438\" mod_revision:1384 > success:<request_put:<key:\"/registry/leases/kube-node-lease/addons-001438\" value_size:486 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/addons-001438\" > >"}
	
	
	==> gcp-auth [c0c62d19fc341b10ebf89fe58a47a68881bae991a531961d93c1579a9a1948e7] <==
	2024/09/16 10:25:06 GCP Auth Webhook started!
	2024/09/16 10:25:09 Ready to marshal response ...
	2024/09/16 10:25:09 Ready to write response ...
	2024/09/16 10:25:09 Ready to marshal response ...
	2024/09/16 10:25:09 Ready to write response ...
	2024/09/16 10:25:09 Ready to marshal response ...
	2024/09/16 10:25:09 Ready to write response ...
	
	
	==> kernel <==
	 10:27:12 up 5 min,  0 users,  load average: 0.65, 0.88, 0.46
	Linux addons-001438 5.10.207 #1 SMP Sun Sep 15 20:39:46 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [bfff5b2d379857237c75eff7ae9c54b1eaa410285bebed27cc82511c761eff77] <==
	I0916 10:22:40.932409       1 controller.go:615] quota admission added evaluator for: jobs.batch
	I0916 10:22:42.426039       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-attacher" clusterIPs={"IPv4":"10.106.146.100"}
	I0916 10:22:42.456409       1 controller.go:615] quota admission added evaluator for: statefulsets.apps
	I0916 10:22:42.660969       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.110.102.193"}
	I0916 10:22:44.945009       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.106.134.141"}
	W0916 10:23:38.948410       1 handler_proxy.go:99] no RequestInfo found in the context
	E0916 10:23:38.948711       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0916 10:23:38.949896       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0916 10:23:38.958493       1 handler_proxy.go:99] no RequestInfo found in the context
	E0916 10:23:38.958543       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0916 10:23:38.959752       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0916 10:24:18.395108       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.30.150:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.99.30.150:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.99.30.150:443: connect: connection refused" logger="UnhandledError"
	W0916 10:24:18.395300       1 handler_proxy.go:99] no RequestInfo found in the context
	E0916 10:24:18.395442       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0916 10:24:18.398244       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.30.150:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.99.30.150:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.99.30.150:443: connect: connection refused" logger="UnhandledError"
	I0916 10:24:18.453414       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0916 10:25:09.633337       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.110.80.80"}
	I0916 10:27:07.962789       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0916 10:27:08.990230       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	
	
	==> kube-controller-manager [2d34a4e3596c2106b54d8a84abfb811c307cb2971374422f5c532a60e0cde3a3] <==
	I0916 10:25:09.687063       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="25.765664ms"
	E0916 10:25:09.687144       1 replica_set.go:560] "Unhandled Error" err="sync \"headlamp/headlamp-57fb76fcdb\" failed with pods \"headlamp-57fb76fcdb-\" is forbidden: error looking up service account headlamp/headlamp: serviceaccount \"headlamp\" not found" logger="UnhandledError"
	I0916 10:25:09.731163       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="42.235103ms"
	I0916 10:25:09.753608       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="22.282725ms"
	I0916 10:25:09.753862       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="122.927µs"
	I0916 10:25:09.762905       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="42.16µs"
	I0916 10:25:16.878158       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="16.26286ms"
	I0916 10:25:16.878254       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="50.754µs"
	I0916 10:25:19.390322       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="3.132µs"
	I0916 10:25:32.259505       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-001438"
	I0916 10:25:42.895965       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="3.388638ms"
	I0916 10:25:42.934221       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="14.56657ms"
	I0916 10:25:42.935951       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="80.433µs"
	I0916 10:25:50.249420       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="66.204µs"
	I0916 10:25:52.859393       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-67d98fc6b" duration="64.229µs"
	I0916 10:26:00.384466       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	I0916 10:26:02.877788       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-001438"
	I0916 10:26:05.861778       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-67d98fc6b" duration="51.109µs"
	I0916 10:27:00.169838       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/tiller-deploy-b48cc5f79" duration="5.547µs"
	I0916 10:27:04.861176       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-67d98fc6b" duration="105.111µs"
	E0916 10:27:08.992417       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 10:27:10.141337       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:27:10.141432       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 10:27:11.909800       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:27:11.909886       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [60269ac0552c1882216cfc166f51b2cc05e0c33fccab3cf44f6f6ae889b5377c] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0916 10:22:35.282699       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0916 10:22:35.409784       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.72"]
	E0916 10:22:35.409847       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:22:36.135283       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0916 10:22:36.135476       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0916 10:22:36.135545       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:22:36.146626       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:22:36.146849       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:22:36.146861       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:22:36.156579       1 config.go:199] "Starting service config controller"
	I0916 10:22:36.156604       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:22:36.166809       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:22:36.166838       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:22:36.168180       1 config.go:328] "Starting node config controller"
	I0916 10:22:36.168189       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:22:36.258515       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:22:36.268518       1 shared_informer.go:320] Caches are synced for node config
	I0916 10:22:36.268639       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [5a4816dc33e761b9fdbc5948f09898492ce9a2dc128c281cf5085d61d7f1b237] <==
	W0916 10:22:25.363221       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 10:22:25.363254       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:22:25.363389       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0916 10:22:25.363420       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 10:22:25.363573       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0916 10:22:25.363425       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:22:25.363533       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 10:22:25.363941       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:22:26.174422       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 10:22:26.174473       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:22:26.225213       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 10:22:26.225308       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:22:26.333904       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 10:22:26.333957       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:22:26.350221       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 10:22:26.350326       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:22:26.406843       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 10:22:26.406982       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:22:26.446248       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 10:22:26.446395       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:22:26.547116       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 10:22:26.547206       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:22:26.704254       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 10:22:26.704303       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0916 10:22:28.953769       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 10:27:08 addons-001438 kubelet[1200]: E0916 10:27:08.158094    1200 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482428157268845,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:458879,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:27:08 addons-001438 kubelet[1200]: E0916 10:27:08.158140    1200 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482428157268845,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:458879,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:27:08 addons-001438 kubelet[1200]: I0916 10:27:08.194534    1200 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a-bpffs\") pod \"fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a\" (UID: \"fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a\") "
	Sep 16 10:27:08 addons-001438 kubelet[1200]: I0916 10:27:08.194595    1200 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"modules\" (UniqueName: \"kubernetes.io/host-path/fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a-modules\") pod \"fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a\" (UID: \"fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a\") "
	Sep 16 10:27:08 addons-001438 kubelet[1200]: I0916 10:27:08.194612    1200 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"debugfs\" (UniqueName: \"kubernetes.io/host-path/fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a-debugfs\") pod \"fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a\" (UID: \"fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a\") "
	Sep 16 10:27:08 addons-001438 kubelet[1200]: I0916 10:27:08.194776    1200 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a-modules" (OuterVolumeSpecName: "modules") pod "fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a" (UID: "fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a"). InnerVolumeSpecName "modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 16 10:27:08 addons-001438 kubelet[1200]: I0916 10:27:08.194806    1200 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a-bpffs" (OuterVolumeSpecName: "bpffs") pod "fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a" (UID: "fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a"). InnerVolumeSpecName "bpffs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 16 10:27:08 addons-001438 kubelet[1200]: I0916 10:27:08.194818    1200 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a-debugfs" (OuterVolumeSpecName: "debugfs") pod "fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a" (UID: "fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a"). InnerVolumeSpecName "debugfs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 16 10:27:08 addons-001438 kubelet[1200]: I0916 10:27:08.194853    1200 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cgroup\" (UniqueName: \"kubernetes.io/host-path/fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a-cgroup\") pod \"fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a\" (UID: \"fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a\") "
	Sep 16 10:27:08 addons-001438 kubelet[1200]: I0916 10:27:08.194873    1200 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a-run\") pod \"fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a\" (UID: \"fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a\") "
	Sep 16 10:27:08 addons-001438 kubelet[1200]: I0916 10:27:08.194936    1200 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sg4vm\" (UniqueName: \"kubernetes.io/projected/fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a-kube-api-access-sg4vm\") pod \"fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a\" (UID: \"fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a\") "
	Sep 16 10:27:08 addons-001438 kubelet[1200]: I0916 10:27:08.194955    1200 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a-host\") pod \"fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a\" (UID: \"fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a\") "
	Sep 16 10:27:08 addons-001438 kubelet[1200]: I0916 10:27:08.195030    1200 reconciler_common.go:288] "Volume detached for volume \"modules\" (UniqueName: \"kubernetes.io/host-path/fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a-modules\") on node \"addons-001438\" DevicePath \"\""
	Sep 16 10:27:08 addons-001438 kubelet[1200]: I0916 10:27:08.195040    1200 reconciler_common.go:288] "Volume detached for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a-bpffs\") on node \"addons-001438\" DevicePath \"\""
	Sep 16 10:27:08 addons-001438 kubelet[1200]: I0916 10:27:08.195047    1200 reconciler_common.go:288] "Volume detached for volume \"debugfs\" (UniqueName: \"kubernetes.io/host-path/fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a-debugfs\") on node \"addons-001438\" DevicePath \"\""
	Sep 16 10:27:08 addons-001438 kubelet[1200]: I0916 10:27:08.195064    1200 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a-host" (OuterVolumeSpecName: "host") pod "fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a" (UID: "fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 16 10:27:08 addons-001438 kubelet[1200]: I0916 10:27:08.195081    1200 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a-cgroup" (OuterVolumeSpecName: "cgroup") pod "fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a" (UID: "fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a"). InnerVolumeSpecName "cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 16 10:27:08 addons-001438 kubelet[1200]: I0916 10:27:08.195094    1200 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a-run" (OuterVolumeSpecName: "run") pod "fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a" (UID: "fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 16 10:27:08 addons-001438 kubelet[1200]: I0916 10:27:08.201062    1200 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a-kube-api-access-sg4vm" (OuterVolumeSpecName: "kube-api-access-sg4vm") pod "fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a" (UID: "fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a"). InnerVolumeSpecName "kube-api-access-sg4vm". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 16 10:27:08 addons-001438 kubelet[1200]: I0916 10:27:08.295528    1200 reconciler_common.go:288] "Volume detached for volume \"cgroup\" (UniqueName: \"kubernetes.io/host-path/fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a-cgroup\") on node \"addons-001438\" DevicePath \"\""
	Sep 16 10:27:08 addons-001438 kubelet[1200]: I0916 10:27:08.295562    1200 reconciler_common.go:288] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a-run\") on node \"addons-001438\" DevicePath \"\""
	Sep 16 10:27:08 addons-001438 kubelet[1200]: I0916 10:27:08.295573    1200 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-sg4vm\" (UniqueName: \"kubernetes.io/projected/fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a-kube-api-access-sg4vm\") on node \"addons-001438\" DevicePath \"\""
	Sep 16 10:27:08 addons-001438 kubelet[1200]: I0916 10:27:08.295602    1200 reconciler_common.go:288] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a-host\") on node \"addons-001438\" DevicePath \"\""
	Sep 16 10:27:08 addons-001438 kubelet[1200]: I0916 10:27:08.448138    1200 scope.go:117] "RemoveContainer" containerID="44134363b5c5efe09ae29ae4c7261f5f57e95ad84b0df54d22fab5c1a3cc278f"
	Sep 16 10:27:09 addons-001438 kubelet[1200]: I0916 10:27:09.843635    1200 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a" path="/var/lib/kubelet/pods/fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a/volumes"
	
	
	==> storage-provisioner [20d2f3360f32056e9d5320e2e27b14e89f34121e47fe3ef6dc68434fd12cbe4e] <==
	I0916 10:22:41.307950       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 10:22:41.369058       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 10:22:41.369154       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 10:22:41.391597       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 10:22:41.391782       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-001438_7c863089-ab00-4bae-802b-9e04d7461e0b!
	I0916 10:22:41.394290       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"97b3cde4-08a8-47d7-a9cc-7251679ab4d1", APIVersion:"v1", ResourceVersion:"712", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-001438_7c863089-ab00-4bae-802b-9e04d7461e0b became leader
	I0916 10:22:41.492688       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-001438_7c863089-ab00-4bae-802b-9e04d7461e0b!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-001438 -n addons-001438
helpers_test.go:261: (dbg) Run:  kubectl --context addons-001438 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context addons-001438 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (337.782µs)
helpers_test.go:263: kubectl --context addons-001438 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestAddons/parallel/Ingress (2.03s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (316.03s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.706495ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-9hj9f" [76382ab7-9b7a-4bd6-b19c-7a77ba051f1d] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003964968s
addons_test.go:417: (dbg) Run:  kubectl --context addons-001438 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-001438 top pods -n kube-system: fork/exec /usr/local/bin/kubectl: exec format error (464.999µs)
addons_test.go:417: (dbg) Run:  kubectl --context addons-001438 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-001438 top pods -n kube-system: fork/exec /usr/local/bin/kubectl: exec format error (374.724µs)
addons_test.go:417: (dbg) Run:  kubectl --context addons-001438 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-001438 top pods -n kube-system: fork/exec /usr/local/bin/kubectl: exec format error (395.384µs)
addons_test.go:417: (dbg) Run:  kubectl --context addons-001438 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-001438 top pods -n kube-system: fork/exec /usr/local/bin/kubectl: exec format error (341.927µs)
addons_test.go:417: (dbg) Run:  kubectl --context addons-001438 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-001438 top pods -n kube-system: fork/exec /usr/local/bin/kubectl: exec format error (501.249µs)
addons_test.go:417: (dbg) Run:  kubectl --context addons-001438 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-001438 top pods -n kube-system: fork/exec /usr/local/bin/kubectl: exec format error (378.054µs)
addons_test.go:417: (dbg) Run:  kubectl --context addons-001438 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-001438 top pods -n kube-system: fork/exec /usr/local/bin/kubectl: exec format error (397.115µs)
addons_test.go:417: (dbg) Run:  kubectl --context addons-001438 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-001438 top pods -n kube-system: fork/exec /usr/local/bin/kubectl: exec format error (398.074µs)
addons_test.go:417: (dbg) Run:  kubectl --context addons-001438 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-001438 top pods -n kube-system: fork/exec /usr/local/bin/kubectl: exec format error (411.382µs)
addons_test.go:417: (dbg) Run:  kubectl --context addons-001438 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-001438 top pods -n kube-system: fork/exec /usr/local/bin/kubectl: exec format error (469.056µs)
addons_test.go:417: (dbg) Run:  kubectl --context addons-001438 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-001438 top pods -n kube-system: fork/exec /usr/local/bin/kubectl: exec format error (448.573µs)
addons_test.go:417: (dbg) Run:  kubectl --context addons-001438 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-001438 top pods -n kube-system: fork/exec /usr/local/bin/kubectl: exec format error (456.894µs)
addons_test.go:417: (dbg) Run:  kubectl --context addons-001438 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-001438 top pods -n kube-system: fork/exec /usr/local/bin/kubectl: exec format error (449.083µs)
addons_test.go:431: failed checking metric server: fork/exec /usr/local/bin/kubectl: exec format error
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-001438 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-001438 -n addons-001438
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-001438 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-001438 logs -n 25: (1.50745221s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-931581 | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC |                     |
	|         | -p download-only-931581              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC | 16 Sep 24 10:21 UTC |
	| delete  | -p download-only-931581              | download-only-931581 | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC | 16 Sep 24 10:21 UTC |
	| start   | -o=json --download-only              | download-only-573915 | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC |                     |
	|         | -p download-only-573915              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC | 16 Sep 24 10:21 UTC |
	| delete  | -p download-only-573915              | download-only-573915 | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC | 16 Sep 24 10:21 UTC |
	| delete  | -p download-only-931581              | download-only-931581 | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC | 16 Sep 24 10:21 UTC |
	| delete  | -p download-only-573915              | download-only-573915 | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC | 16 Sep 24 10:21 UTC |
	| start   | --download-only -p                   | binary-mirror-928489 | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC |                     |
	|         | binary-mirror-928489                 |                      |         |         |                     |                     |
	|         | --alsologtostderr                    |                      |         |         |                     |                     |
	|         | --binary-mirror                      |                      |         |         |                     |                     |
	|         | http://127.0.0.1:42715               |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-928489              | binary-mirror-928489 | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC | 16 Sep 24 10:21 UTC |
	| addons  | enable dashboard -p                  | addons-001438        | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC |                     |
	|         | addons-001438                        |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-001438        | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC |                     |
	|         | addons-001438                        |                      |         |         |                     |                     |
	| start   | -p addons-001438 --wait=true         | addons-001438        | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC | 16 Sep 24 10:25 UTC |
	|         | --memory=4000 --alsologtostderr      |                      |         |         |                     |                     |
	|         | --addons=registry                    |                      |         |         |                     |                     |
	|         | --addons=metrics-server              |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	|         | --addons=ingress                     |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                 |                      |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-001438        | jenkins | v1.34.0 | 16 Sep 24 10:25 UTC | 16 Sep 24 10:25 UTC |
	|         | -p addons-001438                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin         | addons-001438        | jenkins | v1.34.0 | 16 Sep 24 10:25 UTC | 16 Sep 24 10:25 UTC |
	|         | -p addons-001438                     |                      |         |         |                     |                     |
	| ip      | addons-001438 ip                     | addons-001438        | jenkins | v1.34.0 | 16 Sep 24 10:25 UTC | 16 Sep 24 10:25 UTC |
	| addons  | addons-001438 addons disable         | addons-001438        | jenkins | v1.34.0 | 16 Sep 24 10:25 UTC | 16 Sep 24 10:25 UTC |
	|         | registry --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | addons-001438 addons disable         | addons-001438        | jenkins | v1.34.0 | 16 Sep 24 10:25 UTC | 16 Sep 24 10:25 UTC |
	|         | headlamp --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | addons-001438 addons disable         | addons-001438        | jenkins | v1.34.0 | 16 Sep 24 10:26 UTC | 16 Sep 24 10:27 UTC |
	|         | helm-tiller --alsologtostderr        |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-001438        | jenkins | v1.34.0 | 16 Sep 24 10:27 UTC | 16 Sep 24 10:27 UTC |
	|         | addons-001438                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p             | addons-001438        | jenkins | v1.34.0 | 16 Sep 24 10:27 UTC | 16 Sep 24 10:27 UTC |
	|         | addons-001438                        |                      |         |         |                     |                     |
	| addons  | addons-001438 addons                 | addons-001438        | jenkins | v1.34.0 | 16 Sep 24 10:31 UTC | 16 Sep 24 10:31 UTC |
	|         | disable metrics-server               |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:21:42
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:21:42.990297   12265 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:21:42.990427   12265 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:21:42.990438   12265 out.go:358] Setting ErrFile to fd 2...
	I0916 10:21:42.990444   12265 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:21:42.990619   12265 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3851/.minikube/bin
	I0916 10:21:42.991237   12265 out.go:352] Setting JSON to false
	I0916 10:21:42.992075   12265 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":253,"bootTime":1726481850,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:21:42.992165   12265 start.go:139] virtualization: kvm guest
	I0916 10:21:42.994057   12265 out.go:177] * [addons-001438] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:21:42.995363   12265 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:21:42.995366   12265 notify.go:220] Checking for updates...
	I0916 10:21:42.996620   12265 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:21:42.997884   12265 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3851/kubeconfig
	I0916 10:21:42.999244   12265 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 10:21:43.000448   12265 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:21:43.001744   12265 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:21:43.003140   12265 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:21:43.035292   12265 out.go:177] * Using the kvm2 driver based on user configuration
	I0916 10:21:43.036591   12265 start.go:297] selected driver: kvm2
	I0916 10:21:43.036604   12265 start.go:901] validating driver "kvm2" against <nil>
	I0916 10:21:43.036617   12265 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:21:43.037618   12265 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:21:43.037687   12265 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19651-3851/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0916 10:21:43.052612   12265 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0916 10:21:43.052654   12265 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 10:21:43.052880   12265 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:21:43.052910   12265 cni.go:84] Creating CNI manager for ""
	I0916 10:21:43.052948   12265 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0916 10:21:43.052956   12265 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0916 10:21:43.053000   12265 start.go:340] cluster config:
	{Name:addons-001438 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-001438 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:21:43.053089   12265 iso.go:125] acquiring lock: {Name:mk8165b793e44378487d96b0de120a258f46e187 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:21:43.054779   12265 out.go:177] * Starting "addons-001438" primary control-plane node in "addons-001438" cluster
	I0916 10:21:43.056048   12265 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:21:43.056073   12265 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0916 10:21:43.056099   12265 cache.go:56] Caching tarball of preloaded images
	I0916 10:21:43.056171   12265 preload.go:172] Found /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 10:21:43.056181   12265 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 10:21:43.056464   12265 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/config.json ...
	I0916 10:21:43.056479   12265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/config.json: {Name:mke7feffe145119f1110e818375562c2195d4fa4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:21:43.056601   12265 start.go:360] acquireMachinesLock for addons-001438: {Name:mk413037138532b69f77e412c3960796c09316f1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:21:43.056638   12265 start.go:364] duration metric: took 25.099µs to acquireMachinesLock for "addons-001438"
	I0916 10:21:43.056653   12265 start.go:93] Provisioning new machine with config: &{Name:addons-001438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-001438 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 10:21:43.056703   12265 start.go:125] createHost starting for "" (driver="kvm2")
	I0916 10:21:43.058226   12265 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0916 10:21:43.058340   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:21:43.058376   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:21:43.072993   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45045
	I0916 10:21:43.073475   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:21:43.073995   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:21:43.074020   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:21:43.074422   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:21:43.074620   12265 main.go:141] libmachine: (addons-001438) Calling .GetMachineName
	I0916 10:21:43.074787   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:21:43.074946   12265 start.go:159] libmachine.API.Create for "addons-001438" (driver="kvm2")
	I0916 10:21:43.074989   12265 client.go:168] LocalClient.Create starting
	I0916 10:21:43.075021   12265 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem
	I0916 10:21:43.311518   12265 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem
	I0916 10:21:43.475888   12265 main.go:141] libmachine: Running pre-create checks...
	I0916 10:21:43.475917   12265 main.go:141] libmachine: (addons-001438) Calling .PreCreateCheck
	I0916 10:21:43.476396   12265 main.go:141] libmachine: (addons-001438) Calling .GetConfigRaw
	I0916 10:21:43.476796   12265 main.go:141] libmachine: Creating machine...
	I0916 10:21:43.476809   12265 main.go:141] libmachine: (addons-001438) Calling .Create
	I0916 10:21:43.476954   12265 main.go:141] libmachine: (addons-001438) Creating KVM machine...
	I0916 10:21:43.478137   12265 main.go:141] libmachine: (addons-001438) DBG | found existing default KVM network
	I0916 10:21:43.478893   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:43.478751   12287 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001151f0}
	I0916 10:21:43.478937   12265 main.go:141] libmachine: (addons-001438) DBG | created network xml: 
	I0916 10:21:43.478958   12265 main.go:141] libmachine: (addons-001438) DBG | <network>
	I0916 10:21:43.478967   12265 main.go:141] libmachine: (addons-001438) DBG |   <name>mk-addons-001438</name>
	I0916 10:21:43.478974   12265 main.go:141] libmachine: (addons-001438) DBG |   <dns enable='no'/>
	I0916 10:21:43.478986   12265 main.go:141] libmachine: (addons-001438) DBG |   
	I0916 10:21:43.478998   12265 main.go:141] libmachine: (addons-001438) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0916 10:21:43.479006   12265 main.go:141] libmachine: (addons-001438) DBG |     <dhcp>
	I0916 10:21:43.479018   12265 main.go:141] libmachine: (addons-001438) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0916 10:21:43.479026   12265 main.go:141] libmachine: (addons-001438) DBG |     </dhcp>
	I0916 10:21:43.479036   12265 main.go:141] libmachine: (addons-001438) DBG |   </ip>
	I0916 10:21:43.479087   12265 main.go:141] libmachine: (addons-001438) DBG |   
	I0916 10:21:43.479109   12265 main.go:141] libmachine: (addons-001438) DBG | </network>
	I0916 10:21:43.479150   12265 main.go:141] libmachine: (addons-001438) DBG | 
	I0916 10:21:43.484546   12265 main.go:141] libmachine: (addons-001438) DBG | trying to create private KVM network mk-addons-001438 192.168.39.0/24...
	I0916 10:21:43.547822   12265 main.go:141] libmachine: (addons-001438) DBG | private KVM network mk-addons-001438 192.168.39.0/24 created
	I0916 10:21:43.547845   12265 main.go:141] libmachine: (addons-001438) Setting up store path in /home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438 ...
	I0916 10:21:43.547862   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:43.547813   12287 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 10:21:43.547875   12265 main.go:141] libmachine: (addons-001438) Building disk image from file:///home/jenkins/minikube-integration/19651-3851/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso
	I0916 10:21:43.547936   12265 main.go:141] libmachine: (addons-001438) Downloading /home/jenkins/minikube-integration/19651-3851/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19651-3851/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso...
	I0916 10:21:43.797047   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:43.796916   12287 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa...
	I0916 10:21:43.906021   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:43.905909   12287 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/addons-001438.rawdisk...
	I0916 10:21:43.906051   12265 main.go:141] libmachine: (addons-001438) DBG | Writing magic tar header
	I0916 10:21:43.906060   12265 main.go:141] libmachine: (addons-001438) DBG | Writing SSH key tar header
	I0916 10:21:43.906067   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:43.906027   12287 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438 ...
	I0916 10:21:43.906123   12265 main.go:141] libmachine: (addons-001438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438
	I0916 10:21:43.906172   12265 main.go:141] libmachine: (addons-001438) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438 (perms=drwx------)
	I0916 10:21:43.906194   12265 main.go:141] libmachine: (addons-001438) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851/.minikube/machines (perms=drwxr-xr-x)
	I0916 10:21:43.906204   12265 main.go:141] libmachine: (addons-001438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851/.minikube/machines
	I0916 10:21:43.906222   12265 main.go:141] libmachine: (addons-001438) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851/.minikube (perms=drwxr-xr-x)
	I0916 10:21:43.906230   12265 main.go:141] libmachine: (addons-001438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 10:21:43.906236   12265 main.go:141] libmachine: (addons-001438) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851 (perms=drwxrwxr-x)
	I0916 10:21:43.906243   12265 main.go:141] libmachine: (addons-001438) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0916 10:21:43.906248   12265 main.go:141] libmachine: (addons-001438) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0916 10:21:43.906258   12265 main.go:141] libmachine: (addons-001438) Creating domain...
	I0916 10:21:43.906264   12265 main.go:141] libmachine: (addons-001438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851
	I0916 10:21:43.906275   12265 main.go:141] libmachine: (addons-001438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0916 10:21:43.906309   12265 main.go:141] libmachine: (addons-001438) DBG | Checking permissions on dir: /home/jenkins
	I0916 10:21:43.906325   12265 main.go:141] libmachine: (addons-001438) DBG | Checking permissions on dir: /home
	I0916 10:21:43.906338   12265 main.go:141] libmachine: (addons-001438) DBG | Skipping /home - not owner
	I0916 10:21:43.907204   12265 main.go:141] libmachine: (addons-001438) define libvirt domain using xml: 
	I0916 10:21:43.907223   12265 main.go:141] libmachine: (addons-001438) <domain type='kvm'>
	I0916 10:21:43.907235   12265 main.go:141] libmachine: (addons-001438)   <name>addons-001438</name>
	I0916 10:21:43.907246   12265 main.go:141] libmachine: (addons-001438)   <memory unit='MiB'>4000</memory>
	I0916 10:21:43.907255   12265 main.go:141] libmachine: (addons-001438)   <vcpu>2</vcpu>
	I0916 10:21:43.907265   12265 main.go:141] libmachine: (addons-001438)   <features>
	I0916 10:21:43.907274   12265 main.go:141] libmachine: (addons-001438)     <acpi/>
	I0916 10:21:43.907282   12265 main.go:141] libmachine: (addons-001438)     <apic/>
	I0916 10:21:43.907294   12265 main.go:141] libmachine: (addons-001438)     <pae/>
	I0916 10:21:43.907307   12265 main.go:141] libmachine: (addons-001438)     
	I0916 10:21:43.907318   12265 main.go:141] libmachine: (addons-001438)   </features>
	I0916 10:21:43.907327   12265 main.go:141] libmachine: (addons-001438)   <cpu mode='host-passthrough'>
	I0916 10:21:43.907337   12265 main.go:141] libmachine: (addons-001438)   
	I0916 10:21:43.907349   12265 main.go:141] libmachine: (addons-001438)   </cpu>
	I0916 10:21:43.907364   12265 main.go:141] libmachine: (addons-001438)   <os>
	I0916 10:21:43.907373   12265 main.go:141] libmachine: (addons-001438)     <type>hvm</type>
	I0916 10:21:43.907383   12265 main.go:141] libmachine: (addons-001438)     <boot dev='cdrom'/>
	I0916 10:21:43.907392   12265 main.go:141] libmachine: (addons-001438)     <boot dev='hd'/>
	I0916 10:21:43.907402   12265 main.go:141] libmachine: (addons-001438)     <bootmenu enable='no'/>
	I0916 10:21:43.907415   12265 main.go:141] libmachine: (addons-001438)   </os>
	I0916 10:21:43.907427   12265 main.go:141] libmachine: (addons-001438)   <devices>
	I0916 10:21:43.907435   12265 main.go:141] libmachine: (addons-001438)     <disk type='file' device='cdrom'>
	I0916 10:21:43.907452   12265 main.go:141] libmachine: (addons-001438)       <source file='/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/boot2docker.iso'/>
	I0916 10:21:43.907463   12265 main.go:141] libmachine: (addons-001438)       <target dev='hdc' bus='scsi'/>
	I0916 10:21:43.907489   12265 main.go:141] libmachine: (addons-001438)       <readonly/>
	I0916 10:21:43.907508   12265 main.go:141] libmachine: (addons-001438)     </disk>
	I0916 10:21:43.907518   12265 main.go:141] libmachine: (addons-001438)     <disk type='file' device='disk'>
	I0916 10:21:43.907531   12265 main.go:141] libmachine: (addons-001438)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0916 10:21:43.907547   12265 main.go:141] libmachine: (addons-001438)       <source file='/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/addons-001438.rawdisk'/>
	I0916 10:21:43.907558   12265 main.go:141] libmachine: (addons-001438)       <target dev='hda' bus='virtio'/>
	I0916 10:21:43.907568   12265 main.go:141] libmachine: (addons-001438)     </disk>
	I0916 10:21:43.907583   12265 main.go:141] libmachine: (addons-001438)     <interface type='network'>
	I0916 10:21:43.907595   12265 main.go:141] libmachine: (addons-001438)       <source network='mk-addons-001438'/>
	I0916 10:21:43.907606   12265 main.go:141] libmachine: (addons-001438)       <model type='virtio'/>
	I0916 10:21:43.907616   12265 main.go:141] libmachine: (addons-001438)     </interface>
	I0916 10:21:43.907624   12265 main.go:141] libmachine: (addons-001438)     <interface type='network'>
	I0916 10:21:43.907634   12265 main.go:141] libmachine: (addons-001438)       <source network='default'/>
	I0916 10:21:43.907645   12265 main.go:141] libmachine: (addons-001438)       <model type='virtio'/>
	I0916 10:21:43.907667   12265 main.go:141] libmachine: (addons-001438)     </interface>
	I0916 10:21:43.907687   12265 main.go:141] libmachine: (addons-001438)     <serial type='pty'>
	I0916 10:21:43.907697   12265 main.go:141] libmachine: (addons-001438)       <target port='0'/>
	I0916 10:21:43.907706   12265 main.go:141] libmachine: (addons-001438)     </serial>
	I0916 10:21:43.907717   12265 main.go:141] libmachine: (addons-001438)     <console type='pty'>
	I0916 10:21:43.907735   12265 main.go:141] libmachine: (addons-001438)       <target type='serial' port='0'/>
	I0916 10:21:43.907745   12265 main.go:141] libmachine: (addons-001438)     </console>
	I0916 10:21:43.907758   12265 main.go:141] libmachine: (addons-001438)     <rng model='virtio'>
	I0916 10:21:43.907772   12265 main.go:141] libmachine: (addons-001438)       <backend model='random'>/dev/random</backend>
	I0916 10:21:43.907777   12265 main.go:141] libmachine: (addons-001438)     </rng>
	I0916 10:21:43.907785   12265 main.go:141] libmachine: (addons-001438)     
	I0916 10:21:43.907794   12265 main.go:141] libmachine: (addons-001438)     
	I0916 10:21:43.907804   12265 main.go:141] libmachine: (addons-001438)   </devices>
	I0916 10:21:43.907814   12265 main.go:141] libmachine: (addons-001438) </domain>
	I0916 10:21:43.907826   12265 main.go:141] libmachine: (addons-001438) 
	I0916 10:21:43.913322   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:98:e7:17 in network default
	I0916 10:21:43.913924   12265 main.go:141] libmachine: (addons-001438) Ensuring networks are active...
	I0916 10:21:43.913942   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:43.914588   12265 main.go:141] libmachine: (addons-001438) Ensuring network default is active
	I0916 10:21:43.914879   12265 main.go:141] libmachine: (addons-001438) Ensuring network mk-addons-001438 is active
	I0916 10:21:43.915337   12265 main.go:141] libmachine: (addons-001438) Getting domain xml...
	I0916 10:21:43.915987   12265 main.go:141] libmachine: (addons-001438) Creating domain...
	I0916 10:21:45.289678   12265 main.go:141] libmachine: (addons-001438) Waiting to get IP...
	I0916 10:21:45.290387   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:45.290811   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:45.290836   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:45.290776   12287 retry.go:31] will retry after 253.823507ms: waiting for machine to come up
	I0916 10:21:45.546308   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:45.546737   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:45.546757   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:45.546713   12287 retry.go:31] will retry after 316.98215ms: waiting for machine to come up
	I0916 10:21:45.865275   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:45.865712   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:45.865742   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:45.865673   12287 retry.go:31] will retry after 438.875906ms: waiting for machine to come up
	I0916 10:21:46.306361   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:46.306829   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:46.306854   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:46.306787   12287 retry.go:31] will retry after 378.922529ms: waiting for machine to come up
	I0916 10:21:46.687272   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:46.687683   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:46.687718   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:46.687648   12287 retry.go:31] will retry after 695.664658ms: waiting for machine to come up
	I0916 10:21:47.384623   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:47.385017   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:47.385044   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:47.384985   12287 retry.go:31] will retry after 669.1436ms: waiting for machine to come up
	I0916 10:21:48.056603   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:48.057159   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:48.057183   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:48.057099   12287 retry.go:31] will retry after 739.217064ms: waiting for machine to come up
	I0916 10:21:48.798348   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:48.798788   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:48.798824   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:48.798748   12287 retry.go:31] will retry after 963.828739ms: waiting for machine to come up
	I0916 10:21:49.763677   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:49.764095   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:49.764120   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:49.764043   12287 retry.go:31] will retry after 1.625531991s: waiting for machine to come up
	I0916 10:21:51.391980   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:51.392322   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:51.392343   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:51.392285   12287 retry.go:31] will retry after 1.960554167s: waiting for machine to come up
	I0916 10:21:53.354469   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:53.354989   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:53.355016   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:53.354937   12287 retry.go:31] will retry after 2.035806393s: waiting for machine to come up
	I0916 10:21:55.393065   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:55.393432   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:55.393451   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:55.393400   12287 retry.go:31] will retry after 3.028756428s: waiting for machine to come up
	I0916 10:21:58.424174   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:58.424544   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:58.424577   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:58.424517   12287 retry.go:31] will retry after 3.769682763s: waiting for machine to come up
	I0916 10:22:02.198084   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:02.198470   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:22:02.198492   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:22:02.198430   12287 retry.go:31] will retry after 5.547519077s: waiting for machine to come up
	I0916 10:22:07.750830   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:07.751191   12265 main.go:141] libmachine: (addons-001438) Found IP for machine: 192.168.39.72
	I0916 10:22:07.751209   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has current primary IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:07.751215   12265 main.go:141] libmachine: (addons-001438) Reserving static IP address...
	I0916 10:22:07.751548   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find host DHCP lease matching {name: "addons-001438", mac: "52:54:00:9c:55:19", ip: "192.168.39.72"} in network mk-addons-001438
	I0916 10:22:07.821469   12265 main.go:141] libmachine: (addons-001438) DBG | Getting to WaitForSSH function...
	I0916 10:22:07.821506   12265 main.go:141] libmachine: (addons-001438) Reserved static IP address: 192.168.39.72
	I0916 10:22:07.821523   12265 main.go:141] libmachine: (addons-001438) Waiting for SSH to be available...
	I0916 10:22:07.823797   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:07.824029   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438
	I0916 10:22:07.824057   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find defined IP address of network mk-addons-001438 interface with MAC address 52:54:00:9c:55:19
	I0916 10:22:07.824199   12265 main.go:141] libmachine: (addons-001438) DBG | Using SSH client type: external
	I0916 10:22:07.824226   12265 main.go:141] libmachine: (addons-001438) DBG | Using SSH private key: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa (-rw-------)
	I0916 10:22:07.824261   12265 main.go:141] libmachine: (addons-001438) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0916 10:22:07.824273   12265 main.go:141] libmachine: (addons-001438) DBG | About to run SSH command:
	I0916 10:22:07.824297   12265 main.go:141] libmachine: (addons-001438) DBG | exit 0
	I0916 10:22:07.835394   12265 main.go:141] libmachine: (addons-001438) DBG | SSH cmd err, output: exit status 255: 
	I0916 10:22:07.835415   12265 main.go:141] libmachine: (addons-001438) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0916 10:22:07.835421   12265 main.go:141] libmachine: (addons-001438) DBG | command : exit 0
	I0916 10:22:07.835428   12265 main.go:141] libmachine: (addons-001438) DBG | err     : exit status 255
	I0916 10:22:07.835435   12265 main.go:141] libmachine: (addons-001438) DBG | output  : 
	I0916 10:22:10.838181   12265 main.go:141] libmachine: (addons-001438) DBG | Getting to WaitForSSH function...
	I0916 10:22:10.840410   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:10.840805   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:10.840830   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:10.840953   12265 main.go:141] libmachine: (addons-001438) DBG | Using SSH client type: external
	I0916 10:22:10.840980   12265 main.go:141] libmachine: (addons-001438) DBG | Using SSH private key: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa (-rw-------)
	I0916 10:22:10.841012   12265 main.go:141] libmachine: (addons-001438) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.72 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0916 10:22:10.841026   12265 main.go:141] libmachine: (addons-001438) DBG | About to run SSH command:
	I0916 10:22:10.841039   12265 main.go:141] libmachine: (addons-001438) DBG | exit 0
	I0916 10:22:10.969218   12265 main.go:141] libmachine: (addons-001438) DBG | SSH cmd err, output: <nil>: 
	I0916 10:22:10.969498   12265 main.go:141] libmachine: (addons-001438) KVM machine creation complete!
	I0916 10:22:10.969791   12265 main.go:141] libmachine: (addons-001438) Calling .GetConfigRaw
	I0916 10:22:10.970351   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:10.970568   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:10.970704   12265 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0916 10:22:10.970716   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:10.971844   12265 main.go:141] libmachine: Detecting operating system of created instance...
	I0916 10:22:10.971857   12265 main.go:141] libmachine: Waiting for SSH to be available...
	I0916 10:22:10.971863   12265 main.go:141] libmachine: Getting to WaitForSSH function...
	I0916 10:22:10.971871   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:10.973963   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:10.974287   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:10.974322   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:10.974443   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:10.974600   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:10.974766   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:10.974897   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:10.975056   12265 main.go:141] libmachine: Using SSH client type: native
	I0916 10:22:10.975258   12265 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.72 22 <nil> <nil>}
	I0916 10:22:10.975270   12265 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0916 10:22:11.084303   12265 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 10:22:11.084322   12265 main.go:141] libmachine: Detecting the provisioner...
	I0916 10:22:11.084329   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:11.086985   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.087399   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:11.087449   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.087637   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:11.087805   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:11.087957   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:11.088052   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:11.088212   12265 main.go:141] libmachine: Using SSH client type: native
	I0916 10:22:11.088404   12265 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.72 22 <nil> <nil>}
	I0916 10:22:11.088420   12265 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0916 10:22:11.197622   12265 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0916 10:22:11.197666   12265 main.go:141] libmachine: found compatible host: buildroot
	I0916 10:22:11.197674   12265 main.go:141] libmachine: Provisioning with buildroot...
	I0916 10:22:11.197683   12265 main.go:141] libmachine: (addons-001438) Calling .GetMachineName
	I0916 10:22:11.197922   12265 buildroot.go:166] provisioning hostname "addons-001438"
	I0916 10:22:11.197936   12265 main.go:141] libmachine: (addons-001438) Calling .GetMachineName
	I0916 10:22:11.198131   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:11.200614   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.200955   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:11.200988   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.201100   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:11.201269   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:11.201396   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:11.201536   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:11.201681   12265 main.go:141] libmachine: Using SSH client type: native
	I0916 10:22:11.201878   12265 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.72 22 <nil> <nil>}
	I0916 10:22:11.201891   12265 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-001438 && echo "addons-001438" | sudo tee /etc/hostname
	I0916 10:22:11.329393   12265 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-001438
	
	I0916 10:22:11.329423   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:11.332085   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.332370   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:11.332397   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.332557   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:11.332746   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:11.332868   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:11.332999   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:11.333118   12265 main.go:141] libmachine: Using SSH client type: native
	I0916 10:22:11.333336   12265 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.72 22 <nil> <nil>}
	I0916 10:22:11.333353   12265 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-001438' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-001438/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-001438' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:22:11.454462   12265 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 10:22:11.454486   12265 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3851/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3851/.minikube}
	I0916 10:22:11.454539   12265 buildroot.go:174] setting up certificates
	I0916 10:22:11.454553   12265 provision.go:84] configureAuth start
	I0916 10:22:11.454562   12265 main.go:141] libmachine: (addons-001438) Calling .GetMachineName
	I0916 10:22:11.454823   12265 main.go:141] libmachine: (addons-001438) Calling .GetIP
	I0916 10:22:11.457458   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.457872   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:11.457902   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.458065   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:11.460166   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.460456   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:11.460484   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.460579   12265 provision.go:143] copyHostCerts
	I0916 10:22:11.460674   12265 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem (1123 bytes)
	I0916 10:22:11.460835   12265 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem (1679 bytes)
	I0916 10:22:11.460925   12265 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem (1082 bytes)
	I0916 10:22:11.460997   12265 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem org=jenkins.addons-001438 san=[127.0.0.1 192.168.39.72 addons-001438 localhost minikube]
	I0916 10:22:11.639072   12265 provision.go:177] copyRemoteCerts
	I0916 10:22:11.639141   12265 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:22:11.639169   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:11.641767   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.642050   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:11.642076   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.642240   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:11.642415   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:11.642519   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:11.642635   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:11.727509   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 10:22:11.752436   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 10:22:11.776436   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 10:22:11.799597   12265 provision.go:87] duration metric: took 345.032702ms to configureAuth
	I0916 10:22:11.799626   12265 buildroot.go:189] setting minikube options for container-runtime
	I0916 10:22:11.799813   12265 config.go:182] Loaded profile config "addons-001438": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:22:11.799904   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:11.802386   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.802675   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:11.802700   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.802854   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:11.803047   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:11.803187   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:11.803323   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:11.803504   12265 main.go:141] libmachine: Using SSH client type: native
	I0916 10:22:11.803689   12265 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.72 22 <nil> <nil>}
	I0916 10:22:11.803704   12265 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 10:22:12.030350   12265 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 10:22:12.030374   12265 main.go:141] libmachine: Checking connection to Docker...
	I0916 10:22:12.030382   12265 main.go:141] libmachine: (addons-001438) Calling .GetURL
	I0916 10:22:12.031607   12265 main.go:141] libmachine: (addons-001438) DBG | Using libvirt version 6000000
	I0916 10:22:12.034008   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.034296   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:12.034325   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.034451   12265 main.go:141] libmachine: Docker is up and running!
	I0916 10:22:12.034463   12265 main.go:141] libmachine: Reticulating splines...
	I0916 10:22:12.034470   12265 client.go:171] duration metric: took 28.959474569s to LocalClient.Create
	I0916 10:22:12.034491   12265 start.go:167] duration metric: took 28.959547297s to libmachine.API.Create "addons-001438"
	I0916 10:22:12.034500   12265 start.go:293] postStartSetup for "addons-001438" (driver="kvm2")
	I0916 10:22:12.034509   12265 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:22:12.034535   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:12.034731   12265 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:22:12.034762   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:12.036747   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.037041   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:12.037068   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.037200   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:12.037344   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:12.037486   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:12.037623   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:12.123403   12265 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:22:12.127815   12265 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 10:22:12.127838   12265 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3851/.minikube/addons for local assets ...
	I0916 10:22:12.127904   12265 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3851/.minikube/files for local assets ...
	I0916 10:22:12.127926   12265 start.go:296] duration metric: took 93.420957ms for postStartSetup
	I0916 10:22:12.127955   12265 main.go:141] libmachine: (addons-001438) Calling .GetConfigRaw
	I0916 10:22:12.128519   12265 main.go:141] libmachine: (addons-001438) Calling .GetIP
	I0916 10:22:12.131232   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.131510   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:12.131547   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.131776   12265 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/config.json ...
	I0916 10:22:12.131949   12265 start.go:128] duration metric: took 29.075237515s to createHost
	I0916 10:22:12.131975   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:12.133967   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.134281   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:12.134305   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.134418   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:12.134606   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:12.134753   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:12.134877   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:12.135036   12265 main.go:141] libmachine: Using SSH client type: native
	I0916 10:22:12.135185   12265 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.72 22 <nil> <nil>}
	I0916 10:22:12.135202   12265 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 10:22:12.245734   12265 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726482132.226578519
	
	I0916 10:22:12.245757   12265 fix.go:216] guest clock: 1726482132.226578519
	I0916 10:22:12.245764   12265 fix.go:229] Guest: 2024-09-16 10:22:12.226578519 +0000 UTC Remote: 2024-09-16 10:22:12.131960304 +0000 UTC m=+29.174301435 (delta=94.618215ms)
	I0916 10:22:12.245784   12265 fix.go:200] guest clock delta is within tolerance: 94.618215ms
	I0916 10:22:12.245790   12265 start.go:83] releasing machines lock for "addons-001438", held for 29.189143417s
	I0916 10:22:12.245809   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:12.246014   12265 main.go:141] libmachine: (addons-001438) Calling .GetIP
	I0916 10:22:12.248419   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.248678   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:12.248704   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.248832   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:12.249314   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:12.249485   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:12.249586   12265 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:22:12.249653   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:12.249707   12265 ssh_runner.go:195] Run: cat /version.json
	I0916 10:22:12.249728   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:12.252249   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.252497   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.252634   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:12.252657   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.252757   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:12.252904   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:12.252922   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:12.252925   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.253038   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:12.253093   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:12.253241   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:12.253258   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:12.253386   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:12.253515   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:12.362639   12265 ssh_runner.go:195] Run: systemctl --version
	I0916 10:22:12.368512   12265 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 10:22:12.527002   12265 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0916 10:22:12.532733   12265 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 10:22:12.532791   12265 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:22:12.548743   12265 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0916 10:22:12.548773   12265 start.go:495] detecting cgroup driver to use...
	I0916 10:22:12.548843   12265 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 10:22:12.564219   12265 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 10:22:12.578224   12265 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:22:12.578276   12265 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:22:12.591434   12265 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:22:12.604674   12265 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:22:12.712713   12265 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:22:12.868881   12265 docker.go:233] disabling docker service ...
	I0916 10:22:12.868945   12265 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:22:12.883262   12265 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:22:12.896034   12265 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:22:13.009183   12265 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:22:13.123591   12265 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:22:13.137411   12265 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:22:13.155768   12265 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 10:22:13.155832   12265 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:22:13.166378   12265 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 10:22:13.166436   12265 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:22:13.177199   12265 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:22:13.187753   12265 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:22:13.198460   12265 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:22:13.209356   12265 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:22:13.220222   12265 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:22:13.237721   12265 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:22:13.247992   12265 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:22:13.257214   12265 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0916 10:22:13.257274   12265 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0916 10:22:13.269843   12265 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:22:13.279361   12265 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:22:13.392424   12265 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 10:22:13.489919   12265 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 10:22:13.490002   12265 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 10:22:13.495269   12265 start.go:563] Will wait 60s for crictl version
	I0916 10:22:13.495342   12265 ssh_runner.go:195] Run: which crictl
	I0916 10:22:13.499375   12265 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:22:13.543037   12265 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0916 10:22:13.543161   12265 ssh_runner.go:195] Run: crio --version
	I0916 10:22:13.571422   12265 ssh_runner.go:195] Run: crio --version
	I0916 10:22:13.600892   12265 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0916 10:22:13.602164   12265 main.go:141] libmachine: (addons-001438) Calling .GetIP
	I0916 10:22:13.604725   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:13.605053   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:13.605090   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:13.605239   12265 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0916 10:22:13.609153   12265 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:22:13.621451   12265 kubeadm.go:883] updating cluster {Name:addons-001438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-001438 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.72 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 10:22:13.621560   12265 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:22:13.621616   12265 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:22:13.653616   12265 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0916 10:22:13.653695   12265 ssh_runner.go:195] Run: which lz4
	I0916 10:22:13.657722   12265 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0916 10:22:13.661843   12265 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0916 10:22:13.661873   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0916 10:22:14.968986   12265 crio.go:462] duration metric: took 1.311298771s to copy over tarball
	I0916 10:22:14.969053   12265 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0916 10:22:17.073836   12265 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.104757919s)
	I0916 10:22:17.073872   12265 crio.go:469] duration metric: took 2.104858266s to extract the tarball
	I0916 10:22:17.073881   12265 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0916 10:22:17.110316   12265 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:22:17.150207   12265 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 10:22:17.150233   12265 cache_images.go:84] Images are preloaded, skipping loading
	I0916 10:22:17.150241   12265 kubeadm.go:934] updating node { 192.168.39.72 8443 v1.31.1 crio true true} ...
	I0916 10:22:17.150343   12265 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-001438 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.72
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-001438 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 10:22:17.150424   12265 ssh_runner.go:195] Run: crio config
	I0916 10:22:17.195725   12265 cni.go:84] Creating CNI manager for ""
	I0916 10:22:17.195746   12265 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0916 10:22:17.195756   12265 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 10:22:17.195774   12265 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.72 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-001438 NodeName:addons-001438 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.72"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.72 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 10:22:17.195915   12265 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.72
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-001438"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.72
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.72"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 10:22:17.195969   12265 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:22:17.206079   12265 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:22:17.206139   12265 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 10:22:17.215719   12265 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0916 10:22:17.232125   12265 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:22:17.248126   12265 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0916 10:22:17.264165   12265 ssh_runner.go:195] Run: grep 192.168.39.72	control-plane.minikube.internal$ /etc/hosts
	I0916 10:22:17.267727   12265 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.72	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:22:17.279787   12265 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:22:17.393283   12265 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:22:17.410756   12265 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438 for IP: 192.168.39.72
	I0916 10:22:17.410774   12265 certs.go:194] generating shared ca certs ...
	I0916 10:22:17.410794   12265 certs.go:226] acquiring lock for ca certs: {Name:mkc5eb18b90c7d501da61b378999fd65b785fd5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:17.410949   12265 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key
	I0916 10:22:17.480758   12265 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt ...
	I0916 10:22:17.480787   12265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt: {Name:mkc291c3a986acc7f4de9183c4ef6d249d8de5a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:17.480965   12265 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key ...
	I0916 10:22:17.480980   12265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key: {Name:mk56bc8b146d891ba5f741ad0bd339fffdb85989 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:17.481075   12265 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key
	I0916 10:22:17.673219   12265 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.crt ...
	I0916 10:22:17.673250   12265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.crt: {Name:mk8d6878492eab0d99f630fc495324e3b843781a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:17.673403   12265 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key ...
	I0916 10:22:17.673414   12265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key: {Name:mk082b50320d253da8f01ad2454b69492e000fb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:17.673482   12265 certs.go:256] generating profile certs ...
	I0916 10:22:17.673531   12265 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/client.key
	I0916 10:22:17.673544   12265 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/client.crt with IP's: []
	I0916 10:22:17.921779   12265 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/client.crt ...
	I0916 10:22:17.921811   12265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/client.crt: {Name:mk9172b9e8f20da0dd399e583d4f0391784c25bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:17.921970   12265 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/client.key ...
	I0916 10:22:17.921981   12265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/client.key: {Name:mk65d84f1710f9ab616402324cb2a91f749aa3d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:17.922048   12265 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/apiserver.key.b670da03
	I0916 10:22:17.922066   12265 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/apiserver.crt.b670da03 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.72]
	I0916 10:22:17.984449   12265 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/apiserver.crt.b670da03 ...
	I0916 10:22:17.984473   12265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/apiserver.crt.b670da03: {Name:mk697c0092db030ad4df50333f6d1db035d298e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:17.984627   12265 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/apiserver.key.b670da03 ...
	I0916 10:22:17.984638   12265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/apiserver.key.b670da03: {Name:mkf74035add612ea1883fde9b662a919a8d7c5c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:17.984705   12265 certs.go:381] copying /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/apiserver.crt.b670da03 -> /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/apiserver.crt
	I0916 10:22:17.984774   12265 certs.go:385] copying /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/apiserver.key.b670da03 -> /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/apiserver.key
	I0916 10:22:17.984818   12265 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/proxy-client.key
	I0916 10:22:17.984834   12265 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/proxy-client.crt with IP's: []
	I0916 10:22:18.105094   12265 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/proxy-client.crt ...
	I0916 10:22:18.105122   12265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/proxy-client.crt: {Name:mk12379583893d02aa599284bf7c2e673e4a585f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:18.105290   12265 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/proxy-client.key ...
	I0916 10:22:18.105300   12265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/proxy-client.key: {Name:mkddc10c89aa36609a41c940a83606fa36ac69df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:18.105453   12265 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:22:18.105484   12265 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem (1082 bytes)
	I0916 10:22:18.105509   12265 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:22:18.105531   12265 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem (1679 bytes)
	I0916 10:22:18.106125   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:22:18.132592   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:22:18.173674   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:22:18.200455   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:22:18.223366   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0916 10:22:18.246242   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 10:22:18.269411   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:22:18.292157   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 10:22:18.314508   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:22:18.337365   12265 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 10:22:18.353286   12265 ssh_runner.go:195] Run: openssl version
	I0916 10:22:18.358942   12265 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:22:18.369103   12265 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:22:18.373299   12265 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:22:18.373346   12265 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:22:18.378948   12265 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 10:22:18.389436   12265 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:22:18.393342   12265 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 10:22:18.393387   12265 kubeadm.go:392] StartCluster: {Name:addons-001438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-001438 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.72 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:22:18.393452   12265 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 10:22:18.393509   12265 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 10:22:18.429056   12265 cri.go:89] found id: ""
	I0916 10:22:18.429118   12265 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 10:22:18.439123   12265 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 10:22:18.448797   12265 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 10:22:18.458281   12265 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 10:22:18.458303   12265 kubeadm.go:157] found existing configuration files:
	
	I0916 10:22:18.458357   12265 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 10:22:18.467304   12265 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 10:22:18.467373   12265 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 10:22:18.476476   12265 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 10:22:18.485402   12265 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 10:22:18.485467   12265 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 10:22:18.494643   12265 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 10:22:18.503578   12265 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 10:22:18.503657   12265 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 10:22:18.512633   12265 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 10:22:18.521391   12265 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 10:22:18.521454   12265 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 10:22:18.530381   12265 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0916 10:22:18.584992   12265 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 10:22:18.585058   12265 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 10:22:18.700906   12265 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 10:22:18.701050   12265 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 10:22:18.701195   12265 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 10:22:18.712665   12265 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 10:22:18.808124   12265 out.go:235]   - Generating certificates and keys ...
	I0916 10:22:18.808238   12265 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 10:22:18.808308   12265 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 10:22:18.808390   12265 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 10:22:18.884612   12265 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 10:22:19.103481   12265 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 10:22:19.230175   12265 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 10:22:19.422850   12265 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 10:22:19.423077   12265 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-001438 localhost] and IPs [192.168.39.72 127.0.0.1 ::1]
	I0916 10:22:19.499430   12265 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 10:22:19.499746   12265 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-001438 localhost] and IPs [192.168.39.72 127.0.0.1 ::1]
	I0916 10:22:19.689533   12265 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 10:22:19.770560   12265 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 10:22:20.159783   12265 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 10:22:20.160053   12265 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 10:22:20.575897   12265 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 10:22:20.728566   12265 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 10:22:21.092038   12265 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 10:22:21.382957   12265 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 10:22:21.446452   12265 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 10:22:21.447068   12265 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 10:22:21.451577   12265 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 10:22:21.454426   12265 out.go:235]   - Booting up control plane ...
	I0916 10:22:21.454540   12265 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 10:22:21.454614   12265 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 10:22:21.454722   12265 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 10:22:21.468531   12265 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 10:22:21.475700   12265 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 10:22:21.475767   12265 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 10:22:21.606009   12265 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 10:22:21.606143   12265 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 10:22:22.124369   12265 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 517.881759ms
	I0916 10:22:22.124492   12265 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 10:22:27.123389   12265 kubeadm.go:310] [api-check] The API server is healthy after 5.002163965s
	I0916 10:22:27.138636   12265 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 10:22:27.154171   12265 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 10:22:27.185604   12265 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 10:22:27.185839   12265 kubeadm.go:310] [mark-control-plane] Marking the node addons-001438 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 10:22:27.198602   12265 kubeadm.go:310] [bootstrap-token] Using token: os1o8m.q16efzg2rjnkpln8
	I0916 10:22:27.199966   12265 out.go:235]   - Configuring RBAC rules ...
	I0916 10:22:27.200085   12265 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 10:22:27.209733   12265 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 10:22:27.218630   12265 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 10:22:27.222473   12265 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 10:22:27.226151   12265 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 10:22:27.230516   12265 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 10:22:27.529586   12265 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 10:22:27.967178   12265 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 10:22:28.529936   12265 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 10:22:28.529960   12265 kubeadm.go:310] 
	I0916 10:22:28.530028   12265 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 10:22:28.530044   12265 kubeadm.go:310] 
	I0916 10:22:28.530137   12265 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 10:22:28.530173   12265 kubeadm.go:310] 
	I0916 10:22:28.530227   12265 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 10:22:28.530307   12265 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 10:22:28.530390   12265 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 10:22:28.530397   12265 kubeadm.go:310] 
	I0916 10:22:28.530463   12265 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 10:22:28.530472   12265 kubeadm.go:310] 
	I0916 10:22:28.530525   12265 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 10:22:28.530537   12265 kubeadm.go:310] 
	I0916 10:22:28.530609   12265 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 10:22:28.530728   12265 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 10:22:28.530832   12265 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 10:22:28.530868   12265 kubeadm.go:310] 
	I0916 10:22:28.530981   12265 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 10:22:28.531080   12265 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 10:22:28.531091   12265 kubeadm.go:310] 
	I0916 10:22:28.531215   12265 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token os1o8m.q16efzg2rjnkpln8 \
	I0916 10:22:28.531358   12265 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:18e2e9ba1c06caae6264fad234e64aee68efc10986a3ab0ed9e448768962daf7 \
	I0916 10:22:28.531389   12265 kubeadm.go:310] 	--control-plane 
	I0916 10:22:28.531397   12265 kubeadm.go:310] 
	I0916 10:22:28.531518   12265 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 10:22:28.531528   12265 kubeadm.go:310] 
	I0916 10:22:28.531639   12265 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token os1o8m.q16efzg2rjnkpln8 \
	I0916 10:22:28.531783   12265 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:18e2e9ba1c06caae6264fad234e64aee68efc10986a3ab0ed9e448768962daf7 
	I0916 10:22:28.532220   12265 kubeadm.go:310] W0916 10:22:18.568727     816 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:22:28.532498   12265 kubeadm.go:310] W0916 10:22:18.569597     816 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:22:28.532623   12265 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 10:22:28.532635   12265 cni.go:84] Creating CNI manager for ""
	I0916 10:22:28.532642   12265 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0916 10:22:28.534239   12265 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0916 10:22:28.535682   12265 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0916 10:22:28.547306   12265 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0916 10:22:28.567029   12265 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 10:22:28.567083   12265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:22:28.567116   12265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-001438 minikube.k8s.io/updated_at=2024_09_16T10_22_28_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=addons-001438 minikube.k8s.io/primary=true
	I0916 10:22:28.599898   12265 ops.go:34] apiserver oom_adj: -16
	I0916 10:22:28.718193   12265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:22:29.219097   12265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:22:29.718331   12265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:22:30.219213   12265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:22:30.718728   12265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:22:31.218997   12265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:22:31.719218   12265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:22:32.218543   12265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:22:32.335651   12265 kubeadm.go:1113] duration metric: took 3.768632423s to wait for elevateKubeSystemPrivileges
	I0916 10:22:32.335685   12265 kubeadm.go:394] duration metric: took 13.942299744s to StartCluster
	I0916 10:22:32.335709   12265 settings.go:142] acquiring lock: {Name:mka3093e7066e81538e0a2685a28fdafde8d951b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:32.335851   12265 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3851/kubeconfig
	I0916 10:22:32.336274   12265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/kubeconfig: {Name:mkd6460249badead51f07eabf604da45528f6a24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:32.336491   12265 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 10:22:32.336522   12265 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.72 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 10:22:32.336653   12265 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0916 10:22:32.336724   12265 config.go:182] Loaded profile config "addons-001438": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:22:32.336769   12265 addons.go:69] Setting default-storageclass=true in profile "addons-001438"
	I0916 10:22:32.336779   12265 addons.go:69] Setting ingress-dns=true in profile "addons-001438"
	I0916 10:22:32.336787   12265 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-001438"
	I0916 10:22:32.336780   12265 addons.go:69] Setting ingress=true in profile "addons-001438"
	I0916 10:22:32.336793   12265 addons.go:69] Setting cloud-spanner=true in profile "addons-001438"
	I0916 10:22:32.336813   12265 addons.go:69] Setting inspektor-gadget=true in profile "addons-001438"
	I0916 10:22:32.336820   12265 addons.go:69] Setting gcp-auth=true in profile "addons-001438"
	I0916 10:22:32.336832   12265 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-001438"
	I0916 10:22:32.336835   12265 addons.go:234] Setting addon cloud-spanner=true in "addons-001438"
	I0916 10:22:32.336828   12265 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-001438"
	I0916 10:22:32.336844   12265 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-001438"
	I0916 10:22:32.336825   12265 addons.go:234] Setting addon inspektor-gadget=true in "addons-001438"
	I0916 10:22:32.336853   12265 addons.go:69] Setting registry=true in profile "addons-001438"
	I0916 10:22:32.336867   12265 addons.go:234] Setting addon registry=true in "addons-001438"
	I0916 10:22:32.336883   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.336888   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.336896   12265 addons.go:69] Setting helm-tiller=true in profile "addons-001438"
	I0916 10:22:32.336908   12265 addons.go:234] Setting addon helm-tiller=true in "addons-001438"
	I0916 10:22:32.336937   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.336940   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.336844   12265 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-001438"
	I0916 10:22:32.337250   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.337262   12265 addons.go:69] Setting volcano=true in profile "addons-001438"
	I0916 10:22:32.337273   12265 addons.go:234] Setting addon volcano=true in "addons-001438"
	I0916 10:22:32.337281   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.337295   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.337315   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.337328   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.336808   12265 addons.go:234] Setting addon ingress=true in "addons-001438"
	I0916 10:22:32.337347   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.337348   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.337365   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.337367   12265 addons.go:69] Setting volumesnapshots=true in profile "addons-001438"
	I0916 10:22:32.337379   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.337381   12265 addons.go:234] Setting addon volumesnapshots=true in "addons-001438"
	I0916 10:22:32.337412   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.336888   12265 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-001438"
	I0916 10:22:32.337442   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.336769   12265 addons.go:69] Setting yakd=true in profile "addons-001438"
	I0916 10:22:32.337489   12265 addons.go:234] Setting addon yakd=true in "addons-001438"
	I0916 10:22:32.337633   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.337660   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.336835   12265 addons.go:69] Setting metrics-server=true in profile "addons-001438"
	I0916 10:22:32.337353   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.337714   12265 addons.go:234] Setting addon metrics-server=true in "addons-001438"
	I0916 10:22:32.337741   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.337700   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.337795   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.336844   12265 mustload.go:65] Loading cluster: addons-001438
	I0916 10:22:32.336824   12265 addons.go:69] Setting storage-provisioner=true in profile "addons-001438"
	I0916 10:22:32.337840   12265 addons.go:234] Setting addon storage-provisioner=true in "addons-001438"
	I0916 10:22:32.337328   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.337881   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.336804   12265 addons.go:234] Setting addon ingress-dns=true in "addons-001438"
	I0916 10:22:32.337251   12265 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-001438"
	I0916 10:22:32.337944   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.338072   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.338099   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.338127   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.338301   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.338331   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.338413   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.338421   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.338448   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.338455   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.338446   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.338765   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.338792   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.338818   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.338845   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.338995   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.339304   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.339363   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.342405   12265 out.go:177] * Verifying Kubernetes components...
	I0916 10:22:32.343665   12265 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:22:32.357106   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34833
	I0916 10:22:32.357444   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35947
	I0916 10:22:32.357655   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37677
	I0916 10:22:32.357797   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.357897   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.358211   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.358403   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.358419   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.358562   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.358574   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.358633   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37893
	I0916 10:22:32.358790   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.358949   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.358960   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.359007   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44299
	I0916 10:22:32.369699   12265 config.go:182] Loaded profile config "addons-001438": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:22:32.369748   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.369818   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.370020   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.370060   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.370069   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.370101   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.370194   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.370269   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.370379   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.370390   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.370789   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.370827   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.370908   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.370969   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.371094   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.371111   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.371475   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.371508   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.371573   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.371638   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.371663   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.371731   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.386697   12265 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-001438"
	I0916 10:22:32.386747   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.386763   12265 addons.go:234] Setting addon default-storageclass=true in "addons-001438"
	I0916 10:22:32.386810   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.387114   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.387173   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.387252   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.387291   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.408433   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37283
	I0916 10:22:32.409200   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.409836   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.409856   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.410249   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.410830   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.410872   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.411145   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42803
	I0916 10:22:32.411578   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.413298   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.413319   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.414168   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34147
	I0916 10:22:32.414190   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45747
	I0916 10:22:32.414292   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36809
	I0916 10:22:32.414570   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.414671   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.415178   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.415195   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.415681   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.416214   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.416252   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.416442   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42029
	I0916 10:22:32.416592   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.417197   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.417231   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.417415   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40175
	I0916 10:22:32.417454   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.417595   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.417608   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.417843   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.417917   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.418037   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.418050   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.418410   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.418443   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.418409   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.418501   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.419031   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.419065   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.419266   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.419281   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.419404   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.419414   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.419702   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.419847   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.420545   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.421091   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.421133   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.421574   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.421979   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40845
	I0916 10:22:32.422963   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.423382   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.423399   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.423697   12265 out.go:177]   - Using image docker.io/registry:2.8.3
	I0916 10:22:32.423813   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.424320   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.424354   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.425846   12265 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0916 10:22:32.425941   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45967
	I0916 10:22:32.426062   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42039
	I0916 10:22:32.426213   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40085
	I0916 10:22:32.426381   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.426757   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.426931   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.426942   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.426976   12265 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0916 10:22:32.426992   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0916 10:22:32.427011   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.427391   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.427470   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.427489   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.427946   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.428354   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.428385   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.428598   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.428889   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.428924   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.429188   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.429202   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.429517   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.431934   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33361
	I0916 10:22:32.431987   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.432541   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.432563   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.432751   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.432883   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.432998   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.433120   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:32.433712   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.435531   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.435730   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:32.435742   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:32.435888   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:32.435899   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:32.435907   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:32.435913   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:32.436070   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:32.436085   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	W0916 10:22:32.436166   12265 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0916 10:22:32.440699   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35285
	I0916 10:22:32.441072   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.441617   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.441644   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.441979   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.442497   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.442531   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.450769   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36009
	I0916 10:22:32.451259   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.451700   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.451718   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.452549   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.453092   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.453146   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.454430   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42107
	I0916 10:22:32.454743   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.455061   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.455149   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45965
	I0916 10:22:32.455842   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.455847   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.455860   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.455871   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.455922   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.456243   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.456542   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.456622   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.456639   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.456747   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.457901   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34395
	I0916 10:22:32.458037   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.458209   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.458254   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.458704   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.458721   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.459089   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.459296   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.459533   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.460121   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.460511   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.460545   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.460978   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41771
	I0916 10:22:32.461180   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.461244   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.461735   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.461753   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.461805   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.462195   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46479
	I0916 10:22:32.462331   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.462809   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.464034   12265 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0916 10:22:32.464150   12265 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0916 10:22:32.464278   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.464668   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.464696   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.465237   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.466010   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.465566   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42201
	I0916 10:22:32.466246   12265 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0916 10:22:32.466259   12265 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0916 10:22:32.466276   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.467014   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.467145   12265 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 10:22:32.467235   12265 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0916 10:22:32.467359   12265 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0916 10:22:32.467370   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0916 10:22:32.467385   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.467696   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.467711   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.468100   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.468152   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.468326   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.468710   12265 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0916 10:22:32.468725   12265 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0916 10:22:32.468742   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.468966   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39539
	I0916 10:22:32.469146   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.469463   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.469917   12265 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 10:22:32.469918   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.470000   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.470971   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43697
	I0916 10:22:32.471473   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.471695   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.472001   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.472015   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.472269   12265 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0916 10:22:32.472471   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.472523   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43687
	I0916 10:22:32.472664   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.472783   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.472993   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.473106   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.473134   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.473329   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.473377   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.473597   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.473743   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.473790   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.473851   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.474147   12265 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0916 10:22:32.474163   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0916 10:22:32.474178   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.474793   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.474941   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.474955   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.475234   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:32.475510   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.475619   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.475650   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.475665   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.475824   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.476100   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.476264   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.476604   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.476644   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.476828   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:32.476940   12265 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0916 10:22:32.477612   12265 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 10:22:32.478260   12265 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 10:22:32.478276   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0916 10:22:32.478291   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.478585   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.478604   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.478880   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.479035   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.479168   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.479299   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:32.479921   12265 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:22:32.479937   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 10:22:32.479951   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.480259   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.480742   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.481958   12265 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0916 10:22:32.482834   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43787
	I0916 10:22:32.482998   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.483118   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.483310   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.483473   12265 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0916 10:22:32.483494   12265 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0916 10:22:32.483512   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.483802   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.483828   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.483888   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.483903   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.483899   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.483930   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.484092   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.484159   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.484194   12265 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0916 10:22:32.484411   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.484581   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.484636   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.484681   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:32.484892   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.484958   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.485096   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.485218   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.485248   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.485262   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.485372   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:32.485494   12265 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0916 10:22:32.485505   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0916 10:22:32.485519   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.485781   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.486028   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.486181   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.486318   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:32.487186   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.487422   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.487675   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.487695   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.487742   12265 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 10:22:32.487752   12265 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 10:22:32.487764   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.487810   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.487995   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.488225   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.488378   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:32.489702   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.490168   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.490188   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.490394   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.490571   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.490713   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.490823   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:32.492068   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.492458   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.492479   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.492686   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.492906   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.492915   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39577
	I0916 10:22:32.493044   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.493239   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:32.493450   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.493933   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.493950   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.494562   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.494891   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.496932   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.498147   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46021
	I0916 10:22:32.498828   12265 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0916 10:22:32.499232   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.499608   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.499634   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.499936   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.500124   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.500215   12265 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0916 10:22:32.500241   12265 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0916 10:22:32.500262   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.501763   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.503323   12265 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0916 10:22:32.503738   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.504260   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.504287   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.504422   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.504578   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.504721   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.504800   12265 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0916 10:22:32.504813   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0916 10:22:32.504828   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.504844   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:32.507073   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35053
	I0916 10:22:32.507489   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.507971   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.507994   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.508014   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37447
	I0916 10:22:32.508383   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.508455   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44827
	I0916 10:22:32.508996   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.509012   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.509054   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.509082   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.509517   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.509552   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.509551   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.509573   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.509882   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.510086   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.510151   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.510169   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.510570   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.510576   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.510696   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.510739   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.510822   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.510947   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	W0916 10:22:32.511685   12265 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:43352->192.168.39.72:22: read: connection reset by peer
	I0916 10:22:32.511711   12265 retry.go:31] will retry after 323.390168ms: ssh: handshake failed: read tcp 192.168.39.1:43352->192.168.39.72:22: read: connection reset by peer
	I0916 10:22:32.513110   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.513548   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.515216   12265 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0916 10:22:32.516467   12265 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0916 10:22:32.517228   12265 out.go:177]   - Using image docker.io/busybox:stable
	I0916 10:22:32.518463   12265 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0916 10:22:32.519691   12265 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0916 10:22:32.521193   12265 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0916 10:22:32.521287   12265 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 10:22:32.521309   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0916 10:22:32.521330   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.523957   12265 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0916 10:22:32.524563   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.524915   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.524939   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.525078   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.525271   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.525408   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.525548   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	W0916 10:22:32.526174   12265 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:43362->192.168.39.72:22: read: connection reset by peer
	I0916 10:22:32.526199   12265 retry.go:31] will retry after 208.869548ms: ssh: handshake failed: read tcp 192.168.39.1:43362->192.168.39.72:22: read: connection reset by peer
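
The two "dial failure (will retry)" entries above come from minikube's SSH retry path (retry.go:31): the handshake fails with "connection reset by peer" and the operation is re-attempted after a short, growing delay. As an illustrative sketch only — not minikube's actual retry.go — a Go helper that retries a flaky operation with exponential backoff and jitter could look like the following (attempt counts, delays, and the error text are placeholders):

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithBackoff retries op up to attempts times, sleeping base*2^i plus
    // up to 100% jitter between tries, and returns the last error on failure.
    func retryWithBackoff(attempts int, base time.Duration, op func() error) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = op(); err == nil {
    			return nil
    		}
    		d := base << i
    		d += time.Duration(rand.Int63n(int64(d))) // jitter
    		fmt.Printf("will retry after %v: %v\n", d, err)
    		time.Sleep(d)
    	}
    	return err
    }

    func main() {
    	calls := 0
    	err := retryWithBackoff(5, 200*time.Millisecond, func() error {
    		calls++
    		if calls < 3 {
    			// Simulated transient failure, mirroring the log above.
    			return errors.New("ssh: handshake failed: connection reset by peer")
    		}
    		return nil
    	})
    	fmt.Println("final result:", err)
    }
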
	I0916 10:22:32.526327   12265 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0916 10:22:32.527568   12265 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0916 10:22:32.528811   12265 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0916 10:22:32.530140   12265 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0916 10:22:32.530154   12265 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0916 10:22:32.530169   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.533281   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.533666   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.533688   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.533886   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.534072   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.534227   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.534367   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:32.700911   12265 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:22:32.700984   12265 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 10:22:32.785482   12265 node_ready.go:35] waiting up to 6m0s for node "addons-001438" to be "Ready" ...
	I0916 10:22:32.822842   12265 node_ready.go:49] node "addons-001438" has status "Ready":"True"
	I0916 10:22:32.822881   12265 node_ready.go:38] duration metric: took 37.361645ms for node "addons-001438" to be "Ready" ...
	I0916 10:22:32.822895   12265 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
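
The node_ready.go and pod_ready.go lines above poll the API server for up to 6m0s until the node and the system-critical pods report "Ready". A hedged client-go sketch of the same idea (not minikube's implementation) is shown below; the node name comes from the log, and the kubeconfig path defaults to ~/.kube/config, which is an assumption:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls the named node until its Ready condition is True or
    // the timeout expires, ignoring transient API errors between polls.
    func waitNodeReady(client kubernetes.Interface, name string, timeout time.Duration) error {
    	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
    		node, err := client.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
    		if err != nil {
    			return false, nil
    		}
    		for _, c := range node.Status.Conditions {
    			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    				return true, nil
    			}
    		}
    		return false, nil
    	})
    }

    func main() {
    	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(config)
    	if err := waitNodeReady(client, "addons-001438", 6*time.Minute); err != nil {
    		panic(err)
    	}
    	fmt.Println(`node "addons-001438" is Ready`)
    }
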
	I0916 10:22:32.861506   12265 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0916 10:22:32.861543   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0916 10:22:32.862634   12265 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-001438" in "kube-system" namespace to be "Ready" ...
	I0916 10:22:32.929832   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 10:22:32.943014   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:22:32.952437   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 10:22:32.991347   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0916 10:22:32.995067   12265 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0916 10:22:32.995096   12265 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0916 10:22:33.036627   12265 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0916 10:22:33.036657   12265 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0916 10:22:33.036890   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0916 10:22:33.060821   12265 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0916 10:22:33.060843   12265 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0916 10:22:33.069120   12265 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0916 10:22:33.069156   12265 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0916 10:22:33.070018   12265 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0916 10:22:33.070038   12265 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0916 10:22:33.073512   12265 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0916 10:22:33.073535   12265 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0916 10:22:33.137058   12265 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0916 10:22:33.137088   12265 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0916 10:22:33.226855   12265 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0916 10:22:33.226884   12265 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0916 10:22:33.270492   12265 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0916 10:22:33.270513   12265 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0916 10:22:33.316169   12265 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0916 10:22:33.316195   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0916 10:22:33.316355   12265 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0916 10:22:33.316373   12265 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0916 10:22:33.316509   12265 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0916 10:22:33.316522   12265 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0916 10:22:33.327110   12265 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0916 10:22:33.327126   12265 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0916 10:22:33.354597   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0916 10:22:33.420390   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 10:22:33.435680   12265 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0916 10:22:33.435717   12265 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0916 10:22:33.439954   12265 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0916 10:22:33.439978   12265 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0916 10:22:33.444981   12265 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 10:22:33.445002   12265 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0916 10:22:33.522524   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0916 10:22:33.536060   12265 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0916 10:22:33.536089   12265 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0916 10:22:33.569830   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 10:22:33.590335   12265 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0916 10:22:33.590366   12265 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0916 10:22:33.601121   12265 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0916 10:22:33.601154   12265 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0916 10:22:33.623197   12265 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:22:33.623219   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0916 10:22:33.629904   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0916 10:22:33.693404   12265 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0916 10:22:33.693424   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0916 10:22:33.747802   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:22:33.761431   12265 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0916 10:22:33.761461   12265 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0916 10:22:33.774811   12265 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0916 10:22:33.774845   12265 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0916 10:22:33.825893   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0916 10:22:33.895859   12265 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0916 10:22:33.895893   12265 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0916 10:22:34.018321   12265 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0916 10:22:34.018345   12265 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0916 10:22:34.260751   12265 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0916 10:22:34.260776   12265 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0916 10:22:34.288705   12265 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0916 10:22:34.288733   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0916 10:22:34.575904   12265 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0916 10:22:34.575932   12265 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0916 10:22:34.578707   12265 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0916 10:22:34.578728   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0916 10:22:34.872174   12265 pod_ready.go:103] pod "etcd-addons-001438" in "kube-system" namespace has status "Ready":"False"
	I0916 10:22:35.002110   12265 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0916 10:22:35.002133   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0916 10:22:35.053333   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0916 10:22:35.173148   12265 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.47211504s)
	I0916 10:22:35.173178   12265 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
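
The 2.47s command completed above rewrites the coredns Corefile with a sed pipeline over SSH, inserting a hosts {} stanza that maps host.minikube.internal to 192.168.39.1 before the forward plugin. A hedged client-go equivalent (a sketch, not what minikube actually runs) follows; the stanza text and IP are taken from the command in the log, while the kubeconfig path and the exact indentation of the "forward" line are assumptions:

    package main

    import (
    	"context"
    	"strings"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(config)
    	ctx := context.TODO()

    	cm, err := client.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	// Same stanza the sed pipeline in the log inserts before the forward plugin.
    	hosts := "        hosts {\n" +
    		"           192.168.39.1 host.minikube.internal\n" +
    		"           fallthrough\n" +
    		"        }\n"
    	corefile := cm.Data["Corefile"]
    	if !strings.Contains(corefile, "host.minikube.internal") {
    		// Assumes the forward line is indented with eight spaces, as the
    		// sed pattern in the log does.
    		corefile = strings.Replace(corefile, "        forward . /etc/resolv.conf",
    			hosts+"        forward . /etc/resolv.conf", 1)
    		cm.Data["Corefile"] = corefile
    		if _, err := client.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
    			panic(err)
    		}
    	}
    }
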
	I0916 10:22:35.173148   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.243289168s)
	I0916 10:22:35.173338   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:35.173355   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:35.173706   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:35.173723   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:35.173737   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:35.173751   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:35.173762   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:35.174037   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:35.174053   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:35.219712   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:35.219745   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:35.220033   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:35.220084   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:35.326225   12265 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0916 10:22:35.326245   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0916 10:22:35.667079   12265 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 10:22:35.667102   12265 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0916 10:22:35.677467   12265 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-001438" context rescaled to 1 replicas
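
kapi.go:214 above reports that the coredns deployment was rescaled to a single replica. A minimal sketch of the same operation via the Deployment scale subresource in client-go (not minikube's kapi.go; the default kubeconfig path is an assumption) could be:

    package main

    import (
    	"context"

    	autoscalingv1 "k8s.io/api/autoscaling/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(config)

    	// Write the desired replica count through the scale subresource.
    	scale := &autoscalingv1.Scale{
    		ObjectMeta: metav1.ObjectMeta{Name: "coredns", Namespace: "kube-system"},
    		Spec:       autoscalingv1.ScaleSpec{Replicas: 1},
    	}
    	if _, err := client.AppsV1().Deployments("kube-system").
    		UpdateScale(context.TODO(), "coredns", scale, metav1.UpdateOptions{}); err != nil {
    		panic(err)
    	}
    }
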
	I0916 10:22:36.005922   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 10:22:36.880549   12265 pod_ready.go:103] pod "etcd-addons-001438" in "kube-system" namespace has status "Ready":"False"
	I0916 10:22:37.248962   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.296492058s)
	I0916 10:22:37.249022   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:37.249036   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:37.249050   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.306004364s)
	I0916 10:22:37.249050   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.257675255s)
	I0916 10:22:37.249138   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:37.249160   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:37.249084   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:37.249221   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:37.249330   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:37.249355   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:37.249374   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:37.249434   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:37.249460   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:37.249476   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:37.249440   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:37.249499   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:37.249529   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:37.249541   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:37.249485   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:37.249593   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:37.249655   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:37.249676   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:37.251028   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:37.251216   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:37.251214   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:37.251232   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:37.251278   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:37.251288   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:38.978538   12265 pod_ready.go:93] pod "etcd-addons-001438" in "kube-system" namespace has status "Ready":"True"
	I0916 10:22:38.978561   12265 pod_ready.go:82] duration metric: took 6.115904528s for pod "etcd-addons-001438" in "kube-system" namespace to be "Ready" ...
	I0916 10:22:38.978572   12265 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-001438" in "kube-system" namespace to be "Ready" ...
	I0916 10:22:39.179661   12265 pod_ready.go:93] pod "kube-apiserver-addons-001438" in "kube-system" namespace has status "Ready":"True"
	I0916 10:22:39.179691   12265 pod_ready.go:82] duration metric: took 201.112317ms for pod "kube-apiserver-addons-001438" in "kube-system" namespace to be "Ready" ...
	I0916 10:22:39.179705   12265 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-001438" in "kube-system" namespace to be "Ready" ...
	I0916 10:22:39.377607   12265 pod_ready.go:93] pod "kube-controller-manager-addons-001438" in "kube-system" namespace has status "Ready":"True"
	I0916 10:22:39.377640   12265 pod_ready.go:82] duration metric: took 197.926831ms for pod "kube-controller-manager-addons-001438" in "kube-system" namespace to be "Ready" ...
	I0916 10:22:39.377656   12265 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-66flj" in "kube-system" namespace to be "Ready" ...
	I0916 10:22:39.509747   12265 pod_ready.go:93] pod "kube-proxy-66flj" in "kube-system" namespace has status "Ready":"True"
	I0916 10:22:39.509775   12265 pod_ready.go:82] duration metric: took 132.110984ms for pod "kube-proxy-66flj" in "kube-system" namespace to be "Ready" ...
	I0916 10:22:39.509789   12265 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-001438" in "kube-system" namespace to be "Ready" ...
	I0916 10:22:39.633441   12265 pod_ready.go:93] pod "kube-scheduler-addons-001438" in "kube-system" namespace has status "Ready":"True"
	I0916 10:22:39.633475   12265 pod_ready.go:82] duration metric: took 123.676997ms for pod "kube-scheduler-addons-001438" in "kube-system" namespace to be "Ready" ...
	I0916 10:22:39.633487   12265 pod_ready.go:39] duration metric: took 6.810577473s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:22:39.633508   12265 api_server.go:52] waiting for apiserver process to appear ...
	I0916 10:22:39.633572   12265 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:22:39.633966   12265 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0916 10:22:39.634003   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:39.637511   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:39.638022   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:39.638050   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:39.638265   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:39.638449   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:39.638594   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:39.638741   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:40.248183   12265 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0916 10:22:40.342621   12265 addons.go:234] Setting addon gcp-auth=true in "addons-001438"
	I0916 10:22:40.342682   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:40.343054   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:40.343105   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:40.358807   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35001
	I0916 10:22:40.359276   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:40.359793   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:40.359818   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:40.360152   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:40.360750   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:40.360794   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:40.375531   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39519
	I0916 10:22:40.375999   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:40.376410   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:40.376431   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:40.376712   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:40.376880   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:40.378466   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:40.378706   12265 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0916 10:22:40.378736   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:40.381488   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:40.381978   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:40.381997   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:40.382162   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:40.382374   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:40.382527   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:40.382728   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:41.185716   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.148787276s)
	I0916 10:22:41.185775   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.185787   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.185792   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.831162948s)
	I0916 10:22:41.185821   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.185842   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.185899   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.76548291s)
	I0916 10:22:41.185927   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.185929   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.663383888s)
	I0916 10:22:41.185940   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.185948   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.185957   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.186031   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.616165984s)
	I0916 10:22:41.186072   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.186084   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.186162   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.55623363s)
	I0916 10:22:41.186179   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.186188   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.186223   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:41.186233   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.186246   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.186249   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.186259   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.186272   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.186279   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.186259   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.186321   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.438489786s)
	W0916 10:22:41.186349   12265 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0916 10:22:41.186370   12265 retry.go:31] will retry after 282.502814ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
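
The failure above is a CRD establishment race: the VolumeSnapshot CRDs were created in the same apply, but the VolumeSnapshotClass object is rejected with "no matches for kind ... ensure CRDs are installed first" before the API server has registered them. In this run minikube simply retries 282ms later (and, per the 10:22:41.469 line further down, re-applies with --force). A hedged alternative, not the minikube code path, is to block until the snapshot CRDs report the Established condition before applying anything that uses them; the sketch below assumes kubectl is on PATH and the kubeconfig targets this cluster, and takes the CRD names from the "created" lines in the log's stdout:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // kubectl runs a kubectl subcommand and echoes its combined output.
    func kubectl(args ...string) error {
    	cmd := exec.Command("kubectl", args...)
    	out, err := cmd.CombinedOutput()
    	fmt.Printf("$ kubectl %v\n%s", args, out)
    	return err
    }

    func main() {
    	crds := []string{
    		"crd/volumesnapshotclasses.snapshot.storage.k8s.io",
    		"crd/volumesnapshotcontents.snapshot.storage.k8s.io",
    		"crd/volumesnapshots.snapshot.storage.k8s.io",
    	}
    	for _, crd := range crds {
    		// Block until the API server marks the CRD as Established.
    		if err := kubectl("wait", "--for=condition=established", "--timeout=60s", crd); err != nil {
    			panic(err)
    		}
    	}
    	fmt.Println("snapshot CRDs established; VolumeSnapshotClass manifests can now be applied")
    }
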
	I0916 10:22:41.186323   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.186452   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.360528333s)
	I0916 10:22:41.186474   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.186483   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.186530   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:41.186552   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:41.186580   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:41.186592   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.133220852s)
	I0916 10:22:41.186602   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.186608   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.186609   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.186627   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.186684   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.186691   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.186698   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.186704   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.186797   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:41.186819   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.186826   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.186833   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.186851   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:41.186871   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.186884   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.186893   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.186901   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.186907   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.186936   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.186943   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.186990   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.186999   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.187006   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.187013   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.187860   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:41.187892   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.187899   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.187906   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.187912   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.188173   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:41.188191   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.188200   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.188204   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.188209   12265 addons.go:475] Verifying addon metrics-server=true in "addons-001438"
	I0916 10:22:41.188211   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.188241   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.188250   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.188259   12265 addons.go:475] Verifying addon ingress=true in "addons-001438"
	I0916 10:22:41.190004   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:41.190036   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.190042   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.190099   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:41.190137   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:41.190141   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.190152   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.190155   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.190159   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.190162   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.190167   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.190170   12265 addons.go:475] Verifying addon registry=true in "addons-001438"
	I0916 10:22:41.190534   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:41.190570   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.190579   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.191944   12265 out.go:177] * Verifying registry addon...
	I0916 10:22:41.191953   12265 out.go:177] * Verifying ingress addon...
	I0916 10:22:41.192858   12265 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-001438 service yakd-dashboard -n yakd-dashboard
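	A quick way to confirm the dashboard Pod is actually ready before running that command is a kubectl wait against the yakd-dashboard namespace; this is only a sketch, assuming the namespace shown in the message above and an arbitrary 5-minute timeout:

		kubectl --context addons-001438 -n yakd-dashboard wait --for=condition=Ready pod --all --timeout=5m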
	
	I0916 10:22:41.193752   12265 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0916 10:22:41.193752   12265 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0916 10:22:41.245022   12265 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0916 10:22:41.245042   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:41.245048   12265 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0916 10:22:41.245062   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
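	The readiness polling that kapi.go performs against these label selectors can be approximated from the command line; a minimal sketch, assuming the selectors and namespaces logged above and an arbitrary 6-minute timeout:

		kubectl --context addons-001438 -n kube-system wait --for=condition=Ready pod -l kubernetes.io/minikube-addons=registry --timeout=6m
		kubectl --context addons-001438 -n ingress-nginx wait --for=condition=Ready pod -l app.kubernetes.io/name=ingress-nginx --timeout=6m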
	I0916 10:22:41.270906   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.270924   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.271190   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.271210   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.469044   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:22:41.699366   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:41.699576   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:42.200823   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:42.201220   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:42.707853   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:42.708238   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:43.062276   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.056308906s)
	I0916 10:22:43.062328   12265 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.428733709s)
	I0916 10:22:43.062359   12265 api_server.go:72] duration metric: took 10.72580389s to wait for apiserver process to appear ...
	I0916 10:22:43.062372   12265 api_server.go:88] waiting for apiserver healthz status ...
	I0916 10:22:43.062397   12265 api_server.go:253] Checking apiserver healthz at https://192.168.39.72:8443/healthz ...
	I0916 10:22:43.062411   12265 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.683683571s)
	I0916 10:22:43.062334   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:43.062455   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:43.062799   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:43.062819   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:43.062830   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:43.062838   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:43.062846   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:43.063060   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:43.063085   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:43.063094   12265 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-001438"
	I0916 10:22:43.064955   12265 out.go:177] * Verifying csi-hostpath-driver addon...
	I0916 10:22:43.065015   12265 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 10:22:43.066605   12265 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0916 10:22:43.067509   12265 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0916 10:22:43.067847   12265 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0916 10:22:43.067859   12265 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0916 10:22:43.093271   12265 api_server.go:279] https://192.168.39.72:8443/healthz returned 200:
	ok
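	The same healthz probe can be reproduced through kubectl's raw API access (a sketch, assuming the addons-001438 kubeconfig context; hitting https://192.168.39.72:8443/healthz with curl directly would additionally require the cluster's client credentials):

		kubectl --context addons-001438 get --raw /healthz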
	I0916 10:22:43.093820   12265 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0916 10:22:43.093839   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:43.095011   12265 api_server.go:141] control plane version: v1.31.1
	I0916 10:22:43.095033   12265 api_server.go:131] duration metric: took 32.653755ms to wait for apiserver health ...
	I0916 10:22:43.095043   12265 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 10:22:43.123828   12265 system_pods.go:59] 19 kube-system pods found
	I0916 10:22:43.123858   12265 system_pods.go:61] "coredns-7c65d6cfc9-j5ndn" [207f35d6-991e-4f00-8881-a877648e3c38] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0916 10:22:43.123864   12265 system_pods.go:61] "coredns-7c65d6cfc9-pzm59" [f910982f-9f91-4da6-ba1d-d7eb1a992baa] Running
	I0916 10:22:43.123871   12265 system_pods.go:61] "csi-hostpath-attacher-0" [15e8a432-87ee-461f-96ce-576b2587d960] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0916 10:22:43.123876   12265 system_pods.go:61] "csi-hostpath-resizer-0" [db26d555-4e0f-4738-bd80-a27dc57d7534] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0916 10:22:43.123883   12265 system_pods.go:61] "csi-hostpathplugin-xgk62" [dd216434-c2ed-4884-92ea-f80bec8e2fcc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0916 10:22:43.123886   12265 system_pods.go:61] "etcd-addons-001438" [5c7e7021-4329-43f8-90cc-196afcb3b9f5] Running
	I0916 10:22:43.123903   12265 system_pods.go:61] "kube-apiserver-addons-001438" [b8c3f368-41ad-4840-aa92-014d25030925] Running
	I0916 10:22:43.123906   12265 system_pods.go:61] "kube-controller-manager-addons-001438" [9606f8aa-be05-4d1e-b5c9-9e625663d5de] Running
	I0916 10:22:43.123913   12265 system_pods.go:61] "kube-ingress-dns-minikube" [10ccbaa1-333f-4586-a1d5-dc73421e2bd1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0916 10:22:43.123917   12265 system_pods.go:61] "kube-proxy-66flj" [56e16daa-1626-4b83-a183-7b9ad90ea2d6] Running
	I0916 10:22:43.123923   12265 system_pods.go:61] "kube-scheduler-addons-001438" [a9909fcc-06cd-4e4e-b6be-d0a54a31df94] Running
	I0916 10:22:43.123928   12265 system_pods.go:61] "metrics-server-84c5f94fbc-9hj9f" [76382ab7-9b7a-4bd6-b19c-7a77ba051f1d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 10:22:43.123935   12265 system_pods.go:61] "nvidia-device-plugin-daemonset-j6n9b" [83260537-f74d-40a8-bcbc-db785a97aac8] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0916 10:22:43.123943   12265 system_pods.go:61] "registry-66c9cd494c-jq22w" [04e85c00-e6fb-4eee-96aa-273a4f6f273f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 10:22:43.123948   12265 system_pods.go:61] "registry-proxy-kk7lc" [2f0e1170-c654-4939-91ca-cd5b2bf6ae2a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 10:22:43.123955   12265 system_pods.go:61] "snapshot-controller-56fcc65765-8nq94" [7b65ff07-8e47-4c5a-883c-f6470e930f61] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 10:22:43.123960   12265 system_pods.go:61] "snapshot-controller-56fcc65765-pv2sr" [85f5bbdb-96af-4f7d-aef3-644db7166242] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 10:22:43.123967   12265 system_pods.go:61] "storage-provisioner" [c435c6db-b60d-4298-9687-bb885202e358] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0916 10:22:43.123972   12265 system_pods.go:61] "tiller-deploy-b48cc5f79-b76fb" [a96b112c-4171-4416-9e14-ac1f69fd033e] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0916 10:22:43.123980   12265 system_pods.go:74] duration metric: took 28.931422ms to wait for pod list to return data ...
	I0916 10:22:43.123988   12265 default_sa.go:34] waiting for default service account to be created ...
	I0916 10:22:43.137057   12265 default_sa.go:45] found service account: "default"
	I0916 10:22:43.137084   12265 default_sa.go:55] duration metric: took 13.088793ms for default service account to be created ...
	I0916 10:22:43.137095   12265 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 10:22:43.166020   12265 system_pods.go:86] 19 kube-system pods found
	I0916 10:22:43.166054   12265 system_pods.go:89] "coredns-7c65d6cfc9-j5ndn" [207f35d6-991e-4f00-8881-a877648e3c38] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0916 10:22:43.166063   12265 system_pods.go:89] "coredns-7c65d6cfc9-pzm59" [f910982f-9f91-4da6-ba1d-d7eb1a992baa] Running
	I0916 10:22:43.166075   12265 system_pods.go:89] "csi-hostpath-attacher-0" [15e8a432-87ee-461f-96ce-576b2587d960] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0916 10:22:43.166088   12265 system_pods.go:89] "csi-hostpath-resizer-0" [db26d555-4e0f-4738-bd80-a27dc57d7534] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0916 10:22:43.166100   12265 system_pods.go:89] "csi-hostpathplugin-xgk62" [dd216434-c2ed-4884-92ea-f80bec8e2fcc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0916 10:22:43.166108   12265 system_pods.go:89] "etcd-addons-001438" [5c7e7021-4329-43f8-90cc-196afcb3b9f5] Running
	I0916 10:22:43.166118   12265 system_pods.go:89] "kube-apiserver-addons-001438" [b8c3f368-41ad-4840-aa92-014d25030925] Running
	I0916 10:22:43.166126   12265 system_pods.go:89] "kube-controller-manager-addons-001438" [9606f8aa-be05-4d1e-b5c9-9e625663d5de] Running
	I0916 10:22:43.166136   12265 system_pods.go:89] "kube-ingress-dns-minikube" [10ccbaa1-333f-4586-a1d5-dc73421e2bd1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0916 10:22:43.166145   12265 system_pods.go:89] "kube-proxy-66flj" [56e16daa-1626-4b83-a183-7b9ad90ea2d6] Running
	I0916 10:22:43.166154   12265 system_pods.go:89] "kube-scheduler-addons-001438" [a9909fcc-06cd-4e4e-b6be-d0a54a31df94] Running
	I0916 10:22:43.166164   12265 system_pods.go:89] "metrics-server-84c5f94fbc-9hj9f" [76382ab7-9b7a-4bd6-b19c-7a77ba051f1d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 10:22:43.166171   12265 system_pods.go:89] "nvidia-device-plugin-daemonset-j6n9b" [83260537-f74d-40a8-bcbc-db785a97aac8] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0916 10:22:43.166178   12265 system_pods.go:89] "registry-66c9cd494c-jq22w" [04e85c00-e6fb-4eee-96aa-273a4f6f273f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 10:22:43.166183   12265 system_pods.go:89] "registry-proxy-kk7lc" [2f0e1170-c654-4939-91ca-cd5b2bf6ae2a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 10:22:43.166199   12265 system_pods.go:89] "snapshot-controller-56fcc65765-8nq94" [7b65ff07-8e47-4c5a-883c-f6470e930f61] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 10:22:43.166207   12265 system_pods.go:89] "snapshot-controller-56fcc65765-pv2sr" [85f5bbdb-96af-4f7d-aef3-644db7166242] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 10:22:43.166217   12265 system_pods.go:89] "storage-provisioner" [c435c6db-b60d-4298-9687-bb885202e358] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0916 10:22:43.166224   12265 system_pods.go:89] "tiller-deploy-b48cc5f79-b76fb" [a96b112c-4171-4416-9e14-ac1f69fd033e] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0916 10:22:43.166231   12265 system_pods.go:126] duration metric: took 29.130167ms to wait for k8s-apps to be running ...
	I0916 10:22:43.166241   12265 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 10:22:43.166284   12265 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
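	The same kubelet liveness check can be run by hand over the profile's SSH session; a sketch, assuming the addons-001438 profile name used throughout this log:

		minikube -p addons-001438 ssh -- sudo systemctl is-active kubelet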
	I0916 10:22:43.202957   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:43.204822   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:43.205240   12265 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0916 10:22:43.205259   12265 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0916 10:22:43.339484   12265 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 10:22:43.339511   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0916 10:22:43.533725   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 10:22:43.574829   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:43.701096   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:43.702516   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:44.074326   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:44.199962   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:44.201086   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:44.420432   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.951340242s)
	I0916 10:22:44.420484   12265 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.25416987s)
	I0916 10:22:44.420496   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:44.420512   12265 system_svc.go:56] duration metric: took 1.254267923s WaitForService to wait for kubelet
	I0916 10:22:44.420530   12265 kubeadm.go:582] duration metric: took 12.083973387s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:22:44.420555   12265 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:22:44.420516   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:44.420960   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:44.420998   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:44.421011   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:44.421019   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:44.421041   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:44.421242   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:44.421289   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:44.421306   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:44.432407   12265 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0916 10:22:44.432433   12265 node_conditions.go:123] node cpu capacity is 2
	I0916 10:22:44.432443   12265 node_conditions.go:105] duration metric: took 11.883273ms to run NodePressure ...
	I0916 10:22:44.432454   12265 start.go:241] waiting for startup goroutines ...
	I0916 10:22:44.573423   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:44.701968   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:44.702167   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:45.087788   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:45.175284   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.64151941s)
	I0916 10:22:45.175340   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:45.175356   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:45.175638   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:45.175658   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:45.175667   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:45.175675   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:45.175907   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:45.175959   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:45.175966   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:45.176874   12265 addons.go:475] Verifying addon gcp-auth=true in "addons-001438"
	I0916 10:22:45.179151   12265 out.go:177] * Verifying gcp-auth addon...
	I0916 10:22:45.181042   12265 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0916 10:22:45.204765   12265 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0916 10:22:45.204788   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:45.240576   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:45.244884   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:45.572763   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:45.684678   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:45.699294   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:45.700332   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:46.071926   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:46.184345   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:46.198555   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:46.198584   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:46.572691   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:46.686213   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:46.698404   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:46.699290   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:47.073014   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:47.184892   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:47.199176   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:47.199412   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:47.573319   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:47.685117   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:47.698854   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:47.699042   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:48.080702   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:48.186042   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:48.198652   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:48.198985   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:48.572136   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:48.684922   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:48.698643   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:48.698805   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:49.072263   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:49.186126   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:49.198845   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:49.201291   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:49.571909   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:49.686134   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:49.699485   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:49.699837   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:50.072013   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:50.185475   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:50.198803   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:50.198988   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:50.572410   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:50.684716   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:50.698643   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:50.698842   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:51.072735   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:51.185327   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:51.198402   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:51.198563   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:51.574099   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:51.684301   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:51.698582   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:51.699135   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:52.073280   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:52.184410   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:52.197628   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:52.197951   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:52.573111   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:52.685463   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:52.698350   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:52.698445   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:53.073318   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:53.185032   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:53.198371   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:53.198982   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:53.572652   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:53.684593   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:53.698434   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:53.699099   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:54.071466   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:54.184978   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:54.199125   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:54.199475   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:54.571905   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:54.684904   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:54.699578   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:54.700868   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:55.072026   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:55.186696   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:55.199421   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:55.200454   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:55.811368   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:55.811883   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:55.811882   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:55.812044   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:56.073000   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:56.184284   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:56.197552   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:56.199279   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:56.571945   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:56.684725   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:56.698164   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:56.698871   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:57.078099   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:57.187093   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:57.198266   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:57.198788   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:57.572608   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:57.685182   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:57.698112   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:57.698451   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:58.072438   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:58.184226   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:58.197871   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:58.199176   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:58.573655   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:58.688012   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:58.698890   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:58.699498   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:59.072908   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:59.184255   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:59.197825   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:59.198094   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:59.572578   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:59.685886   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:59.699165   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:59.699539   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:00.072677   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:00.185334   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:00.198436   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:00.199279   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:00.572620   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:00.684676   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:00.698184   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:00.698937   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:01.368315   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:01.368647   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:01.368662   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:01.369057   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:01.577610   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:01.685792   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:01.699073   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:01.700679   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:02.073297   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:02.184780   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:02.198423   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:02.198632   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:02.573860   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:02.688317   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:02.699137   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:02.699189   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:03.073268   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:03.185286   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:03.197706   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:03.199446   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:03.575016   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:03.688681   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:03.697852   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:03.699284   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:04.072561   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:04.184550   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:04.198183   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:04.198692   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:04.573058   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:04.684410   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:04.698448   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:04.699101   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:05.073082   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:05.184610   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:05.198422   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:05.199510   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:05.572901   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:05.685013   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:05.698419   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:05.699052   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:06.072680   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:06.184899   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:06.199400   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:06.199960   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:06.573550   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:06.685328   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:06.698176   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:06.698429   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:07.386744   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:07.389015   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:07.389529   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:07.391740   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:07.572440   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:07.685517   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:07.699276   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:07.699495   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:08.073598   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:08.185305   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:08.198307   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:08.198701   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:08.572936   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:08.685042   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:08.697898   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:08.699045   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:09.073524   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:09.185170   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:09.197444   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:09.198282   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:09.571947   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:09.685269   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:09.700263   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:09.700289   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:10.072367   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:10.184140   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:10.198279   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:10.198501   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:10.571995   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:10.684443   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:10.698621   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:10.699212   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:11.072647   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:11.184997   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:11.198336   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:11.199743   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:11.572138   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:11.684642   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:11.697735   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:11.698012   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:12.072087   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:12.184730   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:12.198825   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:12.199117   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:12.574471   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:12.685221   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:12.697610   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:12.697875   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:13.074276   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:13.184610   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:13.200283   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:13.200511   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:13.572643   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:13.687229   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:13.700375   12265 kapi.go:107] duration metric: took 32.506622173s to wait for kubernetes.io/minikube-addons=registry ...
	I0916 10:23:13.700476   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:14.073345   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:14.185359   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:14.197920   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:14.572573   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:14.714386   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:14.714848   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:15.072480   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:15.184006   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:15.198907   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:15.571536   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:15.686990   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:15.698314   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:16.072850   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:16.397705   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:16.398059   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:16.571699   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:16.687893   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:16.701822   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:17.073078   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:17.185433   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:17.202339   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:17.572915   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:17.684909   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:17.698215   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:18.071875   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:18.185548   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:18.198104   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:18.572180   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:18.684990   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:18.698912   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:19.072105   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:19.184341   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:19.197977   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:19.571740   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:19.685205   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:19.698214   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:20.071811   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:20.184927   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:20.198225   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:20.572184   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:20.684471   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:20.697550   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:21.072526   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:21.185439   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:21.198086   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:21.573843   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:21.684530   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:21.699027   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:22.071583   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:22.185751   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:22.201330   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:22.574078   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:22.688728   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:22.700516   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:23.072848   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:23.184719   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:23.197893   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:23.571975   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:23.684741   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:23.697845   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:24.071885   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:24.199755   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:24.209742   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:24.572903   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:24.684095   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:24.697255   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:25.072405   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:25.185096   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:25.197451   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:25.572250   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:25.685603   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:25.699421   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:26.072277   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:26.184610   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:26.197948   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:26.572954   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:26.684305   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:26.698018   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:27.072121   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:27.186632   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:27.198260   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:27.571710   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:27.685260   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:27.697569   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:28.072712   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:28.185404   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:28.197839   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:28.572506   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:28.685719   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:28.698390   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:29.073440   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:29.185211   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:29.198135   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:29.572871   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:29.684795   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:29.698442   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:30.074307   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:30.184391   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:30.198195   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:30.571684   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:30.686595   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:30.697829   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:31.072882   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:31.184355   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:31.197913   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:31.572796   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:31.685340   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:31.697838   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:32.072358   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:32.185072   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:32.198841   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:32.572260   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:32.685619   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:32.697923   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:33.072329   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:33.184923   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:33.198461   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:33.572531   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:33.684886   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:33.698221   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:34.071922   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:34.184896   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:34.198347   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:34.572508   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:34.685674   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:34.698172   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:35.072040   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:35.184401   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:35.198192   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:35.571685   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:35.684934   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:35.699442   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:36.072917   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:36.184575   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:36.197989   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:36.572782   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:36.685224   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:36.697515   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:37.073347   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:37.184633   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:37.198109   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:37.572239   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:37.684842   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:37.698412   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:38.072639   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:38.184377   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:38.197723   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:38.572964   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:38.684944   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:38.698216   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:39.071865   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:39.184322   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:39.197583   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:39.572728   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:39.685221   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:39.697663   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:40.073346   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:40.184763   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:40.198338   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:40.572748   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:40.688546   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:40.698337   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:41.072528   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:41.184742   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:41.197991   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:41.572832   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:41.685275   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:41.697957   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:42.072948   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:42.185237   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:42.198222   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:42.572150   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:42.685770   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:42.698107   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:43.072508   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:43.184255   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:43.198122   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:43.571791   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:43.685476   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:43.698021   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:44.072455   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:44.184970   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:44.198450   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:44.572653   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:44.685519   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:44.698088   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:45.073394   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:45.184852   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:45.198932   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:45.572905   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:45.685024   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:45.699000   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:46.072804   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:46.185568   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:46.198040   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:46.571961   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:46.684879   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:46.698104   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:47.071779   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:47.184794   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:47.198431   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:47.572786   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:47.685048   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:47.701841   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:48.072550   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:48.184915   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:48.198725   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:48.572850   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:48.684405   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:48.697953   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:49.075719   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:49.185584   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:49.198034   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:49.572642   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:49.685074   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:49.697421   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:50.072216   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:50.184736   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:50.198614   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:50.572675   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:50.685508   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:50.697632   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:51.072878   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:51.185267   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:51.197508   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:51.572653   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:51.684680   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:51.698038   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:52.072225   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:52.184256   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:52.197802   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:52.572573   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:52.685760   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:52.699050   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:53.072698   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:53.185139   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:53.197417   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:53.572526   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:53.684976   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:53.698186   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:54.071987   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:54.184373   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:54.197898   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:54.573326   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:54.685154   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:54.699596   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:55.071975   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:55.184301   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:55.197532   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:55.573068   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:55.684535   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:55.698262   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:56.071830   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:56.185558   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:56.198149   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:56.571905   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:56.684135   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:56.697614   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:57.109030   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:57.216004   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:57.216775   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:57.572732   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:57.684811   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:57.697899   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:58.071691   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:58.184970   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:58.198291   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:58.572185   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:58.685478   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:58.698240   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:59.072727   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:59.185578   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:59.207485   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:59.572098   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:59.684402   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:59.698565   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:00.072447   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:00.192764   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:00.206954   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:00.573224   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:00.685091   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:00.697997   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:01.071906   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:01.184428   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:01.197550   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:01.572498   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:01.685525   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:01.702647   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:02.072504   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:02.185219   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:02.197512   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:02.573858   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:02.685938   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:02.699556   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:03.080160   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:03.188056   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:03.197615   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:03.575213   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:03.685187   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:03.697887   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:04.072585   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:04.185321   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:04.197777   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:04.577876   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:04.685259   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:04.698763   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:05.073356   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:05.184332   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:05.197676   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:05.574632   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:05.705119   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:05.705797   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:06.073702   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:06.190460   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:06.199492   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:06.573521   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:06.685468   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:06.697671   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:07.074427   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:07.211989   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:07.214167   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:07.573479   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:07.684919   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:07.698441   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:08.072769   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:08.184827   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:08.198132   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:08.573401   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:08.685277   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:08.698457   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:09.072421   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:09.184959   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:09.198365   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:09.572446   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:09.685036   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:09.697443   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:10.072489   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:10.185143   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:10.197711   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:10.572704   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:10.685206   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:10.697839   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:11.073656   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:11.185083   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:11.197443   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:11.572739   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:11.685343   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:11.697853   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:12.072697   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:12.185630   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:12.197928   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:12.572344   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:12.684814   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:12.698225   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:13.073324   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:13.185254   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:13.198404   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:13.571987   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:13.684709   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:13.698073   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:14.072174   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:14.184688   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:14.198078   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:14.571798   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:14.685576   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:14.698188   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:15.072810   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:15.184683   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:15.198053   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:15.574408   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:15.684741   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:15.698415   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:16.072047   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:16.185423   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:16.198010   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:16.572968   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:16.684217   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:16.697876   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:17.073276   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:17.185372   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:17.197865   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:17.572327   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:17.684929   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:17.698146   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:18.073068   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:18.185261   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:18.197596   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:18.572959   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:18.684379   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:18.697450   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:19.072646   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:19.184810   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:19.198157   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:19.572098   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:19.684635   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:19.698108   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:20.073055   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:20.185325   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:20.197893   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:20.572951   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:20.684268   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:20.697542   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:21.073300   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:21.184458   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:21.198058   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:21.571882   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:21.684389   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:21.698491   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:22.072769   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:22.185150   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:22.198444   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:22.572557   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:22.686730   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:22.697987   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:23.072389   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:23.184902   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:23.198815   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:23.572090   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:23.684279   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:23.698304   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:24.072655   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:24.185118   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:24.197515   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:24.573029   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:24.684503   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:24.697942   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:25.073161   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:25.185394   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:25.197824   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:25.572789   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:25.685536   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:25.698429   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:26.072248   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:26.184713   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:26.198206   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:26.572681   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:26.685404   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:26.697732   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:27.073033   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:27.186532   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:27.197932   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:27.573166   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:27.684900   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:27.698494   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:28.072840   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:28.185112   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:28.199554   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:28.573533   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:28.685513   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:28.698631   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:29.073563   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:29.184668   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:29.198960   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:29.573373   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:29.684371   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:29.698226   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:30.072380   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:30.184889   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:30.198132   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:30.572427   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:30.685015   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:30.699053   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:31.073225   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:31.185241   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:31.198172   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:31.572019   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:31.685328   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:31.697511   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:32.072382   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:32.185154   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:32.198590   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:32.572333   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:32.688804   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:32.699195   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:33.072971   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:33.184395   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:33.197840   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:33.572457   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:33.684935   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:33.698247   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:34.072201   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:34.184817   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:34.198299   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:34.572603   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:34.684807   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:34.698764   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:35.079460   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:35.184783   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:35.198219   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:35.572155   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:35.684462   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:35.698249   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:36.071889   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:36.185035   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:36.198639   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:36.572607   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:36.684993   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:36.698317   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:37.073167   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:37.187630   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:37.197861   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:37.572959   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:37.684449   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:37.698084   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:38.072598   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:38.184553   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:38.198241   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:38.572543   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:38.685061   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:38.698066   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:39.072888   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:39.184279   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:39.198475   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:39.572908   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:39.684166   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:39.699214   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:40.071396   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:40.185054   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:40.197274   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:40.571831   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:40.683617   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:40.698304   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:41.073753   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:41.184818   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:41.198303   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:41.572754   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:41.685078   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:41.697801   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:42.074144   12265 kapi.go:107] duration metric: took 1m59.00663205s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0916 10:24:42.185287   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:42.197975   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:42.685826   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:42.698484   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:43.185521   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:43.197894   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:43.684695   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:43.698444   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:44.184270   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:44.198072   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:44.686127   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:44.697760   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:45.184583   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:45.197892   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:45.685284   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:45.698273   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:46.184284   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:46.197597   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:46.684852   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:46.698234   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:47.185674   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:47.197778   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:47.684803   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:47.698286   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:48.185195   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:48.197536   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:48.684936   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:48.698202   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:49.185940   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:49.198354   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:49.685628   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:49.698017   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:50.184172   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:50.197513   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:50.684563   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:50.699121   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:51.185458   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:51.197627   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:51.684548   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:51.697728   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:52.184587   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:52.198088   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:52.687284   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:52.697762   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:53.185441   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:53.197777   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:53.684856   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:53.698392   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:54.184966   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:54.198309   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:54.685059   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:54.697818   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:55.184799   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:55.199146   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:55.685287   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:55.697823   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:56.184982   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:56.198778   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:56.684629   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:56.698010   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:57.185306   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:57.197805   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:57.686354   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:57.697789   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:58.184048   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:58.198685   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:58.685283   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:58.697967   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:59.185357   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:59.198462   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:59.685857   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:59.698582   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:00.185027   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:00.199070   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:00.685248   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:00.697584   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:01.444242   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:01.542180   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:01.684941   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:01.698345   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:02.184494   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:02.199673   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:02.686844   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:02.701197   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:03.186108   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:03.200286   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:03.935418   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:03.936940   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:04.185837   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:04.198343   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:04.685229   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:04.697687   12265 kapi.go:107] duration metric: took 2m23.503933898s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0916 10:25:05.184162   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:05.686162   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:06.184784   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:06.685596   12265 kapi.go:107] duration metric: took 2m21.504550895s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0916 10:25:06.687290   12265 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-001438 cluster.
	I0916 10:25:06.688726   12265 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0916 10:25:06.689940   12265 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0916 10:25:06.691195   12265 out.go:177] * Enabled addons: default-storageclass, nvidia-device-plugin, ingress-dns, storage-provisioner, cloud-spanner, metrics-server, inspektor-gadget, helm-tiller, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0916 10:25:06.692654   12265 addons.go:510] duration metric: took 2m34.356008246s for enable addons: enabled=[default-storageclass nvidia-device-plugin ingress-dns storage-provisioner cloud-spanner metrics-server inspektor-gadget helm-tiller yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0916 10:25:06.692692   12265 start.go:246] waiting for cluster config update ...
	I0916 10:25:06.692714   12265 start.go:255] writing updated cluster config ...
	I0916 10:25:06.692960   12265 ssh_runner.go:195] Run: rm -f paused
	I0916 10:25:06.701459   12265 out.go:177] * Done! kubectl is now configured to use "addons-001438" cluster and "default" namespace by default
	E0916 10:25:06.702711   12265 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
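The gcp-auth messages in the log above describe how the addon behaves: its webhook mounts the GCP credential secret into every new pod in the addons-001438 cluster, and a pod opts out by carrying a label with the gcp-auth-skip-secret key. A minimal sketch of such a pod using client-go types is below; it assumes the webhook keys off the label name (the "true" value, pod name, and image are illustrative, not taken from this run):

	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)

	func main() {
		// A pod the gcp-auth webhook should leave alone: the
		// gcp-auth-skip-secret label asks the addon not to mount the
		// credential secret into it.
		pod := &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name:   "no-gcp-auth", // illustrative name
				Labels: map[string]string{"gcp-auth-skip-secret": "true"},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{
					Name:    "app",
					Image:   "busybox",
					Command: []string{"sleep", "3600"},
				}},
			},
		}
		fmt.Printf("%s labels=%v\n", pod.Name, pod.Labels)
	}

For pods created before the addon finished enabling, the log's suggestion corresponds to recreating them or rerunning the addon, e.g. minikube addons enable gcp-auth --refresh.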
	
	
	==> CRI-O <==
	Sep 16 10:31:10 addons-001438 crio[662]: time="2024-09-16 10:31:10.234023243Z" level=debug msg="Event: RENAME        \"/var/run/crio/exits/0024bbca27aaca342e5d8f82c4da128027c7976c1e62ed95ae9cf694c9f92bba.5MPBU2\"" file="server/server.go:805"
	Sep 16 10:31:10 addons-001438 crio[662]: time="2024-09-16 10:31:10.235280299Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d6413439-f748-47d4-95f5-06ddd60c2af1 name=/runtime.v1.RuntimeService/Version
	Sep 16 10:31:10 addons-001438 crio[662]: time="2024-09-16 10:31:10.235421545Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d6413439-f748-47d4-95f5-06ddd60c2af1 name=/runtime.v1.RuntimeService/Version
	Sep 16 10:31:10 addons-001438 crio[662]: time="2024-09-16 10:31:10.236924811Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=115f4e2e-3832-476a-ace5-9de970b7e58c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:31:10 addons-001438 crio[662]: time="2024-09-16 10:31:10.238532631Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482670238468289,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:458879,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=115f4e2e-3832-476a-ace5-9de970b7e58c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:31:10 addons-001438 crio[662]: time="2024-09-16 10:31:10.239131590Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0d0c2587-d5ba-40c1-ad17-d3f32b6b8cbc name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:31:10 addons-001438 crio[662]: time="2024-09-16 10:31:10.239201199Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0d0c2587-d5ba-40c1-ad17-d3f32b6b8cbc name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:31:10 addons-001438 crio[662]: time="2024-09-16 10:31:10.240266033Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c0c62d19fc341b10ebf89fe58a47a68881bae991a531961d93c1579a9a1948e7,PodSandboxId:81638f0641649ec0787ef703694bd258aa844caa3af58541d00d9715f3de0d35,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726482306176330528,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-jg5wz,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: dbfd65d1-83cc-4b88-b475-c822c7d77c41,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"con
tainerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d9f00ee52087036fff363066f2b9b7bd54f75155cc73bd306533e31b8e233b0,PodSandboxId:f0a70a6b5b4fa0e23e9c8885ae050483438fd567740a20c3b0d6ee2a4c749755,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1726482304086493301,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-jhd4w,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 59d22048-5f6a-4373-a1c6-05
c111450f4a,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a4ff4f2e6c350478e5755d91b20d919662c0ffa06fb39e2bd3bf8bb0f703a1d2,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1726482280953449774,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa45fa1d889cdca320c55e35a415c7211e36ba0837572343c71988deaa5cd3ca,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef0019
58d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1726482248718684378,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:112e37da6f1b0a97970a925ea58a6c98930b8e2763205c618d9b497089866b3f,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc4
16abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1726482247152202557,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcd9404de3e14cb23faa22a3b3e25edf2e86172ce56a3bd91b341e6072351d08,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256
:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1726482246122565989,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26165c7625a62b7701736de7efb40460b0b4376f8e96e3bb63d744179813ade0,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metad
ata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1726482244231428730,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35e24c1abefe708668b04ab175e1ce63dcba533760564485612dc5c532078e4c,PodSandboxId:bf02d50932f1
47d4c08135b62337a685daad58e7ee619fe769c71cf464997dc9,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1726482242446910373,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db26d555-4e0f-4738-bd80-a27dc57d7534,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5edaf3e2dd3d6ea0a0beb93da0237f66936603e62e6799eb9d40f4bfced4fdc,Pod
SandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1726482240462952497,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:b8ebd2f050729c02f7f87eeb785883cafeb3d9537560352ff7a58f50b5630b1c,PodSandboxId:f375334740e2f500ee463bae436969a7464e2ab3cd1aa52a70e3a8f54f2ea408,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1726482238605647796,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15e8a432-87ee-461f-96ce-576b2587d960,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d52d2269e100287dd000b3139d6f09e70acad7cbe081d5ba345ffadc774eb64,PodSandboxId:6fe91ac2288fee7b74e48dab9ebacc7e6f4a0864d1442788fa1bb356cd32d99c,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726482237161655062,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-rls9n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 239fb0bb-898b-4d40-a29f-b0f8f4f52620,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termi
nationGracePeriod: 30,},},&Container{Id:54c4347a1fc2bc58f92b11d3bd442850d2707411d3f5b974124a86f641439b9c,PodSandboxId:d66b1317412a725710e3a74d876e0614ef523665d5d35e69bed8cd20d3e4c267,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726482236992762987,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-dk6l8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 117d6d47-b187-4394-8213-f294b01f8f2d,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0bde3324c47da870cb9ea5d5e391100b947a5f5628087ca822f0b968d5d7d10,PodSandboxId:0eef20d1c681337d85c6cbf7dbb64029a954a6306f78cde4d651b31122cd7f87,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726482204191942200,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-pv2sr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85f5bbdb-96af-4f7d-aef3-644db7166242,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f786c20ceffe35012b49cdfbc150466ebf04c05bab8d9736cd3977b8a655c898,PodSandboxId:ec33782f42717a00b375787866eb14bf4c1a21698df7f4343ab08f7eaff4773a,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726482204058066688,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-8nq94,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b65ff07-8e47-4c5a-883c-f6470e930f61,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,i
o.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d997d75b48ee4660710828d3ee8c19e7e3e7c5b64f4d69cba04ce949ed38433e,PodSandboxId:173b48ab2ab7ffb543810df5a983063619ab4a5c57ad2582cb89cca04eabcc03,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1726482196486082416,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-rj67m,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 18c201dd-1709-499b-83f1-7f76075b6c19,},A
nnotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0024bbca27aaca342e5d8f82c4da128027c7976c1e62ed95ae9cf694c9f92bba,PodSandboxId:8bcb0a4a20a5a867b58a2f64346327b7ee4bbc21423c8ef47418372635518a90,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726482194268567519,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-9hj9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 76382ab7-9b7a-4bd6-b19c-7a77ba051f1d,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8193aad1beb5ba639149913d04736ec496c655637790c2e682d4920170661edc,PodSandboxId:f1a3772ce5f7d59b92df6e29b5a34b5ae5a2567c67f359cff00143e415177028,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1726482169760510821,Labels:map[string]string{io.kubernetes.contai
ner.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10ccbaa1-333f-4586-a1d5-dc73421e2bd1,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20d2f3360f32056e9d5320e2e27b14e89f34121e47fe3ef6dc68434fd12cbe4e,PodSandboxId:748d363148f6690bdd4347da94f06b750dcd9dfc4c9051ba1fd2b1168589dce8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTA
INER_RUNNING,CreatedAt:1726482159684718183,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c435c6db-b60d-4298-9687-bb885202e358,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63d270cbed8d9138d7dfb6c94fecf8a928065d08296ec4b210284e9c80e343ce,PodSandboxId:42b8586a7b29a8b8056b5615562d2ec92c1cdfbb2b00da19cfca2dc059f9128a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1
726482158120421871,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-j5ndn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 207f35d6-991e-4f00-8881-a877648e3c38,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60269ac0552c1882216cfc166f51b2cc05e0c33fccab3cf44f6f6ae889b5377c,PodSandboxId:2bf9dc368debd2d2b877480619135ca9093c981fce759082ec24f6592fb1de92,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaa
a5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726482154187294521,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-66flj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56e16daa-1626-4b83-a183-7b9ad90ea2d6,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aabe5cb48f97ae6d5205b827466f0ed18fcf88945331a69a0f69c36fdc69b84,PodSandboxId:f7aeaa11c7f4c7baee6c9befcf5929af3394ee8b7430c47ca172d58476ae75a5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[stri
ng]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726482142846546597,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-001438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be5914cb158890ae06c5bfa7a9a1647e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d34a4e3596c2106b54d8a84abfb811c307cb2971374422f5c532a60e0cde3a3,PodSandboxId:8a68216be6dee09413e6f31b3a7e5d5ca7545ffd899368560ace1f4086222cc9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedIma
ge:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726482142845031916,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-001438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6b0151eec9d570509290399a84fe5e0,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a4816dc33e761b9fdbc5948f09898492ce9a2dc128c281cf5085d61d7f1b237,PodSandboxId:ec134844260ab148c1de608f261069c5f8c5a6b0d909d7bbb7bff552dad3e112,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedIm
age:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726482142832730800,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-001438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72c370391c8ab866a62dac47d3033ef8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfff5b2d379857237c75eff7ae9c54b1eaa410285bebed27cc82511c761eff77,PodSandboxId:81f095a38dae111be7d7787f448562b27561a6d73c96775e60b52e2cb77e1d9f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandl
er:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726482142844185577,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-001438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2bdc5cbd50bfb60b2aad0cb9a55828e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0d0c2587-d5ba-40c1-ad17-d3f32b6b8cbc name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:31:10 addons-001438 crio[662]: time="2024-09-16 10:31:10.261950171Z" level=debug msg="Unmounted container 0024bbca27aaca342e5d8f82c4da128027c7976c1e62ed95ae9cf694c9f92bba" file="storage/runtime.go:495" id=c2950930-4ef9-4b3c-a94f-afd5c406746c name=/runtime.v1.RuntimeService/StopContainer
	Sep 16 10:31:10 addons-001438 crio[662]: time="2024-09-16 10:31:10.283588948Z" level=debug msg="Found exit code for 0024bbca27aaca342e5d8f82c4da128027c7976c1e62ed95ae9cf694c9f92bba: 0" file="oci/runtime_oci.go:1022" id=c2950930-4ef9-4b3c-a94f-afd5c406746c name=/runtime.v1.RuntimeService/StopContainer
	Sep 16 10:31:10 addons-001438 crio[662]: time="2024-09-16 10:31:10.292020595Z" level=debug msg="Found exit code for 0024bbca27aaca342e5d8f82c4da128027c7976c1e62ed95ae9cf694c9f92bba: 0" file="oci/runtime_oci.go:1022"
	Sep 16 10:31:10 addons-001438 crio[662]: time="2024-09-16 10:31:10.295994584Z" level=info msg="Stopped container 0024bbca27aaca342e5d8f82c4da128027c7976c1e62ed95ae9cf694c9f92bba: kube-system/metrics-server-84c5f94fbc-9hj9f/metrics-server" file="server/container_stop.go:29" id=c2950930-4ef9-4b3c-a94f-afd5c406746c name=/runtime.v1.RuntimeService/StopContainer
	Sep 16 10:31:10 addons-001438 crio[662]: time="2024-09-16 10:31:10.296158989Z" level=debug msg="Response: &StopContainerResponse{}" file="otel-collector/interceptors.go:74" id=c2950930-4ef9-4b3c-a94f-afd5c406746c name=/runtime.v1.RuntimeService/StopContainer
	Sep 16 10:31:10 addons-001438 crio[662]: time="2024-09-16 10:31:10.296089053Z" level=debug msg="Event: REMOVE        \"/var/run/crio/exits/0024bbca27aaca342e5d8f82c4da128027c7976c1e62ed95ae9cf694c9f92bba\"" file="server/server.go:805"
	Sep 16 10:31:10 addons-001438 crio[662]: time="2024-09-16 10:31:10.296990062Z" level=debug msg="Request: &StopPodSandboxRequest{PodSandboxId:8bcb0a4a20a5a867b58a2f64346327b7ee4bbc21423c8ef47418372635518a90,}" file="otel-collector/interceptors.go:62" id=1941218f-86fe-4063-b2b2-7c61fd8a77f1 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 16 10:31:10 addons-001438 crio[662]: time="2024-09-16 10:31:10.297046692Z" level=info msg="Stopping pod sandbox: 8bcb0a4a20a5a867b58a2f64346327b7ee4bbc21423c8ef47418372635518a90" file="server/sandbox_stop.go:18" id=1941218f-86fe-4063-b2b2-7c61fd8a77f1 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 16 10:31:10 addons-001438 crio[662]: time="2024-09-16 10:31:10.297403583Z" level=info msg="Got pod network &{Name:metrics-server-84c5f94fbc-9hj9f Namespace:kube-system ID:8bcb0a4a20a5a867b58a2f64346327b7ee4bbc21423c8ef47418372635518a90 UID:76382ab7-9b7a-4bd6-b19c-7a77ba051f1d NetNS:/var/run/netns/6cacb935-f7a7-4e44-8318-fe673ac6eec8 Networks:[{Name:bridge Ifname:eth0}] RuntimeConfig:map[bridge:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath:/kubepods/burstable/pod76382ab7-9b7a-4bd6-b19c-7a77ba051f1d PodAnnotations:0xc000e4c130}] Aliases:map[]}" file="ocicni/ocicni.go:795"
	Sep 16 10:31:10 addons-001438 crio[662]: time="2024-09-16 10:31:10.297650541Z" level=info msg="Deleting pod kube-system_metrics-server-84c5f94fbc-9hj9f from CNI network \"bridge\" (type=bridge)" file="ocicni/ocicni.go:667"
	Sep 16 10:31:10 addons-001438 crio[662]: time="2024-09-16 10:31:10.310615579Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ea291f55-0018-483e-b3b4-2fdf21893ec1 name=/runtime.v1.RuntimeService/Version
	Sep 16 10:31:10 addons-001438 crio[662]: time="2024-09-16 10:31:10.310700419Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ea291f55-0018-483e-b3b4-2fdf21893ec1 name=/runtime.v1.RuntimeService/Version
	Sep 16 10:31:10 addons-001438 crio[662]: time="2024-09-16 10:31:10.311837545Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e7b59240-a8b7-42c5-8416-ed7dbe721c98 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:31:10 addons-001438 crio[662]: time="2024-09-16 10:31:10.313069756Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482670313044955,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:458879,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e7b59240-a8b7-42c5-8416-ed7dbe721c98 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:31:10 addons-001438 crio[662]: time="2024-09-16 10:31:10.313665029Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b25885b6-b7bc-4ef2-a7c1-178a76fdf4a6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:31:10 addons-001438 crio[662]: time="2024-09-16 10:31:10.313722819Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b25885b6-b7bc-4ef2-a7c1-178a76fdf4a6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:31:10 addons-001438 crio[662]: time="2024-09-16 10:31:10.314182554Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c0c62d19fc341b10ebf89fe58a47a68881bae991a531961d93c1579a9a1948e7,PodSandboxId:81638f0641649ec0787ef703694bd258aa844caa3af58541d00d9715f3de0d35,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726482306176330528,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-jg5wz,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: dbfd65d1-83cc-4b88-b475-c822c7d77c41,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"con
tainerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d9f00ee52087036fff363066f2b9b7bd54f75155cc73bd306533e31b8e233b0,PodSandboxId:f0a70a6b5b4fa0e23e9c8885ae050483438fd567740a20c3b0d6ee2a4c749755,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1726482304086493301,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-jhd4w,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 59d22048-5f6a-4373-a1c6-05
c111450f4a,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a4ff4f2e6c350478e5755d91b20d919662c0ffa06fb39e2bd3bf8bb0f703a1d2,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1726482280953449774,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa45fa1d889cdca320c55e35a415c7211e36ba0837572343c71988deaa5cd3ca,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef0019
58d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1726482248718684378,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:112e37da6f1b0a97970a925ea58a6c98930b8e2763205c618d9b497089866b3f,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc4
16abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1726482247152202557,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcd9404de3e14cb23faa22a3b3e25edf2e86172ce56a3bd91b341e6072351d08,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256
:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1726482246122565989,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26165c7625a62b7701736de7efb40460b0b4376f8e96e3bb63d744179813ade0,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metad
ata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1726482244231428730,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35e24c1abefe708668b04ab175e1ce63dcba533760564485612dc5c532078e4c,PodSandboxId:bf02d50932f1
47d4c08135b62337a685daad58e7ee619fe769c71cf464997dc9,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1726482242446910373,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db26d555-4e0f-4738-bd80-a27dc57d7534,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5edaf3e2dd3d6ea0a0beb93da0237f66936603e62e6799eb9d40f4bfced4fdc,Pod
SandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1726482240462952497,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:b8ebd2f050729c02f7f87eeb785883cafeb3d9537560352ff7a58f50b5630b1c,PodSandboxId:f375334740e2f500ee463bae436969a7464e2ab3cd1aa52a70e3a8f54f2ea408,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1726482238605647796,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15e8a432-87ee-461f-96ce-576b2587d960,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d52d2269e100287dd000b3139d6f09e70acad7cbe081d5ba345ffadc774eb64,PodSandboxId:6fe91ac2288fee7b74e48dab9ebacc7e6f4a0864d1442788fa1bb356cd32d99c,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726482237161655062,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-rls9n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 239fb0bb-898b-4d40-a29f-b0f8f4f52620,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termi
nationGracePeriod: 30,},},&Container{Id:54c4347a1fc2bc58f92b11d3bd442850d2707411d3f5b974124a86f641439b9c,PodSandboxId:d66b1317412a725710e3a74d876e0614ef523665d5d35e69bed8cd20d3e4c267,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726482236992762987,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-dk6l8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 117d6d47-b187-4394-8213-f294b01f8f2d,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0bde3324c47da870cb9ea5d5e391100b947a5f5628087ca822f0b968d5d7d10,PodSandboxId:0eef20d1c681337d85c6cbf7dbb64029a954a6306f78cde4d651b31122cd7f87,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726482204191942200,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-pv2sr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85f5bbdb-96af-4f7d-aef3-644db7166242,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f786c20ceffe35012b49cdfbc150466ebf04c05bab8d9736cd3977b8a655c898,PodSandboxId:ec33782f42717a00b375787866eb14bf4c1a21698df7f4343ab08f7eaff4773a,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726482204058066688,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-8nq94,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b65ff07-8e47-4c5a-883c-f6470e930f61,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,i
o.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d997d75b48ee4660710828d3ee8c19e7e3e7c5b64f4d69cba04ce949ed38433e,PodSandboxId:173b48ab2ab7ffb543810df5a983063619ab4a5c57ad2582cb89cca04eabcc03,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1726482196486082416,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-rj67m,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 18c201dd-1709-499b-83f1-7f76075b6c19,},A
nnotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0024bbca27aaca342e5d8f82c4da128027c7976c1e62ed95ae9cf694c9f92bba,PodSandboxId:8bcb0a4a20a5a867b58a2f64346327b7ee4bbc21423c8ef47418372635518a90,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_EXITED,CreatedAt:1726482194268567519,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-9hj9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 76382ab7-9b7a-4bd6-b19c-7a77ba051f1d,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8193aad1beb5ba639149913d04736ec496c655637790c2e682d4920170661edc,PodSandboxId:f1a3772ce5f7d59b92df6e29b5a34b5ae5a2567c67f359cff00143e415177028,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1726482169760510821,Labels:map[string]string{io.kubernetes.contain
er.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10ccbaa1-333f-4586-a1d5-dc73421e2bd1,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20d2f3360f32056e9d5320e2e27b14e89f34121e47fe3ef6dc68434fd12cbe4e,PodSandboxId:748d363148f6690bdd4347da94f06b750dcd9dfc4c9051ba1fd2b1168589dce8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1726482159684718183,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c435c6db-b60d-4298-9687-bb885202e358,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63d270cbed8d9138d7dfb6c94fecf8a928065d08296ec4b210284e9c80e343ce,PodSandboxId:42b8586a7b29a8b8056b5615562d2ec92c1cdfbb2b00da19cfca2dc059f9128a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:17
26482158120421871,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-j5ndn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 207f35d6-991e-4f00-8881-a877648e3c38,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60269ac0552c1882216cfc166f51b2cc05e0c33fccab3cf44f6f6ae889b5377c,PodSandboxId:2bf9dc368debd2d2b877480619135ca9093c981fce759082ec24f6592fb1de92,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa
5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726482154187294521,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-66flj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56e16daa-1626-4b83-a183-7b9ad90ea2d6,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aabe5cb48f97ae6d5205b827466f0ed18fcf88945331a69a0f69c36fdc69b84,PodSandboxId:f7aeaa11c7f4c7baee6c9befcf5929af3394ee8b7430c47ca172d58476ae75a5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726482142846546597,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-001438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be5914cb158890ae06c5bfa7a9a1647e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d34a4e3596c2106b54d8a84abfb811c307cb2971374422f5c532a60e0cde3a3,PodSandboxId:8a68216be6dee09413e6f31b3a7e5d5ca7545ffd899368560ace1f4086222cc9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImag
e:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726482142845031916,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-001438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6b0151eec9d570509290399a84fe5e0,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a4816dc33e761b9fdbc5948f09898492ce9a2dc128c281cf5085d61d7f1b237,PodSandboxId:ec134844260ab148c1de608f261069c5f8c5a6b0d909d7bbb7bff552dad3e112,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedIma
ge:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726482142832730800,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-001438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72c370391c8ab866a62dac47d3033ef8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfff5b2d379857237c75eff7ae9c54b1eaa410285bebed27cc82511c761eff77,PodSandboxId:81f095a38dae111be7d7787f448562b27561a6d73c96775e60b52e2cb77e1d9f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandle
r:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726482142844185577,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-001438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2bdc5cbd50bfb60b2aad0cb9a55828e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b25885b6-b7bc-4ef2-a7c1-178a76fdf4a6 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	c0c62d19fc341       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                                 6 minutes ago       Running             gcp-auth                                 0                   81638f0641649       gcp-auth-89d5ffd79-jg5wz
	4d9f00ee52087       registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6                             6 minutes ago       Running             controller                               0                   f0a70a6b5b4fa       ingress-nginx-controller-bc57996ff-jhd4w
	a4ff4f2e6c350       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          6 minutes ago       Running             csi-snapshotter                          0                   2a34d4424d09e       csi-hostpathplugin-xgk62
	fa45fa1d889cd       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          7 minutes ago       Running             csi-provisioner                          0                   2a34d4424d09e       csi-hostpathplugin-xgk62
	112e37da6f1b0       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            7 minutes ago       Running             liveness-probe                           0                   2a34d4424d09e       csi-hostpathplugin-xgk62
	bcd9404de3e14       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           7 minutes ago       Running             hostpath                                 0                   2a34d4424d09e       csi-hostpathplugin-xgk62
	26165c7625a62       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                7 minutes ago       Running             node-driver-registrar                    0                   2a34d4424d09e       csi-hostpathplugin-xgk62
	35e24c1abefe7       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              7 minutes ago       Running             csi-resizer                              0                   bf02d50932f14       csi-hostpath-resizer-0
	a5edaf3e2dd3d       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   7 minutes ago       Running             csi-external-health-monitor-controller   0                   2a34d4424d09e       csi-hostpathplugin-xgk62
	b8ebd2f050729       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             7 minutes ago       Running             csi-attacher                             0                   f375334740e2f       csi-hostpath-attacher-0
	0d52d2269e100       ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242                                                                             7 minutes ago       Exited              patch                                    1                   6fe91ac2288fe       ingress-nginx-admission-patch-rls9n
	54c4347a1fc2b       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012                   7 minutes ago       Exited              create                                   0                   d66b1317412a7       ingress-nginx-admission-create-dk6l8
	f0bde3324c47d       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      7 minutes ago       Running             volume-snapshot-controller               0                   0eef20d1c6813       snapshot-controller-56fcc65765-pv2sr
	f786c20ceffe3       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      7 minutes ago       Running             volume-snapshot-controller               0                   ec33782f42717       snapshot-controller-56fcc65765-8nq94
	d997d75b48ee4       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             7 minutes ago       Running             local-path-provisioner                   0                   173b48ab2ab7f       local-path-provisioner-86d989889c-rj67m
	0024bbca27aac       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a                        7 minutes ago       Exited              metrics-server                           0                   8bcb0a4a20a5a       metrics-server-84c5f94fbc-9hj9f
	8193aad1beb5b       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab                             8 minutes ago       Running             minikube-ingress-dns                     0                   f1a3772ce5f7d       kube-ingress-dns-minikube
	20d2f3360f320       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             8 minutes ago       Running             storage-provisioner                      0                   748d363148f66       storage-provisioner
	63d270cbed8d9       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                                             8 minutes ago       Running             coredns                                  0                   42b8586a7b29a       coredns-7c65d6cfc9-j5ndn
	60269ac0552c1       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                                             8 minutes ago       Running             kube-proxy                               0                   2bf9dc368debd       kube-proxy-66flj
	1aabe5cb48f97       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                                             8 minutes ago       Running             etcd                                     0                   f7aeaa11c7f4c       etcd-addons-001438
	2d34a4e3596c2       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                                             8 minutes ago       Running             kube-controller-manager                  0                   8a68216be6dee       kube-controller-manager-addons-001438
	bfff5b2d37985       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                                             8 minutes ago       Running             kube-apiserver                           0                   81f095a38dae1       kube-apiserver-addons-001438
	5a4816dc33e76       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                                             8 minutes ago       Running             kube-scheduler                           0                   ec134844260ab       kube-scheduler-addons-001438
	
	
	==> coredns [63d270cbed8d9138d7dfb6c94fecf8a928065d08296ec4b210284e9c80e343ce] <==
	[INFO] 127.0.0.1:32820 - 49588 "HINFO IN 5683833228926934535.5808779734602365342. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.027869673s
	[INFO] 10.244.0.7:47242 - 15842 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000350783s
	[INFO] 10.244.0.7:47242 - 29412 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000155576s
	[INFO] 10.244.0.7:51495 - 23321 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000115255s
	[INFO] 10.244.0.7:51495 - 47135 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000085371s
	[INFO] 10.244.0.7:40689 - 10301 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000114089s
	[INFO] 10.244.0.7:40689 - 30779 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00011843s
	[INFO] 10.244.0.7:53526 - 19539 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000127604s
	[INFO] 10.244.0.7:53526 - 34381 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000109337s
	[INFO] 10.244.0.7:39182 - 43658 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000075802s
	[INFO] 10.244.0.7:39182 - 55433 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000031766s
	[INFO] 10.244.0.7:52628 - 35000 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000037386s
	[INFO] 10.244.0.7:52628 - 44218 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000027585s
	[INFO] 10.244.0.7:47656 - 61837 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000028204s
	[INFO] 10.244.0.7:47656 - 39571 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000027731s
	[INFO] 10.244.0.7:53964 - 36235 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000098663s
	[INFO] 10.244.0.7:53964 - 55690 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000045022s
	[INFO] 10.244.0.22:49146 - 11336 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000543634s
	[INFO] 10.244.0.22:44900 - 51750 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000125434s
	[INFO] 10.244.0.22:47266 - 27362 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000158517s
	[INFO] 10.244.0.22:53077 - 63050 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000068888s
	[INFO] 10.244.0.22:52796 - 34381 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000101059s
	[INFO] 10.244.0.22:52167 - 15594 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000126468s
	[INFO] 10.244.0.22:42107 - 54869 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.004149176s
	[INFO] 10.244.0.22:60865 - 20616 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.006078154s
	
	
	==> describe nodes <==
	Name:               addons-001438
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-001438
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=addons-001438
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_22_28_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-001438
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-001438"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:22:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-001438
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:31:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:31:09 +0000   Mon, 16 Sep 2024 10:22:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:31:09 +0000   Mon, 16 Sep 2024 10:22:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:31:09 +0000   Mon, 16 Sep 2024 10:22:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:31:09 +0000   Mon, 16 Sep 2024 10:22:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.72
	  Hostname:    addons-001438
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 8b69a913a20a4259950d0bf801229c28
	  System UUID:                8b69a913-a20a-4259-950d-0bf801229c28
	  Boot ID:                    7d21de27-dd4e-4002-9fc0-df14a0ff761f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (17 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  gcp-auth                    gcp-auth-89d5ffd79-jg5wz                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m25s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-jhd4w    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         8m30s
	  kube-system                 coredns-7c65d6cfc9-j5ndn                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     8m37s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m28s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m27s
	  kube-system                 csi-hostpathplugin-xgk62                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m28s
	  kube-system                 etcd-addons-001438                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m43s
	  kube-system                 kube-apiserver-addons-001438                250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m43s
	  kube-system                 kube-controller-manager-addons-001438       200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m43s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m34s
	  kube-system                 kube-proxy-66flj                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m38s
	  kube-system                 kube-scheduler-addons-001438                100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m43s
	  kube-system                 snapshot-controller-56fcc65765-8nq94        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m31s
	  kube-system                 snapshot-controller-56fcc65765-pv2sr        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m31s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m33s
	  local-path-storage          local-path-provisioner-86d989889c-rj67m     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m31s
	  yakd-dashboard              yakd-dashboard-67d98fc6b-jnpkm              0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     8m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             388Mi (10%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 8m34s  kube-proxy       
	  Normal  Starting                 8m43s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m43s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m42s  kubelet          Node addons-001438 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m42s  kubelet          Node addons-001438 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m42s  kubelet          Node addons-001438 status is now: NodeHasSufficientPID
	  Normal  NodeReady                8m41s  kubelet          Node addons-001438 status is now: NodeReady
	  Normal  RegisteredNode           8m38s  node-controller  Node addons-001438 event: Registered Node addons-001438 in Controller
	
	
	==> dmesg <==
	[  +4.002627] systemd-fstab-generator[744]: Ignoring "noauto" option for root device
	[  +4.196359] systemd-fstab-generator[862]: Ignoring "noauto" option for root device
	[  +0.061696] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.999876] systemd-fstab-generator[1193]: Ignoring "noauto" option for root device
	[  +0.091472] kauditd_printk_skb: 69 callbacks suppressed
	[  +4.774952] systemd-fstab-generator[1325]: Ignoring "noauto" option for root device
	[  +1.497885] kauditd_printk_skb: 43 callbacks suppressed
	[  +5.466780] kauditd_printk_skb: 125 callbacks suppressed
	[  +5.018877] kauditd_printk_skb: 136 callbacks suppressed
	[  +5.254117] kauditd_printk_skb: 38 callbacks suppressed
	[Sep16 10:23] kauditd_printk_skb: 9 callbacks suppressed
	[ +17.876932] kauditd_printk_skb: 7 callbacks suppressed
	[ +33.888489] kauditd_printk_skb: 37 callbacks suppressed
	[Sep16 10:24] kauditd_printk_skb: 34 callbacks suppressed
	[  +6.263650] kauditd_printk_skb: 76 callbacks suppressed
	[ +48.109785] kauditd_printk_skb: 33 callbacks suppressed
	[Sep16 10:25] kauditd_printk_skb: 3 callbacks suppressed
	[  +7.297596] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.818881] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.121137] kauditd_printk_skb: 19 callbacks suppressed
	[ +29.616490] kauditd_printk_skb: 37 callbacks suppressed
	[Sep16 10:26] kauditd_printk_skb: 2 callbacks suppressed
	[  +8.276540] kauditd_printk_skb: 28 callbacks suppressed
	[Sep16 10:27] kauditd_printk_skb: 2 callbacks suppressed
	[Sep16 10:31] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [1aabe5cb48f97ae6d5205b827466f0ed18fcf88945331a69a0f69c36fdc69b84] <==
	{"level":"info","ts":"2024-09-16T10:25:01.423722Z","caller":"traceutil/trace.go:171","msg":"trace[1526018823] transaction","detail":"{read_only:false; response_revision:1249; number_of_response:1; }","duration":"284.258855ms","start":"2024-09-16T10:25:01.139452Z","end":"2024-09-16T10:25:01.423711Z","steps":["trace[1526018823] 'process raft request'  (duration: 284.165558ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:25:01.424593Z","caller":"traceutil/trace.go:171","msg":"trace[1620023283] linearizableReadLoop","detail":"{readStateIndex:1296; appliedIndex:1296; }","duration":"253.838283ms","start":"2024-09-16T10:25:01.170745Z","end":"2024-09-16T10:25:01.424583Z","steps":["trace[1620023283] 'read index received'  (duration: 253.835456ms)","trace[1620023283] 'applied index is now lower than readState.Index'  (duration: 2.263µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-16T10:25:01.424681Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"253.948565ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T10:25:01.424719Z","caller":"traceutil/trace.go:171","msg":"trace[1658095100] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1249; }","duration":"253.992891ms","start":"2024-09-16T10:25:01.170719Z","end":"2024-09-16T10:25:01.424712Z","steps":["trace[1658095100] 'agreement among raft nodes before linearized reading'  (duration: 253.933158ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:25:01.430878Z","caller":"traceutil/trace.go:171","msg":"trace[196824448] transaction","detail":"{read_only:false; response_revision:1250; number_of_response:1; }","duration":"219.615242ms","start":"2024-09-16T10:25:01.211190Z","end":"2024-09-16T10:25:01.430805Z","steps":["trace[196824448] 'process raft request'  (duration: 217.799649ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:25:01.432286Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"248.218738ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T10:25:01.432549Z","caller":"traceutil/trace.go:171","msg":"trace[1250515915] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1250; }","duration":"248.433899ms","start":"2024-09-16T10:25:01.183901Z","end":"2024-09-16T10:25:01.432335Z","steps":["trace[1250515915] 'agreement among raft nodes before linearized reading'  (duration: 246.789324ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:25:03.917472Z","caller":"traceutil/trace.go:171","msg":"trace[1132617141] linearizableReadLoop","detail":"{readStateIndex:1302; appliedIndex:1301; }","duration":"256.411132ms","start":"2024-09-16T10:25:03.661047Z","end":"2024-09-16T10:25:03.917458Z","steps":["trace[1132617141] 'read index received'  (duration: 256.216658ms)","trace[1132617141] 'applied index is now lower than readState.Index'  (duration: 193.873µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-16T10:25:03.917646Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"256.564415ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshots/\" range_end:\"/registry/snapshot.storage.k8s.io/volumesnapshots0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T10:25:03.917689Z","caller":"traceutil/trace.go:171","msg":"trace[1681803764] range","detail":"{range_begin:/registry/snapshot.storage.k8s.io/volumesnapshots/; range_end:/registry/snapshot.storage.k8s.io/volumesnapshots0; response_count:0; response_revision:1254; }","duration":"256.635309ms","start":"2024-09-16T10:25:03.661043Z","end":"2024-09-16T10:25:03.917678Z","steps":["trace[1681803764] 'agreement among raft nodes before linearized reading'  (duration: 256.524591ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:25:03.917698Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"246.498369ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T10:25:03.917721Z","caller":"traceutil/trace.go:171","msg":"trace[320039730] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1254; }","duration":"246.52737ms","start":"2024-09-16T10:25:03.671187Z","end":"2024-09-16T10:25:03.917715Z","steps":["trace[320039730] 'agreement among raft nodes before linearized reading'  (duration: 246.484981ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:25:03.917824Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"234.395252ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T10:25:03.917834Z","caller":"traceutil/trace.go:171","msg":"trace[699037525] transaction","detail":"{read_only:false; response_revision:1254; number_of_response:1; }","duration":"461.96825ms","start":"2024-09-16T10:25:03.455860Z","end":"2024-09-16T10:25:03.917828Z","steps":["trace[699037525] 'process raft request'  (duration: 461.454179ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:25:03.917838Z","caller":"traceutil/trace.go:171","msg":"trace[618256897] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1254; }","duration":"234.40851ms","start":"2024-09-16T10:25:03.683425Z","end":"2024-09-16T10:25:03.917833Z","steps":["trace[618256897] 'agreement among raft nodes before linearized reading'  (duration: 234.386479ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:25:03.917919Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-16T10:25:03.455845Z","time spent":"462.003063ms","remote":"127.0.0.1:51374","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1251 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-09-16T10:25:42.523876Z","caller":"traceutil/trace.go:171","msg":"trace[565706559] transaction","detail":"{read_only:false; response_revision:1399; number_of_response:1; }","duration":"393.956218ms","start":"2024-09-16T10:25:42.129887Z","end":"2024-09-16T10:25:42.523844Z","steps":["trace[565706559] 'process raft request'  (duration: 393.821788ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:25:42.524080Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-16T10:25:42.129864Z","time spent":"394.119545ms","remote":"127.0.0.1:51374","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1398 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-09-16T10:25:42.533976Z","caller":"traceutil/trace.go:171","msg":"trace[668376333] linearizableReadLoop","detail":"{readStateIndex:1459; appliedIndex:1458; }","duration":"302.69985ms","start":"2024-09-16T10:25:42.231262Z","end":"2024-09-16T10:25:42.533962Z","steps":["trace[668376333] 'read index received'  (duration: 293.491454ms)","trace[668376333] 'applied index is now lower than readState.Index'  (duration: 9.207628ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-16T10:25:42.535969Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"205.605451ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" ","response":"range_response_count:1 size:553"}
	{"level":"info","ts":"2024-09-16T10:25:42.536065Z","caller":"traceutil/trace.go:171","msg":"trace[19888550] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1400; }","duration":"205.726154ms","start":"2024-09-16T10:25:42.330329Z","end":"2024-09-16T10:25:42.536056Z","steps":["trace[19888550] 'agreement among raft nodes before linearized reading'  (duration: 205.527055ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:25:42.536191Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"304.924785ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T10:25:42.536244Z","caller":"traceutil/trace.go:171","msg":"trace[1740705082] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1400; }","duration":"304.971706ms","start":"2024-09-16T10:25:42.231257Z","end":"2024-09-16T10:25:42.536228Z","steps":["trace[1740705082] 'agreement among raft nodes before linearized reading'  (duration: 304.915956ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:25:42.537030Z","caller":"traceutil/trace.go:171","msg":"trace[778126279] transaction","detail":"{read_only:false; response_revision:1400; number_of_response:1; }","duration":"337.225123ms","start":"2024-09-16T10:25:42.199749Z","end":"2024-09-16T10:25:42.536974Z","steps":["trace[778126279] 'process raft request'  (duration: 333.931313ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:25:42.537228Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-16T10:25:42.199733Z","time spent":"337.391985ms","remote":"127.0.0.1:51498","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":540,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/addons-001438\" mod_revision:1384 > success:<request_put:<key:\"/registry/leases/kube-node-lease/addons-001438\" value_size:486 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/addons-001438\" > >"}
	
	
	==> gcp-auth [c0c62d19fc341b10ebf89fe58a47a68881bae991a531961d93c1579a9a1948e7] <==
	2024/09/16 10:25:06 GCP Auth Webhook started!
	2024/09/16 10:25:09 Ready to marshal response ...
	2024/09/16 10:25:09 Ready to write response ...
	2024/09/16 10:25:09 Ready to marshal response ...
	2024/09/16 10:25:09 Ready to write response ...
	2024/09/16 10:25:09 Ready to marshal response ...
	2024/09/16 10:25:09 Ready to write response ...
	
	
	==> kernel <==
	 10:31:10 up 9 min,  0 users,  load average: 0.06, 0.49, 0.40
	Linux addons-001438 5.10.207 #1 SMP Sun Sep 15 20:39:46 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [bfff5b2d379857237c75eff7ae9c54b1eaa410285bebed27cc82511c761eff77] <==
	I0916 10:22:40.932409       1 controller.go:615] quota admission added evaluator for: jobs.batch
	I0916 10:22:42.426039       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-attacher" clusterIPs={"IPv4":"10.106.146.100"}
	I0916 10:22:42.456409       1 controller.go:615] quota admission added evaluator for: statefulsets.apps
	I0916 10:22:42.660969       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.110.102.193"}
	I0916 10:22:44.945009       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.106.134.141"}
	W0916 10:23:38.948410       1 handler_proxy.go:99] no RequestInfo found in the context
	E0916 10:23:38.948711       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0916 10:23:38.949896       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0916 10:23:38.958493       1 handler_proxy.go:99] no RequestInfo found in the context
	E0916 10:23:38.958543       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0916 10:23:38.959752       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0916 10:24:18.395108       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.30.150:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.99.30.150:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.99.30.150:443: connect: connection refused" logger="UnhandledError"
	W0916 10:24:18.395300       1 handler_proxy.go:99] no RequestInfo found in the context
	E0916 10:24:18.395442       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0916 10:24:18.398244       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.30.150:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.99.30.150:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.99.30.150:443: connect: connection refused" logger="UnhandledError"
	I0916 10:24:18.453414       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0916 10:25:09.633337       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.110.80.80"}
	I0916 10:27:07.962789       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0916 10:27:08.990230       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	
	
	==> kube-controller-manager [2d34a4e3596c2106b54d8a84abfb811c307cb2971374422f5c532a60e0cde3a3] <==
	I0916 10:27:16.859651       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-67d98fc6b" duration="77.465µs"
	W0916 10:27:17.976531       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:27:17.976597       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0916 10:27:18.171334       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
	I0916 10:27:19.596965       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/cloud-spanner-emulator-769b77f747" duration="4.818µs"
	W0916 10:27:29.140580       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:27:29.140708       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0916 10:27:32.400681       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0916 10:27:32.400818       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:27:32.833300       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0916 10:27:32.833453       1 shared_informer.go:320] Caches are synced for garbage collector
	W0916 10:27:52.111053       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:27:52.111207       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 10:28:17.834164       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:28:17.834292       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0916 10:29:03.861818       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-67d98fc6b" duration="211.968µs"
	W0916 10:29:17.755994       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:29:17.756149       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0916 10:29:18.856763       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-67d98fc6b" duration="136.61µs"
	W0916 10:30:11.061208       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:30:11.061443       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 10:30:57.147741       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:30:57.147896       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0916 10:31:09.101904       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="6.613µs"
	I0916 10:31:09.180185       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-001438"
	
	
	==> kube-proxy [60269ac0552c1882216cfc166f51b2cc05e0c33fccab3cf44f6f6ae889b5377c] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0916 10:22:35.282699       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0916 10:22:35.409784       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.72"]
	E0916 10:22:35.409847       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:22:36.135283       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0916 10:22:36.135476       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0916 10:22:36.135545       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:22:36.146626       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:22:36.146849       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:22:36.146861       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:22:36.156579       1 config.go:199] "Starting service config controller"
	I0916 10:22:36.156604       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:22:36.166809       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:22:36.166838       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:22:36.168180       1 config.go:328] "Starting node config controller"
	I0916 10:22:36.168189       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:22:36.258515       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:22:36.268518       1 shared_informer.go:320] Caches are synced for node config
	I0916 10:22:36.268639       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [5a4816dc33e761b9fdbc5948f09898492ce9a2dc128c281cf5085d61d7f1b237] <==
	W0916 10:22:25.363221       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 10:22:25.363254       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:22:25.363389       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0916 10:22:25.363420       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 10:22:25.363573       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0916 10:22:25.363425       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:22:25.363533       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 10:22:25.363941       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:22:26.174422       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 10:22:26.174473       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:22:26.225213       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 10:22:26.225308       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:22:26.333904       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 10:22:26.333957       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:22:26.350221       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 10:22:26.350326       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:22:26.406843       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 10:22:26.406982       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:22:26.446248       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 10:22:26.446395       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:22:26.547116       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 10:22:26.547206       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:22:26.704254       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 10:22:26.704303       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0916 10:22:28.953769       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 10:30:27 addons-001438 kubelet[1200]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 16 10:30:28 addons-001438 kubelet[1200]: E0916 10:30:28.226096    1200 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482628225334910,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:458879,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:30:28 addons-001438 kubelet[1200]: E0916 10:30:28.226123    1200 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482628225334910,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:458879,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:30:34 addons-001438 kubelet[1200]: E0916 10:30:34.841692    1200 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"yakd\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624\\\"\"" pod="yakd-dashboard/yakd-dashboard-67d98fc6b-jnpkm" podUID="7d5fb34e-a0b6-4b26-9fd6-2ecc1ecc3981"
	Sep 16 10:30:38 addons-001438 kubelet[1200]: E0916 10:30:38.228542    1200 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482638228076062,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:458879,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:30:38 addons-001438 kubelet[1200]: E0916 10:30:38.228926    1200 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482638228076062,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:458879,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:30:40 addons-001438 kubelet[1200]: I0916 10:30:40.839662    1200 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/coredns-7c65d6cfc9-j5ndn" secret="" err="secret \"gcp-auth\" not found"
	Sep 16 10:30:48 addons-001438 kubelet[1200]: E0916 10:30:48.232295    1200 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482648231815580,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:458879,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:30:48 addons-001438 kubelet[1200]: E0916 10:30:48.232991    1200 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482648231815580,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:458879,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:30:49 addons-001438 kubelet[1200]: E0916 10:30:49.840427    1200 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"yakd\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624\\\"\"" pod="yakd-dashboard/yakd-dashboard-67d98fc6b-jnpkm" podUID="7d5fb34e-a0b6-4b26-9fd6-2ecc1ecc3981"
	Sep 16 10:30:58 addons-001438 kubelet[1200]: E0916 10:30:58.235433    1200 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482658234973287,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:458879,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:30:58 addons-001438 kubelet[1200]: E0916 10:30:58.235474    1200 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482658234973287,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:458879,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:31:01 addons-001438 kubelet[1200]: E0916 10:31:01.843288    1200 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"yakd\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624\\\"\"" pod="yakd-dashboard/yakd-dashboard-67d98fc6b-jnpkm" podUID="7d5fb34e-a0b6-4b26-9fd6-2ecc1ecc3981"
	Sep 16 10:31:08 addons-001438 kubelet[1200]: E0916 10:31:08.239282    1200 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482668238871323,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:458879,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:31:08 addons-001438 kubelet[1200]: E0916 10:31:08.239653    1200 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482668238871323,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:458879,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:31:10 addons-001438 kubelet[1200]: I0916 10:31:10.552849    1200 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/76382ab7-9b7a-4bd6-b19c-7a77ba051f1d-tmp-dir\") pod \"76382ab7-9b7a-4bd6-b19c-7a77ba051f1d\" (UID: \"76382ab7-9b7a-4bd6-b19c-7a77ba051f1d\") "
	Sep 16 10:31:10 addons-001438 kubelet[1200]: I0916 10:31:10.552910    1200 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfr2l\" (UniqueName: \"kubernetes.io/projected/76382ab7-9b7a-4bd6-b19c-7a77ba051f1d-kube-api-access-nfr2l\") pod \"76382ab7-9b7a-4bd6-b19c-7a77ba051f1d\" (UID: \"76382ab7-9b7a-4bd6-b19c-7a77ba051f1d\") "
	Sep 16 10:31:10 addons-001438 kubelet[1200]: I0916 10:31:10.553725    1200 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76382ab7-9b7a-4bd6-b19c-7a77ba051f1d-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "76382ab7-9b7a-4bd6-b19c-7a77ba051f1d" (UID: "76382ab7-9b7a-4bd6-b19c-7a77ba051f1d"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Sep 16 10:31:10 addons-001438 kubelet[1200]: I0916 10:31:10.557317    1200 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76382ab7-9b7a-4bd6-b19c-7a77ba051f1d-kube-api-access-nfr2l" (OuterVolumeSpecName: "kube-api-access-nfr2l") pod "76382ab7-9b7a-4bd6-b19c-7a77ba051f1d" (UID: "76382ab7-9b7a-4bd6-b19c-7a77ba051f1d"). InnerVolumeSpecName "kube-api-access-nfr2l". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 16 10:31:10 addons-001438 kubelet[1200]: I0916 10:31:10.653408    1200 reconciler_common.go:288] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/76382ab7-9b7a-4bd6-b19c-7a77ba051f1d-tmp-dir\") on node \"addons-001438\" DevicePath \"\""
	Sep 16 10:31:10 addons-001438 kubelet[1200]: I0916 10:31:10.653485    1200 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-nfr2l\" (UniqueName: \"kubernetes.io/projected/76382ab7-9b7a-4bd6-b19c-7a77ba051f1d-kube-api-access-nfr2l\") on node \"addons-001438\" DevicePath \"\""
	Sep 16 10:31:10 addons-001438 kubelet[1200]: I0916 10:31:10.764247    1200 scope.go:117] "RemoveContainer" containerID="0024bbca27aaca342e5d8f82c4da128027c7976c1e62ed95ae9cf694c9f92bba"
	Sep 16 10:31:10 addons-001438 kubelet[1200]: I0916 10:31:10.797878    1200 scope.go:117] "RemoveContainer" containerID="0024bbca27aaca342e5d8f82c4da128027c7976c1e62ed95ae9cf694c9f92bba"
	Sep 16 10:31:10 addons-001438 kubelet[1200]: E0916 10:31:10.801088    1200 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0024bbca27aaca342e5d8f82c4da128027c7976c1e62ed95ae9cf694c9f92bba\": container with ID starting with 0024bbca27aaca342e5d8f82c4da128027c7976c1e62ed95ae9cf694c9f92bba not found: ID does not exist" containerID="0024bbca27aaca342e5d8f82c4da128027c7976c1e62ed95ae9cf694c9f92bba"
	Sep 16 10:31:10 addons-001438 kubelet[1200]: I0916 10:31:10.801139    1200 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0024bbca27aaca342e5d8f82c4da128027c7976c1e62ed95ae9cf694c9f92bba"} err="failed to get container status \"0024bbca27aaca342e5d8f82c4da128027c7976c1e62ed95ae9cf694c9f92bba\": rpc error: code = NotFound desc = could not find container \"0024bbca27aaca342e5d8f82c4da128027c7976c1e62ed95ae9cf694c9f92bba\": container with ID starting with 0024bbca27aaca342e5d8f82c4da128027c7976c1e62ed95ae9cf694c9f92bba not found: ID does not exist"
	
	
	==> storage-provisioner [20d2f3360f32056e9d5320e2e27b14e89f34121e47fe3ef6dc68434fd12cbe4e] <==
	I0916 10:22:41.307950       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 10:22:41.369058       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 10:22:41.369154       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 10:22:41.391597       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 10:22:41.391782       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-001438_7c863089-ab00-4bae-802b-9e04d7461e0b!
	I0916 10:22:41.394290       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"97b3cde4-08a8-47d7-a9cc-7251679ab4d1", APIVersion:"v1", ResourceVersion:"712", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-001438_7c863089-ab00-4bae-802b-9e04d7461e0b became leader
	I0916 10:22:41.492688       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-001438_7c863089-ab00-4bae-802b-9e04d7461e0b!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-001438 -n addons-001438
helpers_test.go:261: (dbg) Run:  kubectl --context addons-001438 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context addons-001438 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (473.303µs)
helpers_test.go:263: kubectl --context addons-001438 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestAddons/parallel/MetricsServer (316.03s)
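
The post-mortem kubectl calls here, like the other failing kubectl invocations in this report, exit with "fork/exec /usr/local/bin/kubectl: exec format error". That error (ENOEXEC) is raised by the kernel before kubectl runs at all, and it usually means the binary at that path was built for a different CPU architecture than the machine executing it, or is otherwise not a valid executable. Below is a minimal, hypothetical check (not part of the minikube test suite) that reads the binary's ELF header and compares it with the architecture of the machine running the check; the path is simply the one quoted in the failure messages.

// archcheck.go - hypothetical helper, not part of the minikube test suite.
// It inspects /usr/local/bin/kubectl (the path reported in the failures above)
// and prints its ELF machine type next to the host architecture, which is the
// usual way to confirm an "exec format error".
package main

import (
	"debug/elf"
	"fmt"
	"log"
	"runtime"
)

func main() {
	f, err := elf.Open("/usr/local/bin/kubectl")
	if err != nil {
		log.Fatalf("open kubectl: %v", err)
	}
	defer f.Close()

	// A mismatch here (for example an aarch64 binary on an amd64 host)
	// reproduces the "exec format error" seen in the kubectl calls above.
	fmt.Printf("kubectl ELF machine: %v, host GOARCH: %s\n", f.Machine, runtime.GOARCH)
}
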

                                                
                                    
TestAddons/parallel/HelmTiller (100.79s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 2.100178ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-b76fb" [a96b112c-4171-4416-9e14-ac1f69fd033e] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.004224922s
addons_test.go:475: (dbg) Run:  kubectl --context addons-001438 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Non-zero exit: kubectl --context addons-001438 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: fork/exec /usr/local/bin/kubectl: exec format error (412.975µs)
addons_test.go:475: (dbg) Run:  kubectl --context addons-001438 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Non-zero exit: kubectl --context addons-001438 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: fork/exec /usr/local/bin/kubectl: exec format error (407.789µs)
addons_test.go:475: (dbg) Run:  kubectl --context addons-001438 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Non-zero exit: kubectl --context addons-001438 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: fork/exec /usr/local/bin/kubectl: exec format error (411.155µs)
addons_test.go:475: (dbg) Run:  kubectl --context addons-001438 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Non-zero exit: kubectl --context addons-001438 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: fork/exec /usr/local/bin/kubectl: exec format error (396.352µs)
addons_test.go:475: (dbg) Run:  kubectl --context addons-001438 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Non-zero exit: kubectl --context addons-001438 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: fork/exec /usr/local/bin/kubectl: exec format error (410.164µs)
addons_test.go:475: (dbg) Run:  kubectl --context addons-001438 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Non-zero exit: kubectl --context addons-001438 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: fork/exec /usr/local/bin/kubectl: exec format error (453.317µs)
addons_test.go:475: (dbg) Run:  kubectl --context addons-001438 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Non-zero exit: kubectl --context addons-001438 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: fork/exec /usr/local/bin/kubectl: exec format error (438.703µs)
addons_test.go:475: (dbg) Run:  kubectl --context addons-001438 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Non-zero exit: kubectl --context addons-001438 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: fork/exec /usr/local/bin/kubectl: exec format error (407.432µs)
addons_test.go:475: (dbg) Run:  kubectl --context addons-001438 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Non-zero exit: kubectl --context addons-001438 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: fork/exec /usr/local/bin/kubectl: exec format error (461.48µs)
addons_test.go:475: (dbg) Run:  kubectl --context addons-001438 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Non-zero exit: kubectl --context addons-001438 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: fork/exec /usr/local/bin/kubectl: exec format error (384.981µs)
addons_test.go:475: (dbg) Run:  kubectl --context addons-001438 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Non-zero exit: kubectl --context addons-001438 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: fork/exec /usr/local/bin/kubectl: exec format error (414.676µs)
addons_test.go:475: (dbg) Run:  kubectl --context addons-001438 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Non-zero exit: kubectl --context addons-001438 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: fork/exec /usr/local/bin/kubectl: exec format error (389.769µs)
addons_test.go:489: failed checking helm tiller: fork/exec /usr/local/bin/kubectl: exec format error
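
The twelve retries above cannot succeed: ENOEXEC is returned before kubectl ever starts, so re-running the same binary gives the same result each time. As a complement to the ELF inspection sketched earlier, a small hypothetical Go probe (assuming Linux and the same kubectl path; not part of addons_test.go) can detect that errno through os/exec and stop retrying early.

// enoexec.go - hypothetical probe, not part of addons_test.go.
// It runs the same kubectl binary the tests use and distinguishes
// "exec format error" (ENOEXEC) from ordinary command failures,
// since no amount of retrying fixes an architecture mismatch.
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"syscall"
)

func main() {
	err := exec.Command("/usr/local/bin/kubectl", "version", "--client").Run()
	switch {
	case err == nil:
		fmt.Println("kubectl executed; the binary itself is fine")
	case errors.Is(err, syscall.ENOEXEC):
		fmt.Println("exec format error: kubectl is built for a different architecture; retrying will not help")
	default:
		fmt.Printf("kubectl failed for another reason: %v\n", err)
	}
}
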
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-001438 addons disable helm-tiller --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-001438 -n addons-001438
helpers_test.go:244: <<< TestAddons/parallel/HelmTiller FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/HelmTiller]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-001438 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-001438 logs -n 25: (1.412060248s)
helpers_test.go:252: TestAddons/parallel/HelmTiller logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-931581 | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC |                     |
	|         | -p download-only-931581              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC | 16 Sep 24 10:21 UTC |
	| delete  | -p download-only-931581              | download-only-931581 | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC | 16 Sep 24 10:21 UTC |
	| start   | -o=json --download-only              | download-only-573915 | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC |                     |
	|         | -p download-only-573915              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC | 16 Sep 24 10:21 UTC |
	| delete  | -p download-only-573915              | download-only-573915 | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC | 16 Sep 24 10:21 UTC |
	| delete  | -p download-only-931581              | download-only-931581 | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC | 16 Sep 24 10:21 UTC |
	| delete  | -p download-only-573915              | download-only-573915 | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC | 16 Sep 24 10:21 UTC |
	| start   | --download-only -p                   | binary-mirror-928489 | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC |                     |
	|         | binary-mirror-928489                 |                      |         |         |                     |                     |
	|         | --alsologtostderr                    |                      |         |         |                     |                     |
	|         | --binary-mirror                      |                      |         |         |                     |                     |
	|         | http://127.0.0.1:42715               |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-928489              | binary-mirror-928489 | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC | 16 Sep 24 10:21 UTC |
	| addons  | enable dashboard -p                  | addons-001438        | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC |                     |
	|         | addons-001438                        |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-001438        | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC |                     |
	|         | addons-001438                        |                      |         |         |                     |                     |
	| start   | -p addons-001438 --wait=true         | addons-001438        | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC | 16 Sep 24 10:25 UTC |
	|         | --memory=4000 --alsologtostderr      |                      |         |         |                     |                     |
	|         | --addons=registry                    |                      |         |         |                     |                     |
	|         | --addons=metrics-server              |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	|         | --addons=ingress                     |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                 |                      |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-001438        | jenkins | v1.34.0 | 16 Sep 24 10:25 UTC | 16 Sep 24 10:25 UTC |
	|         | -p addons-001438                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin         | addons-001438        | jenkins | v1.34.0 | 16 Sep 24 10:25 UTC | 16 Sep 24 10:25 UTC |
	|         | -p addons-001438                     |                      |         |         |                     |                     |
	| ip      | addons-001438 ip                     | addons-001438        | jenkins | v1.34.0 | 16 Sep 24 10:25 UTC | 16 Sep 24 10:25 UTC |
	| addons  | addons-001438 addons disable         | addons-001438        | jenkins | v1.34.0 | 16 Sep 24 10:25 UTC | 16 Sep 24 10:25 UTC |
	|         | registry --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | addons-001438 addons disable         | addons-001438        | jenkins | v1.34.0 | 16 Sep 24 10:25 UTC | 16 Sep 24 10:25 UTC |
	|         | headlamp --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | addons-001438 addons disable         | addons-001438        | jenkins | v1.34.0 | 16 Sep 24 10:26 UTC | 16 Sep 24 10:27 UTC |
	|         | helm-tiller --alsologtostderr        |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:21:42
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:21:42.990297   12265 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:21:42.990427   12265 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:21:42.990438   12265 out.go:358] Setting ErrFile to fd 2...
	I0916 10:21:42.990444   12265 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:21:42.990619   12265 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3851/.minikube/bin
	I0916 10:21:42.991237   12265 out.go:352] Setting JSON to false
	I0916 10:21:42.992075   12265 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":253,"bootTime":1726481850,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:21:42.992165   12265 start.go:139] virtualization: kvm guest
	I0916 10:21:42.994057   12265 out.go:177] * [addons-001438] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:21:42.995363   12265 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:21:42.995366   12265 notify.go:220] Checking for updates...
	I0916 10:21:42.996620   12265 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:21:42.997884   12265 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3851/kubeconfig
	I0916 10:21:42.999244   12265 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 10:21:43.000448   12265 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:21:43.001744   12265 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:21:43.003140   12265 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:21:43.035292   12265 out.go:177] * Using the kvm2 driver based on user configuration
	I0916 10:21:43.036591   12265 start.go:297] selected driver: kvm2
	I0916 10:21:43.036604   12265 start.go:901] validating driver "kvm2" against <nil>
	I0916 10:21:43.036617   12265 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:21:43.037618   12265 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:21:43.037687   12265 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19651-3851/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0916 10:21:43.052612   12265 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0916 10:21:43.052654   12265 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 10:21:43.052880   12265 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:21:43.052910   12265 cni.go:84] Creating CNI manager for ""
	I0916 10:21:43.052948   12265 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0916 10:21:43.052956   12265 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0916 10:21:43.053000   12265 start.go:340] cluster config:
	{Name:addons-001438 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-001438 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:21:43.053089   12265 iso.go:125] acquiring lock: {Name:mk8165b793e44378487d96b0de120a258f46e187 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:21:43.054779   12265 out.go:177] * Starting "addons-001438" primary control-plane node in "addons-001438" cluster
	I0916 10:21:43.056048   12265 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:21:43.056073   12265 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0916 10:21:43.056099   12265 cache.go:56] Caching tarball of preloaded images
	I0916 10:21:43.056171   12265 preload.go:172] Found /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 10:21:43.056181   12265 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 10:21:43.056464   12265 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/config.json ...
	I0916 10:21:43.056479   12265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/config.json: {Name:mke7feffe145119f1110e818375562c2195d4fa4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:21:43.056601   12265 start.go:360] acquireMachinesLock for addons-001438: {Name:mk413037138532b69f77e412c3960796c09316f1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:21:43.056638   12265 start.go:364] duration metric: took 25.099µs to acquireMachinesLock for "addons-001438"
	I0916 10:21:43.056653   12265 start.go:93] Provisioning new machine with config: &{Name:addons-001438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-001438 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 10:21:43.056703   12265 start.go:125] createHost starting for "" (driver="kvm2")
	I0916 10:21:43.058226   12265 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0916 10:21:43.058340   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:21:43.058376   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:21:43.072993   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45045
	I0916 10:21:43.073475   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:21:43.073995   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:21:43.074020   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:21:43.074422   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:21:43.074620   12265 main.go:141] libmachine: (addons-001438) Calling .GetMachineName
	I0916 10:21:43.074787   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:21:43.074946   12265 start.go:159] libmachine.API.Create for "addons-001438" (driver="kvm2")
	I0916 10:21:43.074989   12265 client.go:168] LocalClient.Create starting
	I0916 10:21:43.075021   12265 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem
	I0916 10:21:43.311518   12265 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem
	I0916 10:21:43.475888   12265 main.go:141] libmachine: Running pre-create checks...
	I0916 10:21:43.475917   12265 main.go:141] libmachine: (addons-001438) Calling .PreCreateCheck
	I0916 10:21:43.476396   12265 main.go:141] libmachine: (addons-001438) Calling .GetConfigRaw
	I0916 10:21:43.476796   12265 main.go:141] libmachine: Creating machine...
	I0916 10:21:43.476809   12265 main.go:141] libmachine: (addons-001438) Calling .Create
	I0916 10:21:43.476954   12265 main.go:141] libmachine: (addons-001438) Creating KVM machine...
	I0916 10:21:43.478137   12265 main.go:141] libmachine: (addons-001438) DBG | found existing default KVM network
	I0916 10:21:43.478893   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:43.478751   12287 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001151f0}
	I0916 10:21:43.478937   12265 main.go:141] libmachine: (addons-001438) DBG | created network xml: 
	I0916 10:21:43.478958   12265 main.go:141] libmachine: (addons-001438) DBG | <network>
	I0916 10:21:43.478967   12265 main.go:141] libmachine: (addons-001438) DBG |   <name>mk-addons-001438</name>
	I0916 10:21:43.478974   12265 main.go:141] libmachine: (addons-001438) DBG |   <dns enable='no'/>
	I0916 10:21:43.478986   12265 main.go:141] libmachine: (addons-001438) DBG |   
	I0916 10:21:43.478998   12265 main.go:141] libmachine: (addons-001438) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0916 10:21:43.479006   12265 main.go:141] libmachine: (addons-001438) DBG |     <dhcp>
	I0916 10:21:43.479018   12265 main.go:141] libmachine: (addons-001438) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0916 10:21:43.479026   12265 main.go:141] libmachine: (addons-001438) DBG |     </dhcp>
	I0916 10:21:43.479036   12265 main.go:141] libmachine: (addons-001438) DBG |   </ip>
	I0916 10:21:43.479087   12265 main.go:141] libmachine: (addons-001438) DBG |   
	I0916 10:21:43.479109   12265 main.go:141] libmachine: (addons-001438) DBG | </network>
	I0916 10:21:43.479150   12265 main.go:141] libmachine: (addons-001438) DBG | 
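For reference, the private libvirt network the kvm2 driver defines from the XML above can be inspected on the host with standard virsh commands. This is only an illustrative verification sketch (not part of the test run); the connection URI and network name come from the log above.

    # Connect to the same URI the driver uses (qemu:///system)
    virsh -c qemu:///system net-list --all
    # Dump the XML of the minikube-created network; it should match the block logged above
    virsh -c qemu:///system net-dumpxml mk-addons-001438
    # Show DHCP leases handed out on that network
    virsh -c qemu:///system net-dhcp-leases mk-addons-001438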
	I0916 10:21:43.484546   12265 main.go:141] libmachine: (addons-001438) DBG | trying to create private KVM network mk-addons-001438 192.168.39.0/24...
	I0916 10:21:43.547822   12265 main.go:141] libmachine: (addons-001438) DBG | private KVM network mk-addons-001438 192.168.39.0/24 created
	I0916 10:21:43.547845   12265 main.go:141] libmachine: (addons-001438) Setting up store path in /home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438 ...
	I0916 10:21:43.547862   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:43.547813   12287 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 10:21:43.547875   12265 main.go:141] libmachine: (addons-001438) Building disk image from file:///home/jenkins/minikube-integration/19651-3851/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso
	I0916 10:21:43.547936   12265 main.go:141] libmachine: (addons-001438) Downloading /home/jenkins/minikube-integration/19651-3851/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19651-3851/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso...
	I0916 10:21:43.797047   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:43.796916   12287 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa...
	I0916 10:21:43.906021   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:43.905909   12287 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/addons-001438.rawdisk...
	I0916 10:21:43.906051   12265 main.go:141] libmachine: (addons-001438) DBG | Writing magic tar header
	I0916 10:21:43.906060   12265 main.go:141] libmachine: (addons-001438) DBG | Writing SSH key tar header
	I0916 10:21:43.906067   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:43.906027   12287 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438 ...
	I0916 10:21:43.906123   12265 main.go:141] libmachine: (addons-001438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438
	I0916 10:21:43.906172   12265 main.go:141] libmachine: (addons-001438) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438 (perms=drwx------)
	I0916 10:21:43.906194   12265 main.go:141] libmachine: (addons-001438) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851/.minikube/machines (perms=drwxr-xr-x)
	I0916 10:21:43.906204   12265 main.go:141] libmachine: (addons-001438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851/.minikube/machines
	I0916 10:21:43.906222   12265 main.go:141] libmachine: (addons-001438) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851/.minikube (perms=drwxr-xr-x)
	I0916 10:21:43.906230   12265 main.go:141] libmachine: (addons-001438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 10:21:43.906236   12265 main.go:141] libmachine: (addons-001438) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851 (perms=drwxrwxr-x)
	I0916 10:21:43.906243   12265 main.go:141] libmachine: (addons-001438) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0916 10:21:43.906248   12265 main.go:141] libmachine: (addons-001438) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0916 10:21:43.906258   12265 main.go:141] libmachine: (addons-001438) Creating domain...
	I0916 10:21:43.906264   12265 main.go:141] libmachine: (addons-001438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851
	I0916 10:21:43.906275   12265 main.go:141] libmachine: (addons-001438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0916 10:21:43.906309   12265 main.go:141] libmachine: (addons-001438) DBG | Checking permissions on dir: /home/jenkins
	I0916 10:21:43.906325   12265 main.go:141] libmachine: (addons-001438) DBG | Checking permissions on dir: /home
	I0916 10:21:43.906338   12265 main.go:141] libmachine: (addons-001438) DBG | Skipping /home - not owner
	I0916 10:21:43.907204   12265 main.go:141] libmachine: (addons-001438) define libvirt domain using xml: 
	I0916 10:21:43.907223   12265 main.go:141] libmachine: (addons-001438) <domain type='kvm'>
	I0916 10:21:43.907235   12265 main.go:141] libmachine: (addons-001438)   <name>addons-001438</name>
	I0916 10:21:43.907246   12265 main.go:141] libmachine: (addons-001438)   <memory unit='MiB'>4000</memory>
	I0916 10:21:43.907255   12265 main.go:141] libmachine: (addons-001438)   <vcpu>2</vcpu>
	I0916 10:21:43.907265   12265 main.go:141] libmachine: (addons-001438)   <features>
	I0916 10:21:43.907274   12265 main.go:141] libmachine: (addons-001438)     <acpi/>
	I0916 10:21:43.907282   12265 main.go:141] libmachine: (addons-001438)     <apic/>
	I0916 10:21:43.907294   12265 main.go:141] libmachine: (addons-001438)     <pae/>
	I0916 10:21:43.907307   12265 main.go:141] libmachine: (addons-001438)     
	I0916 10:21:43.907318   12265 main.go:141] libmachine: (addons-001438)   </features>
	I0916 10:21:43.907327   12265 main.go:141] libmachine: (addons-001438)   <cpu mode='host-passthrough'>
	I0916 10:21:43.907337   12265 main.go:141] libmachine: (addons-001438)   
	I0916 10:21:43.907349   12265 main.go:141] libmachine: (addons-001438)   </cpu>
	I0916 10:21:43.907364   12265 main.go:141] libmachine: (addons-001438)   <os>
	I0916 10:21:43.907373   12265 main.go:141] libmachine: (addons-001438)     <type>hvm</type>
	I0916 10:21:43.907383   12265 main.go:141] libmachine: (addons-001438)     <boot dev='cdrom'/>
	I0916 10:21:43.907392   12265 main.go:141] libmachine: (addons-001438)     <boot dev='hd'/>
	I0916 10:21:43.907402   12265 main.go:141] libmachine: (addons-001438)     <bootmenu enable='no'/>
	I0916 10:21:43.907415   12265 main.go:141] libmachine: (addons-001438)   </os>
	I0916 10:21:43.907427   12265 main.go:141] libmachine: (addons-001438)   <devices>
	I0916 10:21:43.907435   12265 main.go:141] libmachine: (addons-001438)     <disk type='file' device='cdrom'>
	I0916 10:21:43.907452   12265 main.go:141] libmachine: (addons-001438)       <source file='/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/boot2docker.iso'/>
	I0916 10:21:43.907463   12265 main.go:141] libmachine: (addons-001438)       <target dev='hdc' bus='scsi'/>
	I0916 10:21:43.907489   12265 main.go:141] libmachine: (addons-001438)       <readonly/>
	I0916 10:21:43.907508   12265 main.go:141] libmachine: (addons-001438)     </disk>
	I0916 10:21:43.907518   12265 main.go:141] libmachine: (addons-001438)     <disk type='file' device='disk'>
	I0916 10:21:43.907531   12265 main.go:141] libmachine: (addons-001438)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0916 10:21:43.907547   12265 main.go:141] libmachine: (addons-001438)       <source file='/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/addons-001438.rawdisk'/>
	I0916 10:21:43.907558   12265 main.go:141] libmachine: (addons-001438)       <target dev='hda' bus='virtio'/>
	I0916 10:21:43.907568   12265 main.go:141] libmachine: (addons-001438)     </disk>
	I0916 10:21:43.907583   12265 main.go:141] libmachine: (addons-001438)     <interface type='network'>
	I0916 10:21:43.907595   12265 main.go:141] libmachine: (addons-001438)       <source network='mk-addons-001438'/>
	I0916 10:21:43.907606   12265 main.go:141] libmachine: (addons-001438)       <model type='virtio'/>
	I0916 10:21:43.907616   12265 main.go:141] libmachine: (addons-001438)     </interface>
	I0916 10:21:43.907624   12265 main.go:141] libmachine: (addons-001438)     <interface type='network'>
	I0916 10:21:43.907634   12265 main.go:141] libmachine: (addons-001438)       <source network='default'/>
	I0916 10:21:43.907645   12265 main.go:141] libmachine: (addons-001438)       <model type='virtio'/>
	I0916 10:21:43.907667   12265 main.go:141] libmachine: (addons-001438)     </interface>
	I0916 10:21:43.907687   12265 main.go:141] libmachine: (addons-001438)     <serial type='pty'>
	I0916 10:21:43.907697   12265 main.go:141] libmachine: (addons-001438)       <target port='0'/>
	I0916 10:21:43.907706   12265 main.go:141] libmachine: (addons-001438)     </serial>
	I0916 10:21:43.907717   12265 main.go:141] libmachine: (addons-001438)     <console type='pty'>
	I0916 10:21:43.907735   12265 main.go:141] libmachine: (addons-001438)       <target type='serial' port='0'/>
	I0916 10:21:43.907745   12265 main.go:141] libmachine: (addons-001438)     </console>
	I0916 10:21:43.907758   12265 main.go:141] libmachine: (addons-001438)     <rng model='virtio'>
	I0916 10:21:43.907772   12265 main.go:141] libmachine: (addons-001438)       <backend model='random'>/dev/random</backend>
	I0916 10:21:43.907777   12265 main.go:141] libmachine: (addons-001438)     </rng>
	I0916 10:21:43.907785   12265 main.go:141] libmachine: (addons-001438)     
	I0916 10:21:43.907794   12265 main.go:141] libmachine: (addons-001438)     
	I0916 10:21:43.907804   12265 main.go:141] libmachine: (addons-001438)   </devices>
	I0916 10:21:43.907814   12265 main.go:141] libmachine: (addons-001438) </domain>
	I0916 10:21:43.907826   12265 main.go:141] libmachine: (addons-001438) 
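Likewise, the domain defined from the XML above can be examined once it exists. A quick sketch, again purely illustrative and using the domain name from this run:

    # Confirm the domain was defined and check its state and memory/CPU allocation
    virsh -c qemu:///system dominfo addons-001438
    # Dump the effective XML, including the raw disk and the two virtio NICs
    virsh -c qemu:///system dumpxml addons-001438
    # List attached block devices and network interfaces
    virsh -c qemu:///system domblklist addons-001438
    virsh -c qemu:///system domiflist addons-001438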
	I0916 10:21:43.913322   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:98:e7:17 in network default
	I0916 10:21:43.913924   12265 main.go:141] libmachine: (addons-001438) Ensuring networks are active...
	I0916 10:21:43.913942   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:43.914588   12265 main.go:141] libmachine: (addons-001438) Ensuring network default is active
	I0916 10:21:43.914879   12265 main.go:141] libmachine: (addons-001438) Ensuring network mk-addons-001438 is active
	I0916 10:21:43.915337   12265 main.go:141] libmachine: (addons-001438) Getting domain xml...
	I0916 10:21:43.915987   12265 main.go:141] libmachine: (addons-001438) Creating domain...
	I0916 10:21:45.289678   12265 main.go:141] libmachine: (addons-001438) Waiting to get IP...
	I0916 10:21:45.290387   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:45.290811   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:45.290836   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:45.290776   12287 retry.go:31] will retry after 253.823507ms: waiting for machine to come up
	I0916 10:21:45.546308   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:45.546737   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:45.546757   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:45.546713   12287 retry.go:31] will retry after 316.98215ms: waiting for machine to come up
	I0916 10:21:45.865275   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:45.865712   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:45.865742   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:45.865673   12287 retry.go:31] will retry after 438.875906ms: waiting for machine to come up
	I0916 10:21:46.306361   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:46.306829   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:46.306854   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:46.306787   12287 retry.go:31] will retry after 378.922529ms: waiting for machine to come up
	I0916 10:21:46.687272   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:46.687683   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:46.687718   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:46.687648   12287 retry.go:31] will retry after 695.664658ms: waiting for machine to come up
	I0916 10:21:47.384623   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:47.385017   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:47.385044   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:47.384985   12287 retry.go:31] will retry after 669.1436ms: waiting for machine to come up
	I0916 10:21:48.056603   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:48.057159   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:48.057183   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:48.057099   12287 retry.go:31] will retry after 739.217064ms: waiting for machine to come up
	I0916 10:21:48.798348   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:48.798788   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:48.798824   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:48.798748   12287 retry.go:31] will retry after 963.828739ms: waiting for machine to come up
	I0916 10:21:49.763677   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:49.764095   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:49.764120   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:49.764043   12287 retry.go:31] will retry after 1.625531991s: waiting for machine to come up
	I0916 10:21:51.391980   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:51.392322   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:51.392343   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:51.392285   12287 retry.go:31] will retry after 1.960554167s: waiting for machine to come up
	I0916 10:21:53.354469   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:53.354989   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:53.355016   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:53.354937   12287 retry.go:31] will retry after 2.035806393s: waiting for machine to come up
	I0916 10:21:55.393065   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:55.393432   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:55.393451   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:55.393400   12287 retry.go:31] will retry after 3.028756428s: waiting for machine to come up
	I0916 10:21:58.424174   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:58.424544   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:58.424577   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:58.424517   12287 retry.go:31] will retry after 3.769682763s: waiting for machine to come up
	I0916 10:22:02.198084   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:02.198470   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:22:02.198492   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:22:02.198430   12287 retry.go:31] will retry after 5.547519077s: waiting for machine to come up
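The repeated "will retry after ..." lines above are the driver polling with increasing delays until the guest obtains a DHCP lease. The same wait can be reproduced by hand against libvirt (a rough sketch with arbitrary polling interval, not the driver's own code; MAC, network, and domain names are taken from the log):

    # Poll until the VM's MAC shows up in the network's DHCP leases
    until virsh -c qemu:///system net-dhcp-leases mk-addons-001438 | grep -q '52:54:00:9c:55:19'; do
      sleep 2
    done
    # Then read the assigned address from the lease table
    virsh -c qemu:///system domifaddr addons-001438 --source lease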
	I0916 10:22:07.750830   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:07.751191   12265 main.go:141] libmachine: (addons-001438) Found IP for machine: 192.168.39.72
	I0916 10:22:07.751209   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has current primary IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:07.751215   12265 main.go:141] libmachine: (addons-001438) Reserving static IP address...
	I0916 10:22:07.751548   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find host DHCP lease matching {name: "addons-001438", mac: "52:54:00:9c:55:19", ip: "192.168.39.72"} in network mk-addons-001438
	I0916 10:22:07.821469   12265 main.go:141] libmachine: (addons-001438) DBG | Getting to WaitForSSH function...
	I0916 10:22:07.821506   12265 main.go:141] libmachine: (addons-001438) Reserved static IP address: 192.168.39.72
	I0916 10:22:07.821523   12265 main.go:141] libmachine: (addons-001438) Waiting for SSH to be available...
	I0916 10:22:07.823797   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:07.824029   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438
	I0916 10:22:07.824057   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find defined IP address of network mk-addons-001438 interface with MAC address 52:54:00:9c:55:19
	I0916 10:22:07.824199   12265 main.go:141] libmachine: (addons-001438) DBG | Using SSH client type: external
	I0916 10:22:07.824226   12265 main.go:141] libmachine: (addons-001438) DBG | Using SSH private key: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa (-rw-------)
	I0916 10:22:07.824261   12265 main.go:141] libmachine: (addons-001438) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0916 10:22:07.824273   12265 main.go:141] libmachine: (addons-001438) DBG | About to run SSH command:
	I0916 10:22:07.824297   12265 main.go:141] libmachine: (addons-001438) DBG | exit 0
	I0916 10:22:07.835394   12265 main.go:141] libmachine: (addons-001438) DBG | SSH cmd err, output: exit status 255: 
	I0916 10:22:07.835415   12265 main.go:141] libmachine: (addons-001438) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0916 10:22:07.835421   12265 main.go:141] libmachine: (addons-001438) DBG | command : exit 0
	I0916 10:22:07.835428   12265 main.go:141] libmachine: (addons-001438) DBG | err     : exit status 255
	I0916 10:22:07.835435   12265 main.go:141] libmachine: (addons-001438) DBG | output  : 
	I0916 10:22:10.838181   12265 main.go:141] libmachine: (addons-001438) DBG | Getting to WaitForSSH function...
	I0916 10:22:10.840410   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:10.840805   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:10.840830   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:10.840953   12265 main.go:141] libmachine: (addons-001438) DBG | Using SSH client type: external
	I0916 10:22:10.840980   12265 main.go:141] libmachine: (addons-001438) DBG | Using SSH private key: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa (-rw-------)
	I0916 10:22:10.841012   12265 main.go:141] libmachine: (addons-001438) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.72 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0916 10:22:10.841026   12265 main.go:141] libmachine: (addons-001438) DBG | About to run SSH command:
	I0916 10:22:10.841039   12265 main.go:141] libmachine: (addons-001438) DBG | exit 0
	I0916 10:22:10.969218   12265 main.go:141] libmachine: (addons-001438) DBG | SSH cmd err, output: <nil>: 
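The external SSH probe that just succeeded (running `exit 0` as user docker) is equivalent to the following manual check, using the key path, user, and address from the DBG lines above; only a subset of the logged ssh options is shown here:

    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        -i /home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa \
        docker@192.168.39.72 'exit 0' && echo "SSH reachable"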
	I0916 10:22:10.969498   12265 main.go:141] libmachine: (addons-001438) KVM machine creation complete!
	I0916 10:22:10.969791   12265 main.go:141] libmachine: (addons-001438) Calling .GetConfigRaw
	I0916 10:22:10.970351   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:10.970568   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:10.970704   12265 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0916 10:22:10.970716   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:10.971844   12265 main.go:141] libmachine: Detecting operating system of created instance...
	I0916 10:22:10.971857   12265 main.go:141] libmachine: Waiting for SSH to be available...
	I0916 10:22:10.971863   12265 main.go:141] libmachine: Getting to WaitForSSH function...
	I0916 10:22:10.971871   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:10.973963   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:10.974287   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:10.974322   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:10.974443   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:10.974600   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:10.974766   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:10.974897   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:10.975056   12265 main.go:141] libmachine: Using SSH client type: native
	I0916 10:22:10.975258   12265 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.72 22 <nil> <nil>}
	I0916 10:22:10.975270   12265 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0916 10:22:11.084303   12265 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 10:22:11.084322   12265 main.go:141] libmachine: Detecting the provisioner...
	I0916 10:22:11.084329   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:11.086985   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.087399   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:11.087449   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.087637   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:11.087805   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:11.087957   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:11.088052   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:11.088212   12265 main.go:141] libmachine: Using SSH client type: native
	I0916 10:22:11.088404   12265 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.72 22 <nil> <nil>}
	I0916 10:22:11.088420   12265 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0916 10:22:11.197622   12265 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0916 10:22:11.197666   12265 main.go:141] libmachine: found compatible host: buildroot
	I0916 10:22:11.197674   12265 main.go:141] libmachine: Provisioning with buildroot...
	I0916 10:22:11.197683   12265 main.go:141] libmachine: (addons-001438) Calling .GetMachineName
	I0916 10:22:11.197922   12265 buildroot.go:166] provisioning hostname "addons-001438"
	I0916 10:22:11.197936   12265 main.go:141] libmachine: (addons-001438) Calling .GetMachineName
	I0916 10:22:11.198131   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:11.200614   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.200955   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:11.200988   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.201100   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:11.201269   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:11.201396   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:11.201536   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:11.201681   12265 main.go:141] libmachine: Using SSH client type: native
	I0916 10:22:11.201878   12265 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.72 22 <nil> <nil>}
	I0916 10:22:11.201891   12265 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-001438 && echo "addons-001438" | sudo tee /etc/hostname
	I0916 10:22:11.329393   12265 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-001438
	
	I0916 10:22:11.329423   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:11.332085   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.332370   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:11.332397   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.332557   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:11.332746   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:11.332868   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:11.332999   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:11.333118   12265 main.go:141] libmachine: Using SSH client type: native
	I0916 10:22:11.333336   12265 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.72 22 <nil> <nil>}
	I0916 10:22:11.333353   12265 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-001438' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-001438/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-001438' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:22:11.454462   12265 main.go:141] libmachine: SSH cmd err, output: <nil>: 
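The hostname and /etc/hosts changes applied by the two commands above can be spot-checked inside the guest after the fact. A small verification sketch, assuming the profile name from this run:

    minikube -p addons-001438 ssh -- "hostname; grep addons-001438 /etc/hosts"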
	I0916 10:22:11.454486   12265 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3851/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3851/.minikube}
	I0916 10:22:11.454539   12265 buildroot.go:174] setting up certificates
	I0916 10:22:11.454553   12265 provision.go:84] configureAuth start
	I0916 10:22:11.454562   12265 main.go:141] libmachine: (addons-001438) Calling .GetMachineName
	I0916 10:22:11.454823   12265 main.go:141] libmachine: (addons-001438) Calling .GetIP
	I0916 10:22:11.457458   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.457872   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:11.457902   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.458065   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:11.460166   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.460456   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:11.460484   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.460579   12265 provision.go:143] copyHostCerts
	I0916 10:22:11.460674   12265 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem (1123 bytes)
	I0916 10:22:11.460835   12265 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem (1679 bytes)
	I0916 10:22:11.460925   12265 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem (1082 bytes)
	I0916 10:22:11.460997   12265 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem org=jenkins.addons-001438 san=[127.0.0.1 192.168.39.72 addons-001438 localhost minikube]
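The server certificate generated above should carry the SANs listed in that line (127.0.0.1, 192.168.39.72, addons-001438, localhost, minikube). One way to confirm, sketched here for illustration using the path from the log:

    openssl x509 -in /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem \
        -noout -text | grep -A1 'Subject Alternative Name'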
	I0916 10:22:11.639072   12265 provision.go:177] copyRemoteCerts
	I0916 10:22:11.639141   12265 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:22:11.639169   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:11.641767   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.642050   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:11.642076   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.642240   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:11.642415   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:11.642519   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:11.642635   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:11.727509   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 10:22:11.752436   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 10:22:11.776436   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 10:22:11.799597   12265 provision.go:87] duration metric: took 345.032702ms to configureAuth
	I0916 10:22:11.799626   12265 buildroot.go:189] setting minikube options for container-runtime
	I0916 10:22:11.799813   12265 config.go:182] Loaded profile config "addons-001438": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:22:11.799904   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:11.802386   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.802675   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:11.802700   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.802854   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:11.803047   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:11.803187   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:11.803323   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:11.803504   12265 main.go:141] libmachine: Using SSH client type: native
	I0916 10:22:11.803689   12265 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.72 22 <nil> <nil>}
	I0916 10:22:11.803704   12265 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 10:22:12.030350   12265 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
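Once the sysconfig drop-in above is written and crio has restarted, both can be verified from the host through minikube itself (an illustrative check only; the profile name comes from this run):

    minikube -p addons-001438 ssh -- cat /etc/sysconfig/crio.minikube
    minikube -p addons-001438 ssh -- sudo systemctl is-active crio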
	
	I0916 10:22:12.030374   12265 main.go:141] libmachine: Checking connection to Docker...
	I0916 10:22:12.030382   12265 main.go:141] libmachine: (addons-001438) Calling .GetURL
	I0916 10:22:12.031607   12265 main.go:141] libmachine: (addons-001438) DBG | Using libvirt version 6000000
	I0916 10:22:12.034008   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.034296   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:12.034325   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.034451   12265 main.go:141] libmachine: Docker is up and running!
	I0916 10:22:12.034463   12265 main.go:141] libmachine: Reticulating splines...
	I0916 10:22:12.034470   12265 client.go:171] duration metric: took 28.959474569s to LocalClient.Create
	I0916 10:22:12.034491   12265 start.go:167] duration metric: took 28.959547297s to libmachine.API.Create "addons-001438"
	I0916 10:22:12.034500   12265 start.go:293] postStartSetup for "addons-001438" (driver="kvm2")
	I0916 10:22:12.034509   12265 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:22:12.034535   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:12.034731   12265 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:22:12.034762   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:12.036747   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.037041   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:12.037068   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.037200   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:12.037344   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:12.037486   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:12.037623   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:12.123403   12265 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:22:12.127815   12265 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 10:22:12.127838   12265 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3851/.minikube/addons for local assets ...
	I0916 10:22:12.127904   12265 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3851/.minikube/files for local assets ...
	I0916 10:22:12.127926   12265 start.go:296] duration metric: took 93.420957ms for postStartSetup
	I0916 10:22:12.127955   12265 main.go:141] libmachine: (addons-001438) Calling .GetConfigRaw
	I0916 10:22:12.128519   12265 main.go:141] libmachine: (addons-001438) Calling .GetIP
	I0916 10:22:12.131232   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.131510   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:12.131547   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.131776   12265 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/config.json ...
	I0916 10:22:12.131949   12265 start.go:128] duration metric: took 29.075237515s to createHost
	I0916 10:22:12.131975   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:12.133967   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.134281   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:12.134305   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.134418   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:12.134606   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:12.134753   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:12.134877   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:12.135036   12265 main.go:141] libmachine: Using SSH client type: native
	I0916 10:22:12.135185   12265 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.72 22 <nil> <nil>}
	I0916 10:22:12.135202   12265 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 10:22:12.245734   12265 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726482132.226578519
	
	I0916 10:22:12.245757   12265 fix.go:216] guest clock: 1726482132.226578519
	I0916 10:22:12.245764   12265 fix.go:229] Guest: 2024-09-16 10:22:12.226578519 +0000 UTC Remote: 2024-09-16 10:22:12.131960304 +0000 UTC m=+29.174301435 (delta=94.618215ms)
	I0916 10:22:12.245784   12265 fix.go:200] guest clock delta is within tolerance: 94.618215ms
	I0916 10:22:12.245790   12265 start.go:83] releasing machines lock for "addons-001438", held for 29.189143417s
	I0916 10:22:12.245809   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:12.246014   12265 main.go:141] libmachine: (addons-001438) Calling .GetIP
	I0916 10:22:12.248419   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.248678   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:12.248704   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.248832   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:12.249314   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:12.249485   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:12.249586   12265 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:22:12.249653   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:12.249707   12265 ssh_runner.go:195] Run: cat /version.json
	I0916 10:22:12.249728   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:12.252249   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.252497   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.252634   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:12.252657   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.252757   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:12.252904   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:12.252922   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:12.252925   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.253038   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:12.253093   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:12.253241   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:12.253258   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:12.253386   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:12.253515   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:12.362639   12265 ssh_runner.go:195] Run: systemctl --version
	I0916 10:22:12.368512   12265 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 10:22:12.527002   12265 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0916 10:22:12.532733   12265 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 10:22:12.532791   12265 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:22:12.548743   12265 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0916 10:22:12.548773   12265 start.go:495] detecting cgroup driver to use...
	I0916 10:22:12.548843   12265 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 10:22:12.564219   12265 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 10:22:12.578224   12265 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:22:12.578276   12265 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:22:12.591434   12265 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:22:12.604674   12265 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:22:12.712713   12265 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:22:12.868881   12265 docker.go:233] disabling docker service ...
	I0916 10:22:12.868945   12265 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:22:12.883262   12265 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:22:12.896034   12265 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:22:13.009183   12265 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:22:13.123591   12265 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:22:13.137411   12265 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:22:13.155768   12265 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 10:22:13.155832   12265 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:22:13.166378   12265 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 10:22:13.166436   12265 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:22:13.177199   12265 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:22:13.187753   12265 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:22:13.198460   12265 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:22:13.209356   12265 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:22:13.220222   12265 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:22:13.237721   12265 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
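The sed invocations above patch /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl. A small standalone Go sketch of the same style of line rewrite, run against an in-memory sample config rather than the real file (the sample contents are assumptions; only the target values come from the log):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[crio.runtime]
cgroup_manager = "systemd"
[crio.image]
pause_image = "registry.k8s.io/pause:3.9"
`
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	fmt.Print(conf)
}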
	I0916 10:22:13.247992   12265 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:22:13.257214   12265 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0916 10:22:13.257274   12265 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0916 10:22:13.269843   12265 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:22:13.279361   12265 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:22:13.392424   12265 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 10:22:13.489919   12265 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 10:22:13.490002   12265 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 10:22:13.495269   12265 start.go:563] Will wait 60s for crictl version
	I0916 10:22:13.495342   12265 ssh_runner.go:195] Run: which crictl
	I0916 10:22:13.499375   12265 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:22:13.543037   12265 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0916 10:22:13.543161   12265 ssh_runner.go:195] Run: crio --version
	I0916 10:22:13.571422   12265 ssh_runner.go:195] Run: crio --version
	I0916 10:22:13.600892   12265 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0916 10:22:13.602164   12265 main.go:141] libmachine: (addons-001438) Calling .GetIP
	I0916 10:22:13.604725   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:13.605053   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:13.605090   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:13.605239   12265 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0916 10:22:13.609153   12265 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
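The bash one-liner above rewrites /etc/hosts by dropping any stale host.minikube.internal line and appending the current mapping. A minimal Go equivalent operating on an in-memory copy, purely illustrative (this is not minikube's implementation):

package main

import (
	"fmt"
	"strings"
)

// upsertHostsEntry removes lines ending in "\t"+name and appends "ip\tname".
func upsertHostsEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.168.39.1\thost.minikube.internal\n"
	fmt.Print(upsertHostsEntry(hosts, "192.168.39.1", "host.minikube.internal"))
}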
	I0916 10:22:13.621451   12265 kubeadm.go:883] updating cluster {Name:addons-001438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-001438 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.72 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 10:22:13.621560   12265 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:22:13.621616   12265 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:22:13.653616   12265 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0916 10:22:13.653695   12265 ssh_runner.go:195] Run: which lz4
	I0916 10:22:13.657722   12265 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0916 10:22:13.661843   12265 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0916 10:22:13.661873   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0916 10:22:14.968986   12265 crio.go:462] duration metric: took 1.311298771s to copy over tarball
	I0916 10:22:14.969053   12265 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0916 10:22:17.073836   12265 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.104757919s)
	I0916 10:22:17.073872   12265 crio.go:469] duration metric: took 2.104858266s to extract the tarball
	I0916 10:22:17.073881   12265 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0916 10:22:17.110316   12265 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:22:17.150207   12265 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 10:22:17.150233   12265 cache_images.go:84] Images are preloaded, skipping loading
	I0916 10:22:17.150241   12265 kubeadm.go:934] updating node { 192.168.39.72 8443 v1.31.1 crio true true} ...
	I0916 10:22:17.150343   12265 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-001438 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.72
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-001438 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 10:22:17.150424   12265 ssh_runner.go:195] Run: crio config
	I0916 10:22:17.195725   12265 cni.go:84] Creating CNI manager for ""
	I0916 10:22:17.195746   12265 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0916 10:22:17.195756   12265 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 10:22:17.195774   12265 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.72 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-001438 NodeName:addons-001438 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.72"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.72 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 10:22:17.195915   12265 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.72
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-001438"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.72
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.72"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 10:22:17.195969   12265 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:22:17.206079   12265 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:22:17.206139   12265 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 10:22:17.215719   12265 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0916 10:22:17.232125   12265 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:22:17.248126   12265 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
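The 2154-byte kubeadm.yaml written above corresponds to the config printed at kubeadm.go:187. For illustration only, a short Go sketch that renders a fragment of such a ClusterConfiguration from values visible in that config using text/template; the template shape and field names are assumptions, not minikube's actual generator:

package main

import (
	"os"
	"text/template"
)

const clusterCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: {{.ClusterName}}
controlPlaneEndpoint: {{.Endpoint}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  dnsDomain: {{.DNSDomain}}
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	data := struct {
		ClusterName, Endpoint, KubernetesVersion, DNSDomain, PodSubnet, ServiceSubnet string
	}{"mk", "control-plane.minikube.internal:8443", "v1.31.1", "cluster.local", "10.244.0.0/16", "10.96.0.0/12"}
	// Render the fragment to stdout; a real generator would write it to the
	// kubeadm.yaml staged at /var/tmp/minikube.
	template.Must(template.New("cfg").Parse(clusterCfg)).Execute(os.Stdout, data)
}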
	I0916 10:22:17.264165   12265 ssh_runner.go:195] Run: grep 192.168.39.72	control-plane.minikube.internal$ /etc/hosts
	I0916 10:22:17.267727   12265 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.72	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:22:17.279787   12265 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:22:17.393283   12265 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:22:17.410756   12265 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438 for IP: 192.168.39.72
	I0916 10:22:17.410774   12265 certs.go:194] generating shared ca certs ...
	I0916 10:22:17.410794   12265 certs.go:226] acquiring lock for ca certs: {Name:mkc5eb18b90c7d501da61b378999fd65b785fd5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:17.410949   12265 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key
	I0916 10:22:17.480758   12265 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt ...
	I0916 10:22:17.480787   12265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt: {Name:mkc291c3a986acc7f4de9183c4ef6d249d8de5a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:17.480965   12265 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key ...
	I0916 10:22:17.480980   12265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key: {Name:mk56bc8b146d891ba5f741ad0bd339fffdb85989 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:17.481075   12265 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key
	I0916 10:22:17.673219   12265 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.crt ...
	I0916 10:22:17.673250   12265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.crt: {Name:mk8d6878492eab0d99f630fc495324e3b843781a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:17.673403   12265 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key ...
	I0916 10:22:17.673414   12265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key: {Name:mk082b50320d253da8f01ad2454b69492e000fb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:17.673482   12265 certs.go:256] generating profile certs ...
	I0916 10:22:17.673531   12265 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/client.key
	I0916 10:22:17.673544   12265 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/client.crt with IP's: []
	I0916 10:22:17.921779   12265 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/client.crt ...
	I0916 10:22:17.921811   12265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/client.crt: {Name:mk9172b9e8f20da0dd399e583d4f0391784c25bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:17.921970   12265 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/client.key ...
	I0916 10:22:17.921981   12265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/client.key: {Name:mk65d84f1710f9ab616402324cb2a91f749aa3d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:17.922048   12265 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/apiserver.key.b670da03
	I0916 10:22:17.922066   12265 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/apiserver.crt.b670da03 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.72]
	I0916 10:22:17.984449   12265 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/apiserver.crt.b670da03 ...
	I0916 10:22:17.984473   12265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/apiserver.crt.b670da03: {Name:mk697c0092db030ad4df50333f6d1db035d298e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:17.984627   12265 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/apiserver.key.b670da03 ...
	I0916 10:22:17.984638   12265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/apiserver.key.b670da03: {Name:mkf74035add612ea1883fde9b662a919a8d7c5c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:17.984705   12265 certs.go:381] copying /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/apiserver.crt.b670da03 -> /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/apiserver.crt
	I0916 10:22:17.984774   12265 certs.go:385] copying /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/apiserver.key.b670da03 -> /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/apiserver.key
	I0916 10:22:17.984818   12265 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/proxy-client.key
	I0916 10:22:17.984834   12265 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/proxy-client.crt with IP's: []
	I0916 10:22:18.105094   12265 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/proxy-client.crt ...
	I0916 10:22:18.105122   12265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/proxy-client.crt: {Name:mk12379583893d02aa599284bf7c2e673e4a585f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:18.105290   12265 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/proxy-client.key ...
	I0916 10:22:18.105300   12265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/proxy-client.key: {Name:mkddc10c89aa36609a41c940a83606fa36ac69df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:18.105453   12265 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:22:18.105484   12265 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem (1082 bytes)
	I0916 10:22:18.105509   12265 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:22:18.105531   12265 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem (1679 bytes)
	I0916 10:22:18.106125   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:22:18.132592   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:22:18.173674   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:22:18.200455   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:22:18.223366   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0916 10:22:18.246242   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 10:22:18.269411   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:22:18.292157   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 10:22:18.314508   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:22:18.337365   12265 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 10:22:18.353286   12265 ssh_runner.go:195] Run: openssl version
	I0916 10:22:18.358942   12265 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:22:18.369103   12265 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:22:18.373299   12265 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:22:18.373346   12265 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:22:18.378948   12265 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 10:22:18.389436   12265 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:22:18.393342   12265 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 10:22:18.393387   12265 kubeadm.go:392] StartCluster: {Name:addons-001438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-001438 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.72 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:22:18.393452   12265 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 10:22:18.393509   12265 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 10:22:18.429056   12265 cri.go:89] found id: ""
	I0916 10:22:18.429118   12265 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 10:22:18.439123   12265 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 10:22:18.448797   12265 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 10:22:18.458281   12265 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 10:22:18.458303   12265 kubeadm.go:157] found existing configuration files:
	
	I0916 10:22:18.458357   12265 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 10:22:18.467304   12265 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 10:22:18.467373   12265 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 10:22:18.476476   12265 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 10:22:18.485402   12265 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 10:22:18.485467   12265 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 10:22:18.494643   12265 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 10:22:18.503578   12265 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 10:22:18.503657   12265 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 10:22:18.512633   12265 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 10:22:18.521391   12265 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 10:22:18.521454   12265 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 10:22:18.530381   12265 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0916 10:22:18.584992   12265 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 10:22:18.585058   12265 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 10:22:18.700906   12265 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 10:22:18.701050   12265 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 10:22:18.701195   12265 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 10:22:18.712665   12265 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 10:22:18.808124   12265 out.go:235]   - Generating certificates and keys ...
	I0916 10:22:18.808238   12265 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 10:22:18.808308   12265 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 10:22:18.808390   12265 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 10:22:18.884612   12265 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 10:22:19.103481   12265 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 10:22:19.230175   12265 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 10:22:19.422850   12265 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 10:22:19.423077   12265 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-001438 localhost] and IPs [192.168.39.72 127.0.0.1 ::1]
	I0916 10:22:19.499430   12265 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 10:22:19.499746   12265 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-001438 localhost] and IPs [192.168.39.72 127.0.0.1 ::1]
	I0916 10:22:19.689533   12265 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 10:22:19.770560   12265 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 10:22:20.159783   12265 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 10:22:20.160053   12265 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 10:22:20.575897   12265 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 10:22:20.728566   12265 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 10:22:21.092038   12265 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 10:22:21.382957   12265 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 10:22:21.446452   12265 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 10:22:21.447068   12265 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 10:22:21.451577   12265 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 10:22:21.454426   12265 out.go:235]   - Booting up control plane ...
	I0916 10:22:21.454540   12265 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 10:22:21.454614   12265 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 10:22:21.454722   12265 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 10:22:21.468531   12265 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 10:22:21.475700   12265 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 10:22:21.475767   12265 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 10:22:21.606009   12265 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 10:22:21.606143   12265 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 10:22:22.124369   12265 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 517.881759ms
	I0916 10:22:22.124492   12265 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 10:22:27.123389   12265 kubeadm.go:310] [api-check] The API server is healthy after 5.002163965s
	I0916 10:22:27.138636   12265 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 10:22:27.154171   12265 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 10:22:27.185604   12265 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 10:22:27.185839   12265 kubeadm.go:310] [mark-control-plane] Marking the node addons-001438 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 10:22:27.198602   12265 kubeadm.go:310] [bootstrap-token] Using token: os1o8m.q16efzg2rjnkpln8
	I0916 10:22:27.199966   12265 out.go:235]   - Configuring RBAC rules ...
	I0916 10:22:27.200085   12265 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 10:22:27.209733   12265 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 10:22:27.218630   12265 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 10:22:27.222473   12265 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 10:22:27.226151   12265 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 10:22:27.230516   12265 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 10:22:27.529586   12265 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 10:22:27.967178   12265 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 10:22:28.529936   12265 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 10:22:28.529960   12265 kubeadm.go:310] 
	I0916 10:22:28.530028   12265 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 10:22:28.530044   12265 kubeadm.go:310] 
	I0916 10:22:28.530137   12265 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 10:22:28.530173   12265 kubeadm.go:310] 
	I0916 10:22:28.530227   12265 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 10:22:28.530307   12265 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 10:22:28.530390   12265 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 10:22:28.530397   12265 kubeadm.go:310] 
	I0916 10:22:28.530463   12265 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 10:22:28.530472   12265 kubeadm.go:310] 
	I0916 10:22:28.530525   12265 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 10:22:28.530537   12265 kubeadm.go:310] 
	I0916 10:22:28.530609   12265 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 10:22:28.530728   12265 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 10:22:28.530832   12265 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 10:22:28.530868   12265 kubeadm.go:310] 
	I0916 10:22:28.530981   12265 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 10:22:28.531080   12265 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 10:22:28.531091   12265 kubeadm.go:310] 
	I0916 10:22:28.531215   12265 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token os1o8m.q16efzg2rjnkpln8 \
	I0916 10:22:28.531358   12265 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:18e2e9ba1c06caae6264fad234e64aee68efc10986a3ab0ed9e448768962daf7 \
	I0916 10:22:28.531389   12265 kubeadm.go:310] 	--control-plane 
	I0916 10:22:28.531397   12265 kubeadm.go:310] 
	I0916 10:22:28.531518   12265 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 10:22:28.531528   12265 kubeadm.go:310] 
	I0916 10:22:28.531639   12265 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token os1o8m.q16efzg2rjnkpln8 \
	I0916 10:22:28.531783   12265 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:18e2e9ba1c06caae6264fad234e64aee68efc10986a3ab0ed9e448768962daf7 
	I0916 10:22:28.532220   12265 kubeadm.go:310] W0916 10:22:18.568727     816 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:22:28.532498   12265 kubeadm.go:310] W0916 10:22:18.569597     816 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:22:28.532623   12265 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
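The kubeadm join commands printed above carry a --discovery-token-ca-cert-hash, which is the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info. A self-contained Go sketch of that derivation; it generates a throwaway CA on the fly so it runs anywhere and does not read minikube's actual CA:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/sha256"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"time"
)

func main() {
	// Throwaway self-signed CA, standing in for /var/lib/minikube/certs/ca.crt.
	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		BasicConstraintsValid: true,
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	cert, _ := x509.ParseCertificate(der)
	// Hash the DER-encoded SubjectPublicKeyInfo, the same material kubeadm pins.
	spki, _ := x509.MarshalPKIXPublicKey(cert.PublicKey)
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}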
	I0916 10:22:28.532635   12265 cni.go:84] Creating CNI manager for ""
	I0916 10:22:28.532642   12265 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0916 10:22:28.534239   12265 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0916 10:22:28.535682   12265 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0916 10:22:28.547306   12265 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
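The 496-byte /etc/cni/net.d/1-k8s.conflist copied above configures the bridge CNI announced in the next out.go line. A representative bridge conflist assembled in Go for illustration; the plugin mix and field values below are assumptions, since the log only shows the file's size and path:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// A plausible bridge + portmap chain using the 10.244.0.0/16 pod CIDR from the log.
	conflist := map[string]any{
		"cniVersion": "0.4.0",
		"name":       "bridge",
		"plugins": []map[string]any{
			{
				"type":             "bridge",
				"bridge":           "bridge",
				"isDefaultGateway": true,
				"ipMasq":           true,
				"hairpinMode":      true,
				"ipam": map[string]any{
					"type":   "host-local",
					"subnet": "10.244.0.0/16",
				},
			},
			{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
		},
	}
	out, _ := json.MarshalIndent(conflist, "", "  ")
	fmt.Println(string(out))
}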
	I0916 10:22:28.567029   12265 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 10:22:28.567083   12265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:22:28.567116   12265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-001438 minikube.k8s.io/updated_at=2024_09_16T10_22_28_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=addons-001438 minikube.k8s.io/primary=true
	I0916 10:22:28.599898   12265 ops.go:34] apiserver oom_adj: -16
	I0916 10:22:28.718193   12265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:22:29.219097   12265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:22:29.718331   12265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:22:30.219213   12265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:22:30.718728   12265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:22:31.218997   12265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:22:31.719218   12265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:22:32.218543   12265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:22:32.335651   12265 kubeadm.go:1113] duration metric: took 3.768632423s to wait for elevateKubeSystemPrivileges
	I0916 10:22:32.335685   12265 kubeadm.go:394] duration metric: took 13.942299744s to StartCluster
	I0916 10:22:32.335709   12265 settings.go:142] acquiring lock: {Name:mka3093e7066e81538e0a2685a28fdafde8d951b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:32.335851   12265 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3851/kubeconfig
	I0916 10:22:32.336274   12265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/kubeconfig: {Name:mkd6460249badead51f07eabf604da45528f6a24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:32.336491   12265 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 10:22:32.336522   12265 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.72 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 10:22:32.336653   12265 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
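The addons.go:507 line above dumps the full toEnable map before the per-addon setters that follow. A tiny Go sketch of reducing such a map to the addons that are actually switched on (the map below is an abbreviated, assumed subset of the one logged):

package main

import (
	"fmt"
	"sort"
)

func main() {
	toEnable := map[string]bool{
		"default-storageclass": true, "ingress": true, "ingress-dns": true,
		"metrics-server": true, "registry": true, "storage-provisioner": true,
		"dashboard": false, "gvisor": false, "headlamp": false,
	}
	var enabled []string
	for name, on := range toEnable {
		if on {
			enabled = append(enabled, name)
		}
	}
	sort.Strings(enabled) // stable output for logging
	fmt.Println("enabling addons:", enabled)
}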
	I0916 10:22:32.336724   12265 config.go:182] Loaded profile config "addons-001438": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:22:32.336769   12265 addons.go:69] Setting default-storageclass=true in profile "addons-001438"
	I0916 10:22:32.336779   12265 addons.go:69] Setting ingress-dns=true in profile "addons-001438"
	I0916 10:22:32.336787   12265 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-001438"
	I0916 10:22:32.336780   12265 addons.go:69] Setting ingress=true in profile "addons-001438"
	I0916 10:22:32.336793   12265 addons.go:69] Setting cloud-spanner=true in profile "addons-001438"
	I0916 10:22:32.336813   12265 addons.go:69] Setting inspektor-gadget=true in profile "addons-001438"
	I0916 10:22:32.336820   12265 addons.go:69] Setting gcp-auth=true in profile "addons-001438"
	I0916 10:22:32.336832   12265 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-001438"
	I0916 10:22:32.336835   12265 addons.go:234] Setting addon cloud-spanner=true in "addons-001438"
	I0916 10:22:32.336828   12265 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-001438"
	I0916 10:22:32.336844   12265 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-001438"
	I0916 10:22:32.336825   12265 addons.go:234] Setting addon inspektor-gadget=true in "addons-001438"
	I0916 10:22:32.336853   12265 addons.go:69] Setting registry=true in profile "addons-001438"
	I0916 10:22:32.336867   12265 addons.go:234] Setting addon registry=true in "addons-001438"
	I0916 10:22:32.336883   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.336888   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.336896   12265 addons.go:69] Setting helm-tiller=true in profile "addons-001438"
	I0916 10:22:32.336908   12265 addons.go:234] Setting addon helm-tiller=true in "addons-001438"
	I0916 10:22:32.336937   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.336940   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.336844   12265 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-001438"
	I0916 10:22:32.337250   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.337262   12265 addons.go:69] Setting volcano=true in profile "addons-001438"
	I0916 10:22:32.337273   12265 addons.go:234] Setting addon volcano=true in "addons-001438"
	I0916 10:22:32.337281   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.337295   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.337315   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.337328   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.336808   12265 addons.go:234] Setting addon ingress=true in "addons-001438"
	I0916 10:22:32.337347   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.337348   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.337365   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.337367   12265 addons.go:69] Setting volumesnapshots=true in profile "addons-001438"
	I0916 10:22:32.337379   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.337381   12265 addons.go:234] Setting addon volumesnapshots=true in "addons-001438"
	I0916 10:22:32.337412   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.336888   12265 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-001438"
	I0916 10:22:32.337442   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.336769   12265 addons.go:69] Setting yakd=true in profile "addons-001438"
	I0916 10:22:32.337489   12265 addons.go:234] Setting addon yakd=true in "addons-001438"
	I0916 10:22:32.337633   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.337660   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.336835   12265 addons.go:69] Setting metrics-server=true in profile "addons-001438"
	I0916 10:22:32.337353   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.337714   12265 addons.go:234] Setting addon metrics-server=true in "addons-001438"
	I0916 10:22:32.337741   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.337700   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.337795   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.336844   12265 mustload.go:65] Loading cluster: addons-001438
	I0916 10:22:32.336824   12265 addons.go:69] Setting storage-provisioner=true in profile "addons-001438"
	I0916 10:22:32.337840   12265 addons.go:234] Setting addon storage-provisioner=true in "addons-001438"
	I0916 10:22:32.337328   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.337881   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.336804   12265 addons.go:234] Setting addon ingress-dns=true in "addons-001438"
	I0916 10:22:32.337251   12265 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-001438"
	I0916 10:22:32.337944   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.338072   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.338099   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.338127   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.338301   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.338331   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.338413   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.338421   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.338448   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.338455   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.338446   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.338765   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.338792   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.338818   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.338845   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.338995   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.339304   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.339363   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.342405   12265 out.go:177] * Verifying Kubernetes components...
	I0916 10:22:32.343665   12265 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:22:32.357106   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34833
	I0916 10:22:32.357444   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35947
	I0916 10:22:32.357655   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37677
	I0916 10:22:32.357797   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.357897   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.358211   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.358403   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.358419   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.358562   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.358574   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.358633   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37893
	I0916 10:22:32.358790   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.358949   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.358960   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.359007   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44299
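(Editor's note, not part of the captured log.) Each "Launching plugin server for driver kvm2" / "Plugin server listening at address 127.0.0.1:NNNNN" pair above is libmachine starting the kvm2 driver binary as a separate process and then talking to it over a local RPC connection; the following ".GetVersion", ".SetConfigRaw" and ".GetMachineName" lines are individual RPC calls made per addon. A minimal sketch of that launch-then-dial pattern using Go's standard net/rpc — hypothetical type and method names, not minikube's actual rpcdriver code:

    // rpc_plugin_sketch.go — simplified illustration of the pattern seen in the log:
    // a driver "plugin" serves RPC on an OS-chosen localhost port, and the host
    // process dials the advertised address and issues versioned calls.
    package main

    import (
        "fmt"
        "log"
        "net"
        "net/rpc"
    )

    // Driver stands in for the machine driver exposed over RPC.
    type Driver struct{}

    // GetVersion mirrors the "Calling .GetVersion ... Using API Version 1" exchange.
    func (d *Driver) GetVersion(_ int, reply *int) error {
        *reply = 1
        return nil
    }

    func main() {
        // "Launching plugin server": listen on a random localhost port.
        ln, err := net.Listen("tcp", "127.0.0.1:0")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("Plugin server listening at address", ln.Addr())

        srv := rpc.NewServer()
        if err := srv.Register(&Driver{}); err != nil {
            log.Fatal(err)
        }
        go srv.Accept(ln)

        // Host side: dial the advertised address and make an RPC call.
        client, err := rpc.Dial("tcp", ln.Addr().String())
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        var version int
        if err := client.Call("Driver.GetVersion", 0, &version); err != nil {
            log.Fatal(err)
        }
        fmt.Println("Using API Version", version)
    }

The many interleaved, slightly out-of-order timestamps above are expected: one such client is being set up concurrently for each addon being enabled.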
	I0916 10:22:32.369699   12265 config.go:182] Loaded profile config "addons-001438": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:22:32.369748   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.369818   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.370020   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.370060   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.370069   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.370101   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.370194   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.370269   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.370379   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.370390   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.370789   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.370827   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.370908   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.370969   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.371094   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.371111   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.371475   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.371508   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.371573   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.371638   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.371663   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.371731   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.386697   12265 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-001438"
	I0916 10:22:32.386747   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.386763   12265 addons.go:234] Setting addon default-storageclass=true in "addons-001438"
	I0916 10:22:32.386810   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.387114   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.387173   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.387252   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.387291   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.408433   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37283
	I0916 10:22:32.409200   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.409836   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.409856   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.410249   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.410830   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.410872   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.411145   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42803
	I0916 10:22:32.411578   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.413298   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.413319   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.414168   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34147
	I0916 10:22:32.414190   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45747
	I0916 10:22:32.414292   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36809
	I0916 10:22:32.414570   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.414671   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.415178   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.415195   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.415681   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.416214   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.416252   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.416442   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42029
	I0916 10:22:32.416592   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.417197   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.417231   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.417415   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40175
	I0916 10:22:32.417454   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.417595   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.417608   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.417843   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.417917   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.418037   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.418050   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.418410   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.418443   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.418409   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.418501   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.419031   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.419065   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.419266   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.419281   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.419404   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.419414   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.419702   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.419847   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.420545   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.421091   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.421133   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.421574   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.421979   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40845
	I0916 10:22:32.422963   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.423382   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.423399   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.423697   12265 out.go:177]   - Using image docker.io/registry:2.8.3
	I0916 10:22:32.423813   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.424320   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.424354   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.425846   12265 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0916 10:22:32.425941   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45967
	I0916 10:22:32.426062   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42039
	I0916 10:22:32.426213   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40085
	I0916 10:22:32.426381   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.426757   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.426931   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.426942   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.426976   12265 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0916 10:22:32.426992   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0916 10:22:32.427011   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.427391   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.427470   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.427489   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.427946   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.428354   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.428385   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.428598   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.428889   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.428924   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.429188   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.429202   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.429517   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.431934   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33361
	I0916 10:22:32.431987   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.432541   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.432563   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.432751   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.432883   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.432998   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.433120   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:32.433712   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.435531   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.435730   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:32.435742   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:32.435888   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:32.435899   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:32.435907   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:32.435913   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:32.436070   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:32.436085   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	W0916 10:22:32.436166   12265 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0916 10:22:32.440699   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35285
	I0916 10:22:32.441072   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.441617   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.441644   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.441979   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.442497   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.442531   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.450769   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36009
	I0916 10:22:32.451259   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.451700   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.451718   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.452549   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.453092   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.453146   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.454430   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42107
	I0916 10:22:32.454743   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.455061   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.455149   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45965
	I0916 10:22:32.455842   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.455847   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.455860   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.455871   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.455922   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.456243   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.456542   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.456622   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.456639   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.456747   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.457901   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34395
	I0916 10:22:32.458037   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.458209   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.458254   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.458704   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.458721   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.459089   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.459296   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.459533   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.460121   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.460511   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.460545   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.460978   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41771
	I0916 10:22:32.461180   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.461244   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.461735   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.461753   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.461805   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.462195   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46479
	I0916 10:22:32.462331   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.462809   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.464034   12265 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0916 10:22:32.464150   12265 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0916 10:22:32.464278   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.464668   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.464696   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.465237   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.466010   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.465566   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42201
	I0916 10:22:32.466246   12265 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0916 10:22:32.466259   12265 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0916 10:22:32.466276   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.467014   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.467145   12265 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 10:22:32.467235   12265 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0916 10:22:32.467359   12265 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0916 10:22:32.467370   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0916 10:22:32.467385   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.467696   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.467711   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.468100   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.468152   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.468326   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.468710   12265 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0916 10:22:32.468725   12265 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0916 10:22:32.468742   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.468966   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39539
	I0916 10:22:32.469146   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.469463   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.469917   12265 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 10:22:32.469918   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.470000   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.470971   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43697
	I0916 10:22:32.471473   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.471695   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.472001   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.472015   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.472269   12265 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0916 10:22:32.472471   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.472523   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43687
	I0916 10:22:32.472664   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.472783   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.472993   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.473106   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.473134   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.473329   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.473377   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.473597   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.473743   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.473790   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.473851   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.474147   12265 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0916 10:22:32.474163   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0916 10:22:32.474178   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.474793   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.474941   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.474955   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.475234   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:32.475510   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.475619   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.475650   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.475665   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.475824   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.476100   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.476264   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.476604   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.476644   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.476828   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:32.476940   12265 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0916 10:22:32.477612   12265 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 10:22:32.478260   12265 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 10:22:32.478276   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0916 10:22:32.478291   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.478585   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.478604   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.478880   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.479035   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.479168   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.479299   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:32.479921   12265 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:22:32.479937   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 10:22:32.479951   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.480259   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.480742   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.481958   12265 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0916 10:22:32.482834   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43787
	I0916 10:22:32.482998   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.483118   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.483310   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.483473   12265 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0916 10:22:32.483494   12265 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0916 10:22:32.483512   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.483802   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.483828   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.483888   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.483903   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.483899   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.483930   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.484092   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.484159   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.484194   12265 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0916 10:22:32.484411   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.484581   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.484636   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.484681   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:32.484892   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.484958   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.485096   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.485218   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.485248   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.485262   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.485372   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:32.485494   12265 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0916 10:22:32.485505   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0916 10:22:32.485519   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.485781   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.486028   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.486181   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.486318   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:32.487186   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.487422   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.487675   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.487695   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.487742   12265 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 10:22:32.487752   12265 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 10:22:32.487764   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.487810   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.487995   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.488225   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.488378   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:32.489702   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.490168   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.490188   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.490394   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.490571   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.490713   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.490823   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:32.492068   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.492458   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.492479   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.492686   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.492906   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.492915   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39577
	I0916 10:22:32.493044   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.493239   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:32.493450   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.493933   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.493950   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.494562   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.494891   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.496932   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.498147   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46021
	I0916 10:22:32.498828   12265 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0916 10:22:32.499232   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.499608   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.499634   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.499936   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.500124   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.500215   12265 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0916 10:22:32.500241   12265 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0916 10:22:32.500262   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.501763   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.503323   12265 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0916 10:22:32.503738   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.504260   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.504287   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.504422   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.504578   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.504721   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.504800   12265 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0916 10:22:32.504813   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0916 10:22:32.504828   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.504844   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:32.507073   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35053
	I0916 10:22:32.507489   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.507971   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.507994   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.508014   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37447
	I0916 10:22:32.508383   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.508455   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44827
	I0916 10:22:32.508996   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.509012   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.509054   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.509082   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.509517   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.509552   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.509551   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.509573   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.509882   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.510086   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.510151   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.510169   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.510570   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.510576   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.510696   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.510739   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.510822   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.510947   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	W0916 10:22:32.511685   12265 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:43352->192.168.39.72:22: read: connection reset by peer
	I0916 10:22:32.511711   12265 retry.go:31] will retry after 323.390168ms: ssh: handshake failed: read tcp 192.168.39.1:43352->192.168.39.72:22: read: connection reset by peer
	I0916 10:22:32.513110   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.513548   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.515216   12265 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0916 10:22:32.516467   12265 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0916 10:22:32.517228   12265 out.go:177]   - Using image docker.io/busybox:stable
	I0916 10:22:32.518463   12265 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0916 10:22:32.519691   12265 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0916 10:22:32.521193   12265 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0916 10:22:32.521287   12265 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 10:22:32.521309   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0916 10:22:32.521330   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.523957   12265 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0916 10:22:32.524563   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.524915   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.524939   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.525078   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.525271   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.525408   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.525548   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	W0916 10:22:32.526174   12265 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:43362->192.168.39.72:22: read: connection reset by peer
	I0916 10:22:32.526199   12265 retry.go:31] will retry after 208.869548ms: ssh: handshake failed: read tcp 192.168.39.1:43362->192.168.39.72:22: read: connection reset by peer
	I0916 10:22:32.526327   12265 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0916 10:22:32.527568   12265 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0916 10:22:32.528811   12265 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0916 10:22:32.530140   12265 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0916 10:22:32.530154   12265 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0916 10:22:32.530169   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.533281   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.533666   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.533688   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.533886   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.534072   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.534227   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.534367   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:32.700911   12265 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:22:32.700984   12265 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
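(Editor's note, not part of the captured log.) The /bin/bash command above rewrites the coredns ConfigMap in place: the sed pipeline inserts a hosts block mapping host.minikube.internal to the host-side bridge IP (192.168.39.1 on this run) just before the forward directive, adds a log directive before errors, and feeds the result back through kubectl replace. Assuming the stock kubeadm Corefile layout (a sketch, with unrelated stanzas elided — not output captured from this run), the patched server block ends up roughly like:

    .:53 {
        log
        errors
        # ...other stanzas unchanged...
        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        # ...
    }

The fallthrough keeps every other name on the normal in-cluster resolution path, so only host.minikube.internal is answered from the static hosts entry.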
	I0916 10:22:32.785482   12265 node_ready.go:35] waiting up to 6m0s for node "addons-001438" to be "Ready" ...
	I0916 10:22:32.822842   12265 node_ready.go:49] node "addons-001438" has status "Ready":"True"
	I0916 10:22:32.822881   12265 node_ready.go:38] duration metric: took 37.361645ms for node "addons-001438" to be "Ready" ...
	I0916 10:22:32.822895   12265 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:22:32.861506   12265 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0916 10:22:32.861543   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0916 10:22:32.862634   12265 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-001438" in "kube-system" namespace to be "Ready" ...
	I0916 10:22:32.929832   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 10:22:32.943014   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:22:32.952437   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 10:22:32.991347   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0916 10:22:32.995067   12265 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0916 10:22:32.995096   12265 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0916 10:22:33.036627   12265 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0916 10:22:33.036657   12265 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0916 10:22:33.036890   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0916 10:22:33.060821   12265 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0916 10:22:33.060843   12265 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0916 10:22:33.069120   12265 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0916 10:22:33.069156   12265 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0916 10:22:33.070018   12265 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0916 10:22:33.070038   12265 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0916 10:22:33.073512   12265 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0916 10:22:33.073535   12265 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0916 10:22:33.137058   12265 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0916 10:22:33.137088   12265 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0916 10:22:33.226855   12265 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0916 10:22:33.226884   12265 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0916 10:22:33.270492   12265 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0916 10:22:33.270513   12265 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0916 10:22:33.316169   12265 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0916 10:22:33.316195   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0916 10:22:33.316355   12265 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0916 10:22:33.316373   12265 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0916 10:22:33.316509   12265 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0916 10:22:33.316522   12265 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0916 10:22:33.327110   12265 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0916 10:22:33.327126   12265 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0916 10:22:33.354597   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0916 10:22:33.420390   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 10:22:33.435680   12265 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0916 10:22:33.435717   12265 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0916 10:22:33.439954   12265 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0916 10:22:33.439978   12265 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0916 10:22:33.444981   12265 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 10:22:33.445002   12265 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0916 10:22:33.522524   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0916 10:22:33.536060   12265 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0916 10:22:33.536089   12265 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0916 10:22:33.569830   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 10:22:33.590335   12265 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0916 10:22:33.590366   12265 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0916 10:22:33.601121   12265 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0916 10:22:33.601154   12265 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0916 10:22:33.623197   12265 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:22:33.623219   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0916 10:22:33.629904   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0916 10:22:33.693404   12265 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0916 10:22:33.693424   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0916 10:22:33.747802   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:22:33.761431   12265 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0916 10:22:33.761461   12265 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0916 10:22:33.774811   12265 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0916 10:22:33.774845   12265 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0916 10:22:33.825893   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0916 10:22:33.895859   12265 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0916 10:22:33.895893   12265 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0916 10:22:34.018321   12265 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0916 10:22:34.018345   12265 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0916 10:22:34.260751   12265 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0916 10:22:34.260776   12265 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0916 10:22:34.288705   12265 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0916 10:22:34.288733   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0916 10:22:34.575904   12265 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0916 10:22:34.575932   12265 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0916 10:22:34.578707   12265 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0916 10:22:34.578728   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0916 10:22:34.872174   12265 pod_ready.go:103] pod "etcd-addons-001438" in "kube-system" namespace has status "Ready":"False"
	I0916 10:22:35.002110   12265 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0916 10:22:35.002133   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0916 10:22:35.053333   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0916 10:22:35.173148   12265 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.47211504s)
	I0916 10:22:35.173178   12265 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
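	The two log lines above record the CoreDNS ConfigMap being piped through sed so that host.minikube.internal resolves to the host-side IP (192.168.39.1). A minimal way to confirm the injected block after the fact, assuming ordinary kubectl access to the same cluster (a sketch, not part of the test run):

	    kubectl --context addons-001438 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	    # the patched Corefile should now contain the block inserted by the sed above:
	    #     hosts {
	    #        192.168.39.1 host.minikube.internal
	    #        fallthrough
	    #     }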
	I0916 10:22:35.173148   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.243289168s)
	I0916 10:22:35.173338   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:35.173355   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:35.173706   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:35.173723   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:35.173737   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:35.173751   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:35.173762   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:35.174037   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:35.174053   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:35.219712   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:35.219745   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:35.220033   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:35.220084   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:35.326225   12265 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0916 10:22:35.326245   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0916 10:22:35.667079   12265 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 10:22:35.667102   12265 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0916 10:22:35.677467   12265 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-001438" context rescaled to 1 replicas
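	kapi.go:214 above reports the coredns deployment being rescaled to a single replica. Roughly the same effect can be reproduced by hand with kubectl (a sketch assuming direct cluster access, rather than the in-process client the harness uses):

	    kubectl --context addons-001438 -n kube-system scale deployment coredns --replicas=1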
	I0916 10:22:36.005922   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 10:22:36.880549   12265 pod_ready.go:103] pod "etcd-addons-001438" in "kube-system" namespace has status "Ready":"False"
	I0916 10:22:37.248962   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.296492058s)
	I0916 10:22:37.249022   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:37.249036   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:37.249050   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.306004364s)
	I0916 10:22:37.249050   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.257675255s)
	I0916 10:22:37.249138   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:37.249160   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:37.249084   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:37.249221   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:37.249330   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:37.249355   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:37.249374   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:37.249434   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:37.249460   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:37.249476   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:37.249440   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:37.249499   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:37.249529   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:37.249541   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:37.249485   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:37.249593   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:37.249655   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:37.249676   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:37.251028   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:37.251216   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:37.251214   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:37.251232   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:37.251278   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:37.251288   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:38.978538   12265 pod_ready.go:93] pod "etcd-addons-001438" in "kube-system" namespace has status "Ready":"True"
	I0916 10:22:38.978561   12265 pod_ready.go:82] duration metric: took 6.115904528s for pod "etcd-addons-001438" in "kube-system" namespace to be "Ready" ...
	I0916 10:22:38.978572   12265 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-001438" in "kube-system" namespace to be "Ready" ...
	I0916 10:22:39.179661   12265 pod_ready.go:93] pod "kube-apiserver-addons-001438" in "kube-system" namespace has status "Ready":"True"
	I0916 10:22:39.179691   12265 pod_ready.go:82] duration metric: took 201.112317ms for pod "kube-apiserver-addons-001438" in "kube-system" namespace to be "Ready" ...
	I0916 10:22:39.179705   12265 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-001438" in "kube-system" namespace to be "Ready" ...
	I0916 10:22:39.377607   12265 pod_ready.go:93] pod "kube-controller-manager-addons-001438" in "kube-system" namespace has status "Ready":"True"
	I0916 10:22:39.377640   12265 pod_ready.go:82] duration metric: took 197.926831ms for pod "kube-controller-manager-addons-001438" in "kube-system" namespace to be "Ready" ...
	I0916 10:22:39.377656   12265 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-66flj" in "kube-system" namespace to be "Ready" ...
	I0916 10:22:39.509747   12265 pod_ready.go:93] pod "kube-proxy-66flj" in "kube-system" namespace has status "Ready":"True"
	I0916 10:22:39.509775   12265 pod_ready.go:82] duration metric: took 132.110984ms for pod "kube-proxy-66flj" in "kube-system" namespace to be "Ready" ...
	I0916 10:22:39.509789   12265 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-001438" in "kube-system" namespace to be "Ready" ...
	I0916 10:22:39.633441   12265 pod_ready.go:93] pod "kube-scheduler-addons-001438" in "kube-system" namespace has status "Ready":"True"
	I0916 10:22:39.633475   12265 pod_ready.go:82] duration metric: took 123.676997ms for pod "kube-scheduler-addons-001438" in "kube-system" namespace to be "Ready" ...
	I0916 10:22:39.633487   12265 pod_ready.go:39] duration metric: took 6.810577473s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:22:39.633508   12265 api_server.go:52] waiting for apiserver process to appear ...
	I0916 10:22:39.633572   12265 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
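	The pgrep invocation above is the liveness check for the apiserver process; the flags are worth spelling out, since the same one-liner is handy when reproducing the wait manually on the node:

	    # -f matches against the full command line, -x requires the pattern to match
	    # the whole line, and -n returns only the newest matching process
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'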
	I0916 10:22:39.633966   12265 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0916 10:22:39.634003   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:39.637511   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:39.638022   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:39.638050   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:39.638265   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:39.638449   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:39.638594   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:39.638741   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:40.248183   12265 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0916 10:22:40.342621   12265 addons.go:234] Setting addon gcp-auth=true in "addons-001438"
	I0916 10:22:40.342682   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:40.343054   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:40.343105   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:40.358807   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35001
	I0916 10:22:40.359276   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:40.359793   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:40.359818   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:40.360152   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:40.360750   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:40.360794   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:40.375531   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39519
	I0916 10:22:40.375999   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:40.376410   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:40.376431   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:40.376712   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:40.376880   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:40.378466   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:40.378706   12265 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0916 10:22:40.378736   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:40.381488   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:40.381978   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:40.381997   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:40.382162   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:40.382374   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:40.382527   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:40.382728   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:41.185716   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.148787276s)
	I0916 10:22:41.185775   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.185787   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.185792   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.831162948s)
	I0916 10:22:41.185821   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.185842   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.185899   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.76548291s)
	I0916 10:22:41.185927   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.185929   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.663383888s)
	I0916 10:22:41.185940   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.185948   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.185957   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.186031   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.616165984s)
	I0916 10:22:41.186072   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.186084   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.186162   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.55623363s)
	I0916 10:22:41.186179   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.186188   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.186223   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:41.186233   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.186246   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.186249   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.186259   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.186272   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.186279   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.186259   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.186321   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.438489786s)
	W0916 10:22:41.186349   12265 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0916 10:22:41.186370   12265 retry.go:31] will retry after 282.502814ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
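	The failure and scheduled retry above are the usual CRD-establishment race: the VolumeSnapshotClass object is applied in the same kubectl invocation that creates its CRD, and the API server has not yet registered the new kind, hence "no matches for kind ... ensure CRDs are installed first". The retry visible at 10:22:41.469 below re-runs the same manifests with apply --force. When applying these manifests by hand, the race can be sidestepped by waiting for the CRD to become Established before applying the class (a sketch with hypothetical local file names, not the command the harness runs):

	    kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	    kubectl wait --for=condition=Established \
	      crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	    kubectl apply -f csi-hostpath-snapshotclass.yaml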
	I0916 10:22:41.186323   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.186452   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.360528333s)
	I0916 10:22:41.186474   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.186483   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.186530   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:41.186552   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:41.186580   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:41.186592   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.133220852s)
	I0916 10:22:41.186602   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.186608   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.186609   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.186627   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.186684   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.186691   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.186698   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.186704   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.186797   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:41.186819   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.186826   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.186833   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.186851   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:41.186871   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.186884   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.186893   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.186901   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.186907   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.186936   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.186943   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.186990   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.186999   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.187006   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.187013   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.187860   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:41.187892   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.187899   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.187906   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.187912   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.188173   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:41.188191   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.188200   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.188204   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.188209   12265 addons.go:475] Verifying addon metrics-server=true in "addons-001438"
	I0916 10:22:41.188211   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.188241   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.188250   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.188259   12265 addons.go:475] Verifying addon ingress=true in "addons-001438"
	I0916 10:22:41.190004   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:41.190036   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.190042   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.190099   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:41.190137   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:41.190141   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.190152   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.190155   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.190159   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.190162   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.190167   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.190170   12265 addons.go:475] Verifying addon registry=true in "addons-001438"
	I0916 10:22:41.190534   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:41.190570   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.190579   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.191944   12265 out.go:177] * Verifying registry addon...
	I0916 10:22:41.191953   12265 out.go:177] * Verifying ingress addon...
	I0916 10:22:41.192858   12265 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-001438 service yakd-dashboard -n yakd-dashboard
	
	I0916 10:22:41.193752   12265 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0916 10:22:41.193752   12265 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0916 10:22:41.245022   12265 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0916 10:22:41.245042   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:41.245048   12265 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0916 10:22:41.245062   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
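	From here on the log alternates between two pollers, one per label selector, until the matching pods report Ready. Expressed with plain kubectl the same waits look roughly like this (a sketch; namespaces and selectors are taken from the kapi.go lines above, the timeouts are illustrative):

	    kubectl -n kube-system wait pod \
	      -l kubernetes.io/minikube-addons=registry --for=condition=Ready --timeout=6m
	    kubectl -n ingress-nginx wait pod \
	      -l app.kubernetes.io/name=ingress-nginx --for=condition=Ready --timeout=6m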
	I0916 10:22:41.270906   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.270924   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.271190   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.271210   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.469044   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:22:41.699366   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:41.699576   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:42.200823   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:42.201220   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:42.707853   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:42.708238   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:43.062276   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.056308906s)
	I0916 10:22:43.062328   12265 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.428733709s)
	I0916 10:22:43.062359   12265 api_server.go:72] duration metric: took 10.72580389s to wait for apiserver process to appear ...
	I0916 10:22:43.062372   12265 api_server.go:88] waiting for apiserver healthz status ...
	I0916 10:22:43.062397   12265 api_server.go:253] Checking apiserver healthz at https://192.168.39.72:8443/healthz ...
	I0916 10:22:43.062411   12265 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.683683571s)
	I0916 10:22:43.062334   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:43.062455   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:43.062799   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:43.062819   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:43.062830   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:43.062838   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:43.062846   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:43.063060   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:43.063085   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:43.063094   12265 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-001438"
	I0916 10:22:43.064955   12265 out.go:177] * Verifying csi-hostpath-driver addon...
	I0916 10:22:43.065015   12265 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 10:22:43.066605   12265 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0916 10:22:43.067509   12265 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0916 10:22:43.067847   12265 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0916 10:22:43.067859   12265 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0916 10:22:43.093271   12265 api_server.go:279] https://192.168.39.72:8443/healthz returned 200:
	ok
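	The 200 response above comes from the apiserver's health endpoint at https://192.168.39.72:8443/healthz. Assuming the default RBAC that exposes /healthz to anonymous requests, the same probe can be made directly from the host (a sketch; -k skips verification of the cluster's self-signed serving certificate):

	    curl -k https://192.168.39.72:8443/healthz
	    # expected output: ok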
	I0916 10:22:43.093820   12265 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0916 10:22:43.093839   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:43.095011   12265 api_server.go:141] control plane version: v1.31.1
	I0916 10:22:43.095033   12265 api_server.go:131] duration metric: took 32.653755ms to wait for apiserver health ...
	I0916 10:22:43.095043   12265 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 10:22:43.123828   12265 system_pods.go:59] 19 kube-system pods found
	I0916 10:22:43.123858   12265 system_pods.go:61] "coredns-7c65d6cfc9-j5ndn" [207f35d6-991e-4f00-8881-a877648e3c38] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0916 10:22:43.123864   12265 system_pods.go:61] "coredns-7c65d6cfc9-pzm59" [f910982f-9f91-4da6-ba1d-d7eb1a992baa] Running
	I0916 10:22:43.123871   12265 system_pods.go:61] "csi-hostpath-attacher-0" [15e8a432-87ee-461f-96ce-576b2587d960] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0916 10:22:43.123876   12265 system_pods.go:61] "csi-hostpath-resizer-0" [db26d555-4e0f-4738-bd80-a27dc57d7534] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0916 10:22:43.123883   12265 system_pods.go:61] "csi-hostpathplugin-xgk62" [dd216434-c2ed-4884-92ea-f80bec8e2fcc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0916 10:22:43.123886   12265 system_pods.go:61] "etcd-addons-001438" [5c7e7021-4329-43f8-90cc-196afcb3b9f5] Running
	I0916 10:22:43.123903   12265 system_pods.go:61] "kube-apiserver-addons-001438" [b8c3f368-41ad-4840-aa92-014d25030925] Running
	I0916 10:22:43.123906   12265 system_pods.go:61] "kube-controller-manager-addons-001438" [9606f8aa-be05-4d1e-b5c9-9e625663d5de] Running
	I0916 10:22:43.123913   12265 system_pods.go:61] "kube-ingress-dns-minikube" [10ccbaa1-333f-4586-a1d5-dc73421e2bd1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0916 10:22:43.123917   12265 system_pods.go:61] "kube-proxy-66flj" [56e16daa-1626-4b83-a183-7b9ad90ea2d6] Running
	I0916 10:22:43.123923   12265 system_pods.go:61] "kube-scheduler-addons-001438" [a9909fcc-06cd-4e4e-b6be-d0a54a31df94] Running
	I0916 10:22:43.123928   12265 system_pods.go:61] "metrics-server-84c5f94fbc-9hj9f" [76382ab7-9b7a-4bd6-b19c-7a77ba051f1d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 10:22:43.123935   12265 system_pods.go:61] "nvidia-device-plugin-daemonset-j6n9b" [83260537-f74d-40a8-bcbc-db785a97aac8] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0916 10:22:43.123943   12265 system_pods.go:61] "registry-66c9cd494c-jq22w" [04e85c00-e6fb-4eee-96aa-273a4f6f273f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 10:22:43.123948   12265 system_pods.go:61] "registry-proxy-kk7lc" [2f0e1170-c654-4939-91ca-cd5b2bf6ae2a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 10:22:43.123955   12265 system_pods.go:61] "snapshot-controller-56fcc65765-8nq94" [7b65ff07-8e47-4c5a-883c-f6470e930f61] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 10:22:43.123960   12265 system_pods.go:61] "snapshot-controller-56fcc65765-pv2sr" [85f5bbdb-96af-4f7d-aef3-644db7166242] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 10:22:43.123967   12265 system_pods.go:61] "storage-provisioner" [c435c6db-b60d-4298-9687-bb885202e358] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0916 10:22:43.123972   12265 system_pods.go:61] "tiller-deploy-b48cc5f79-b76fb" [a96b112c-4171-4416-9e14-ac1f69fd033e] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0916 10:22:43.123980   12265 system_pods.go:74] duration metric: took 28.931422ms to wait for pod list to return data ...
	I0916 10:22:43.123988   12265 default_sa.go:34] waiting for default service account to be created ...
	I0916 10:22:43.137057   12265 default_sa.go:45] found service account: "default"
	I0916 10:22:43.137084   12265 default_sa.go:55] duration metric: took 13.088793ms for default service account to be created ...
	I0916 10:22:43.137095   12265 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 10:22:43.166020   12265 system_pods.go:86] 19 kube-system pods found
	I0916 10:22:43.166054   12265 system_pods.go:89] "coredns-7c65d6cfc9-j5ndn" [207f35d6-991e-4f00-8881-a877648e3c38] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0916 10:22:43.166063   12265 system_pods.go:89] "coredns-7c65d6cfc9-pzm59" [f910982f-9f91-4da6-ba1d-d7eb1a992baa] Running
	I0916 10:22:43.166075   12265 system_pods.go:89] "csi-hostpath-attacher-0" [15e8a432-87ee-461f-96ce-576b2587d960] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0916 10:22:43.166088   12265 system_pods.go:89] "csi-hostpath-resizer-0" [db26d555-4e0f-4738-bd80-a27dc57d7534] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0916 10:22:43.166100   12265 system_pods.go:89] "csi-hostpathplugin-xgk62" [dd216434-c2ed-4884-92ea-f80bec8e2fcc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0916 10:22:43.166108   12265 system_pods.go:89] "etcd-addons-001438" [5c7e7021-4329-43f8-90cc-196afcb3b9f5] Running
	I0916 10:22:43.166118   12265 system_pods.go:89] "kube-apiserver-addons-001438" [b8c3f368-41ad-4840-aa92-014d25030925] Running
	I0916 10:22:43.166126   12265 system_pods.go:89] "kube-controller-manager-addons-001438" [9606f8aa-be05-4d1e-b5c9-9e625663d5de] Running
	I0916 10:22:43.166136   12265 system_pods.go:89] "kube-ingress-dns-minikube" [10ccbaa1-333f-4586-a1d5-dc73421e2bd1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0916 10:22:43.166145   12265 system_pods.go:89] "kube-proxy-66flj" [56e16daa-1626-4b83-a183-7b9ad90ea2d6] Running
	I0916 10:22:43.166154   12265 system_pods.go:89] "kube-scheduler-addons-001438" [a9909fcc-06cd-4e4e-b6be-d0a54a31df94] Running
	I0916 10:22:43.166164   12265 system_pods.go:89] "metrics-server-84c5f94fbc-9hj9f" [76382ab7-9b7a-4bd6-b19c-7a77ba051f1d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 10:22:43.166171   12265 system_pods.go:89] "nvidia-device-plugin-daemonset-j6n9b" [83260537-f74d-40a8-bcbc-db785a97aac8] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0916 10:22:43.166178   12265 system_pods.go:89] "registry-66c9cd494c-jq22w" [04e85c00-e6fb-4eee-96aa-273a4f6f273f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 10:22:43.166183   12265 system_pods.go:89] "registry-proxy-kk7lc" [2f0e1170-c654-4939-91ca-cd5b2bf6ae2a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 10:22:43.166199   12265 system_pods.go:89] "snapshot-controller-56fcc65765-8nq94" [7b65ff07-8e47-4c5a-883c-f6470e930f61] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 10:22:43.166207   12265 system_pods.go:89] "snapshot-controller-56fcc65765-pv2sr" [85f5bbdb-96af-4f7d-aef3-644db7166242] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 10:22:43.166217   12265 system_pods.go:89] "storage-provisioner" [c435c6db-b60d-4298-9687-bb885202e358] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0916 10:22:43.166224   12265 system_pods.go:89] "tiller-deploy-b48cc5f79-b76fb" [a96b112c-4171-4416-9e14-ac1f69fd033e] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0916 10:22:43.166231   12265 system_pods.go:126] duration metric: took 29.130167ms to wait for k8s-apps to be running ...
	I0916 10:22:43.166241   12265 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 10:22:43.166284   12265 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:22:43.202957   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:43.204822   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:43.205240   12265 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0916 10:22:43.205259   12265 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0916 10:22:43.339484   12265 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 10:22:43.339511   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0916 10:22:43.533725   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 10:22:43.574829   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:43.701096   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:43.702516   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:44.074326   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:44.199962   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:44.201086   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:44.420432   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.951340242s)
	I0916 10:22:44.420484   12265 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.25416987s)
	I0916 10:22:44.420496   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:44.420512   12265 system_svc.go:56] duration metric: took 1.254267923s WaitForService to wait for kubelet
	I0916 10:22:44.420530   12265 kubeadm.go:582] duration metric: took 12.083973387s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:22:44.420555   12265 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:22:44.420516   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:44.420960   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:44.420998   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:44.421011   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:44.421019   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:44.421041   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:44.421242   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:44.421289   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:44.421306   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:44.432407   12265 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0916 10:22:44.432433   12265 node_conditions.go:123] node cpu capacity is 2
	I0916 10:22:44.432443   12265 node_conditions.go:105] duration metric: took 11.883273ms to run NodePressure ...
	I0916 10:22:44.432454   12265 start.go:241] waiting for startup goroutines ...
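	The NodePressure figures logged just above (17734596Ki of ephemeral storage, 2 CPUs) can be compared against the node object directly; for a single-node minikube cluster the node carries the profile name (a sketch assuming kubectl access to the same context):

	    kubectl --context addons-001438 get node addons-001438 \
	      -o jsonpath='{.status.capacity.cpu} {.status.capacity.ephemeral-storage}{"\n"}'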
	I0916 10:22:44.573423   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:44.701968   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:44.702167   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:45.087788   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:45.175284   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.64151941s)
	I0916 10:22:45.175340   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:45.175356   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:45.175638   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:45.175658   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:45.175667   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:45.175675   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:45.175907   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:45.175959   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:45.175966   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:45.176874   12265 addons.go:475] Verifying addon gcp-auth=true in "addons-001438"
	I0916 10:22:45.179151   12265 out.go:177] * Verifying gcp-auth addon...
	I0916 10:22:45.181042   12265 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0916 10:22:45.204765   12265 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0916 10:22:45.204788   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:45.240576   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:45.244884   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:45.572763   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:45.684678   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:45.699294   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:45.700332   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:46.071926   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:46.184345   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:46.198555   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:46.198584   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:46.572691   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:46.686213   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:46.698404   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:46.699290   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:47.073014   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:47.184892   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:47.199176   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:47.199412   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:47.573319   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:47.685117   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:47.698854   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:47.699042   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:48.080702   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:48.186042   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:48.198652   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:48.198985   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:48.572136   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:48.684922   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:48.698643   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:48.698805   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:49.072263   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:49.186126   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:49.198845   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:49.201291   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:49.571909   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:49.686134   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:49.699485   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:49.699837   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:50.072013   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:50.185475   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:50.198803   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:50.198988   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:50.572410   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:50.684716   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:50.698643   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:50.698842   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:51.072735   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:51.185327   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:51.198402   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:51.198563   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:51.574099   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:51.684301   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:51.698582   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:51.699135   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:52.073280   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:52.184410   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:52.197628   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:52.197951   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:52.573111   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:52.685463   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:52.698350   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:52.698445   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:53.073318   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:53.185032   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:53.198371   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:53.198982   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:53.572652   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:53.684593   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:53.698434   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:53.699099   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:54.071466   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:54.184978   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:54.199125   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:54.199475   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:54.571905   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:54.684904   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:54.699578   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:54.700868   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:55.072026   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:55.186696   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:55.199421   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:55.200454   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:55.811368   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:55.811883   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:55.811882   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:55.812044   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:56.073000   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:56.184284   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:56.197552   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:56.199279   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:56.571945   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:56.684725   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:56.698164   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:56.698871   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:57.078099   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:57.187093   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:57.198266   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:57.198788   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:57.572608   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:57.685182   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:57.698112   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:57.698451   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:58.072438   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:58.184226   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:58.197871   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:58.199176   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:58.573655   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:58.688012   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:58.698890   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:58.699498   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:59.072908   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:59.184255   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:59.197825   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:59.198094   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:59.572578   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:59.685886   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:59.699165   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:59.699539   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:00.072677   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:00.185334   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:00.198436   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:00.199279   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:00.572620   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:00.684676   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:00.698184   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:00.698937   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:01.368315   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:01.368647   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:01.368662   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:01.369057   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:01.577610   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:01.685792   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:01.699073   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:01.700679   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:02.073297   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:02.184780   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:02.198423   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:02.198632   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:02.573860   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:02.688317   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:02.699137   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:02.699189   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:03.073268   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:03.185286   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:03.197706   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:03.199446   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:03.575016   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:03.688681   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:03.697852   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:03.699284   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:04.072561   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:04.184550   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:04.198183   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:04.198692   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:04.573058   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:04.684410   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:04.698448   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:04.699101   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:05.073082   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:05.184610   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:05.198422   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:05.199510   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:05.572901   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:05.685013   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:05.698419   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:05.699052   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:06.072680   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:06.184899   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:06.199400   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:06.199960   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:06.573550   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:06.685328   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:06.698176   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:06.698429   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:07.386744   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:07.389015   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:07.389529   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:07.391740   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:07.572440   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:07.685517   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:07.699276   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:07.699495   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:08.073598   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:08.185305   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:08.198307   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:08.198701   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:08.572936   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:08.685042   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:08.697898   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:08.699045   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:09.073524   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:09.185170   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:09.197444   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:09.198282   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:09.571947   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:09.685269   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:09.700263   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:09.700289   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:10.072367   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:10.184140   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:10.198279   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:10.198501   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:10.571995   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:10.684443   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:10.698621   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:10.699212   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:11.072647   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:11.184997   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:11.198336   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:11.199743   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:11.572138   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:11.684642   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:11.697735   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:11.698012   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:12.072087   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:12.184730   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:12.198825   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:12.199117   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:12.574471   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:12.685221   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:12.697610   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:12.697875   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:13.074276   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:13.184610   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:13.200283   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:13.200511   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:13.572643   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:13.687229   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:13.700375   12265 kapi.go:107] duration metric: took 32.506622173s to wait for kubernetes.io/minikube-addons=registry ...
	I0916 10:23:13.700476   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:14.073345   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:14.185359   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:14.197920   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:14.572573   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:14.714386   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:14.714848   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:15.072480   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:15.184006   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:15.198907   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:15.571536   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:15.686990   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:15.698314   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:16.072850   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:16.397705   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:16.398059   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:16.571699   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:16.687893   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:16.701822   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:17.073078   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:17.185433   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:17.202339   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:17.572915   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:17.684909   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:17.698215   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:18.071875   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:18.185548   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:18.198104   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:18.572180   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:18.684990   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:18.698912   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:19.072105   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:19.184341   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:19.197977   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:19.571740   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:19.685205   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:19.698214   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:20.071811   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:20.184927   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:20.198225   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:20.572184   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:20.684471   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:20.697550   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:21.072526   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:21.185439   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:21.198086   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:21.573843   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:21.684530   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:21.699027   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:22.071583   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:22.185751   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:22.201330   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:22.574078   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:22.688728   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:22.700516   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:23.072848   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:23.184719   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:23.197893   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:23.571975   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:23.684741   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:23.697845   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:24.071885   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:24.199755   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:24.209742   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:24.572903   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:24.684095   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:24.697255   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:25.072405   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:25.185096   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:25.197451   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:25.572250   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:25.685603   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:25.699421   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:26.072277   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:26.184610   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:26.197948   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:26.572954   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:26.684305   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:26.698018   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:27.072121   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:27.186632   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:27.198260   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:27.571710   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:27.685260   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:27.697569   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:28.072712   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:28.185404   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:28.197839   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:28.572506   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:28.685719   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:28.698390   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:29.073440   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:29.185211   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:29.198135   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:29.572871   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:29.684795   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:29.698442   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:30.074307   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:30.184391   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:30.198195   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:30.571684   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:30.686595   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:30.697829   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:31.072882   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:31.184355   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:31.197913   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:31.572796   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:31.685340   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:31.697838   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:32.072358   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:32.185072   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:32.198841   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:32.572260   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:32.685619   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:32.697923   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:33.072329   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:33.184923   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:33.198461   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:33.572531   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:33.684886   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:33.698221   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:34.071922   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:34.184896   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:34.198347   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:34.572508   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:34.685674   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:34.698172   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:35.072040   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:35.184401   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:35.198192   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:35.571685   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:35.684934   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:35.699442   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:36.072917   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:36.184575   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:36.197989   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:36.572782   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:36.685224   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:36.697515   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:37.073347   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:37.184633   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:37.198109   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:37.572239   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:37.684842   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:37.698412   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:38.072639   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:38.184377   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:38.197723   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:38.572964   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:38.684944   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:38.698216   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:39.071865   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:39.184322   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:39.197583   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:39.572728   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:39.685221   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:39.697663   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:40.073346   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:40.184763   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:40.198338   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:40.572748   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:40.688546   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:40.698337   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:41.072528   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:41.184742   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:41.197991   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:41.572832   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:41.685275   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:41.697957   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:42.072948   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:42.185237   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:42.198222   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:42.572150   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:42.685770   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:42.698107   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:43.072508   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:43.184255   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:43.198122   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:43.571791   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:43.685476   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:43.698021   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:44.072455   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:44.184970   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:44.198450   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:44.572653   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:44.685519   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:44.698088   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:45.073394   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:45.184852   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:45.198932   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:45.572905   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:45.685024   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:45.699000   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:46.072804   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:46.185568   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:46.198040   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:46.571961   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:46.684879   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:46.698104   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:47.071779   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:47.184794   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:47.198431   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:47.572786   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:47.685048   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:47.701841   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:48.072550   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:48.184915   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:48.198725   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:48.572850   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:48.684405   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:48.697953   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:49.075719   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:49.185584   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:49.198034   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:49.572642   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:49.685074   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:49.697421   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:50.072216   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:50.184736   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:50.198614   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:50.572675   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:50.685508   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:50.697632   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:51.072878   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:51.185267   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:51.197508   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:51.572653   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:51.684680   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:51.698038   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:52.072225   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:52.184256   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:52.197802   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:52.572573   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:52.685760   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:52.699050   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:53.072698   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:53.185139   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:53.197417   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:53.572526   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:53.684976   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:53.698186   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:54.071987   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:54.184373   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:54.197898   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:54.573326   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:54.685154   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:54.699596   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:55.071975   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:55.184301   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:55.197532   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:55.573068   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:55.684535   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:55.698262   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:56.071830   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:56.185558   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:56.198149   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:56.571905   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:56.684135   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:56.697614   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:57.109030   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:57.216004   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:57.216775   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:57.572732   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:57.684811   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:57.697899   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:58.071691   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:58.184970   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:58.198291   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:58.572185   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:58.685478   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:58.698240   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:59.072727   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:59.185578   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:59.207485   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:59.572098   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:59.684402   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:59.698565   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:00.072447   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:00.192764   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:00.206954   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:00.573224   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:00.685091   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:00.697997   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:01.071906   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:01.184428   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:01.197550   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:01.572498   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:01.685525   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:01.702647   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:02.072504   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:02.185219   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:02.197512   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:02.573858   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:02.685938   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:02.699556   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:03.080160   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:03.188056   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:03.197615   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:03.575213   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:03.685187   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:03.697887   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:04.072585   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:04.185321   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:04.197777   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:04.577876   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:04.685259   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:04.698763   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:05.073356   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:05.184332   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:05.197676   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:05.574632   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:05.705119   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:05.705797   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:06.073702   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:06.190460   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:06.199492   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:06.573521   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:06.685468   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:06.697671   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:07.074427   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:07.211989   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:07.214167   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:07.573479   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:07.684919   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:07.698441   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:08.072769   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:08.184827   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:08.198132   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:08.573401   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:08.685277   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:08.698457   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:09.072421   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:09.184959   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:09.198365   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:09.572446   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:09.685036   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:09.697443   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:10.072489   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:10.185143   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:10.197711   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:10.572704   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:10.685206   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:10.697839   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:11.073656   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:11.185083   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:11.197443   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:11.572739   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:11.685343   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:11.697853   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:12.072697   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:12.185630   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:12.197928   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:12.572344   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:12.684814   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:12.698225   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:13.073324   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:13.185254   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:13.198404   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:13.571987   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:13.684709   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:13.698073   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:14.072174   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:14.184688   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:14.198078   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:14.571798   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:14.685576   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:14.698188   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:15.072810   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:15.184683   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:15.198053   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:15.574408   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:15.684741   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:15.698415   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:16.072047   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:16.185423   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:16.198010   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:16.572968   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:16.684217   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:16.697876   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:17.073276   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:17.185372   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:17.197865   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:17.572327   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:17.684929   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:17.698146   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:18.073068   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:18.185261   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:18.197596   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:18.572959   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:18.684379   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:18.697450   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:19.072646   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:19.184810   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:19.198157   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:19.572098   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:19.684635   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:19.698108   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:20.073055   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:20.185325   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:20.197893   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:20.572951   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:20.684268   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:20.697542   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:21.073300   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:21.184458   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:21.198058   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:21.571882   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:21.684389   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:21.698491   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:22.072769   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:22.185150   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:22.198444   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:22.572557   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:22.686730   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:22.697987   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:23.072389   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:23.184902   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:23.198815   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:23.572090   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:23.684279   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:23.698304   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:24.072655   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:24.185118   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:24.197515   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:24.573029   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:24.684503   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:24.697942   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:25.073161   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:25.185394   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:25.197824   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:25.572789   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:25.685536   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:25.698429   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:26.072248   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:26.184713   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:26.198206   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:26.572681   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:26.685404   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:26.697732   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:27.073033   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:27.186532   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:27.197932   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:27.573166   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:27.684900   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:27.698494   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:28.072840   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:28.185112   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:28.199554   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:28.573533   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:28.685513   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:28.698631   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:29.073563   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:29.184668   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:29.198960   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:29.573373   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:29.684371   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:29.698226   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:30.072380   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:30.184889   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:30.198132   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:30.572427   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:30.685015   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:30.699053   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:31.073225   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:31.185241   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:31.198172   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:31.572019   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:31.685328   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:31.697511   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:32.072382   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:32.185154   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:32.198590   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:32.572333   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:32.688804   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:32.699195   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:33.072971   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:33.184395   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:33.197840   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:33.572457   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:33.684935   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:33.698247   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:34.072201   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:34.184817   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:34.198299   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:34.572603   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:34.684807   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:34.698764   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:35.079460   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:35.184783   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:35.198219   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:35.572155   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:35.684462   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:35.698249   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:36.071889   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:36.185035   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:36.198639   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:36.572607   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:36.684993   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:36.698317   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:37.073167   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:37.187630   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:37.197861   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:37.572959   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:37.684449   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:37.698084   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:38.072598   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:38.184553   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:38.198241   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:38.572543   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:38.685061   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:38.698066   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:39.072888   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:39.184279   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:39.198475   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:39.572908   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:39.684166   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:39.699214   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:40.071396   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:40.185054   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:40.197274   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:40.571831   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:40.683617   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:40.698304   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:41.073753   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:41.184818   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:41.198303   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:41.572754   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:41.685078   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:41.697801   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:42.074144   12265 kapi.go:107] duration metric: took 1m59.00663205s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0916 10:24:42.185287   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:42.197975   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:42.685826   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:42.698484   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:43.185521   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:43.197894   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:43.684695   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:43.698444   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:44.184270   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:44.198072   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:44.686127   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:44.697760   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:45.184583   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:45.197892   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:45.685284   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:45.698273   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:46.184284   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:46.197597   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:46.684852   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:46.698234   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:47.185674   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:47.197778   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:47.684803   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:47.698286   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:48.185195   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:48.197536   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:48.684936   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:48.698202   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:49.185940   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:49.198354   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:49.685628   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:49.698017   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:50.184172   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:50.197513   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:50.684563   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:50.699121   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:51.185458   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:51.197627   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:51.684548   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:51.697728   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:52.184587   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:52.198088   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:52.687284   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:52.697762   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:53.185441   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:53.197777   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:53.684856   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:53.698392   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:54.184966   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:54.198309   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:54.685059   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:54.697818   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:55.184799   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:55.199146   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:55.685287   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:55.697823   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:56.184982   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:56.198778   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:56.684629   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:56.698010   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:57.185306   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:57.197805   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:57.686354   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:57.697789   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:58.184048   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:58.198685   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:58.685283   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:58.697967   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:59.185357   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:59.198462   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:59.685857   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:59.698582   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:00.185027   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:00.199070   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:00.685248   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:00.697584   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:01.444242   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:01.542180   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:01.684941   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:01.698345   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:02.184494   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:02.199673   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:02.686844   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:02.701197   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:03.186108   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:03.200286   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:03.935418   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:03.936940   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:04.185837   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:04.198343   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:04.685229   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:04.697687   12265 kapi.go:107] duration metric: took 2m23.503933898s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0916 10:25:05.184162   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:05.686162   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:06.184784   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:06.685596   12265 kapi.go:107] duration metric: took 2m21.504550895s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0916 10:25:06.687290   12265 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-001438 cluster.
	I0916 10:25:06.688726   12265 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0916 10:25:06.689940   12265 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0916 10:25:06.691195   12265 out.go:177] * Enabled addons: default-storageclass, nvidia-device-plugin, ingress-dns, storage-provisioner, cloud-spanner, metrics-server, inspektor-gadget, helm-tiller, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0916 10:25:06.692654   12265 addons.go:510] duration metric: took 2m34.356008246s for enable addons: enabled=[default-storageclass nvidia-device-plugin ingress-dns storage-provisioner cloud-spanner metrics-server inspektor-gadget helm-tiller yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0916 10:25:06.692692   12265 start.go:246] waiting for cluster config update ...
	I0916 10:25:06.692714   12265 start.go:255] writing updated cluster config ...
	I0916 10:25:06.692960   12265 ssh_runner.go:195] Run: rm -f paused
	I0916 10:25:06.701459   12265 out.go:177] * Done! kubectl is now configured to use "addons-001438" cluster and "default" namespace by default
	E0916 10:25:06.702711   12265 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
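
For reference, the `gcp-auth-skip-secret` label mentioned in the gcp-auth output above goes into the pod's own configuration, so the gcp-auth webhook (the gcp-auth-webhook container shown running later in this log) skips mounting credentials into that pod at creation time. A minimal sketch of such a pod manifest follows; the pod name, container name, and image are placeholders, and the label value of "true" is an assumption, only the label key comes from this log:

    apiVersion: v1
    kind: Pod
    metadata:
      name: skip-gcp-auth-demo        # hypothetical name, for illustration only
      labels:
        gcp-auth-skip-secret: "true"  # key from the log above; the "true" value is assumed
    spec:
      containers:
      - name: demo                    # hypothetical container
        image: busybox                # hypothetical image
        command: ["sleep", "3600"]

For pods that already exist, the message above suggests recreating them or rerunning the addon enable step with a refresh, e.g. `minikube -p addons-001438 addons enable gcp-auth --refresh` (the -p flag selects the profile used throughout this log).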
	
	
	==> CRI-O <==
	Sep 16 10:27:01 addons-001438 crio[662]: time="2024-09-16 10:27:01.160900219Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482421160877921,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:458879,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cf3e001c-4e95-4617-aa0a-b6b5cdc3594e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:27:01 addons-001438 crio[662]: time="2024-09-16 10:27:01.161611850Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d14ae164-93f3-4882-96de-54c92fa4b07c name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:27:01 addons-001438 crio[662]: time="2024-09-16 10:27:01.161689350Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d14ae164-93f3-4882-96de-54c92fa4b07c name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:27:01 addons-001438 crio[662]: time="2024-09-16 10:27:01.162181581Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:44134363b5c5efe09ae29ae4c7261f5f57e95ad84b0df54d22fab5c1a3cc278f,PodSandboxId:b1b6b74be962699d277a04b3a408931dda56ff790e89190b3b8c465fc1a1c89d,Metadata:&ContainerMetadata{Name:gadget,Attempt:4,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:195d612ae7722fdfec0d582d74fde7db062c1655b60737ceedb14cd627d0d601,State:CONTAINER_EXITED,CreatedAt:1726482411862310664,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-k7c7v,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a,},Annotations:map[string]string{io.kubernetes.container.hash: f1a4d1ab,io.kubernetes.container.preStopHandler: {\"exec\":{\
"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0c62d19fc341b10ebf89fe58a47a68881bae991a531961d93c1579a9a1948e7,PodSandboxId:81638f0641649ec0787ef703694bd258aa844caa3af58541d00d9715f3de0d35,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726482306176330528,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-jg5wz,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: dbfd65d1-83cc-4b88-b475-c822c7d77c41,},Annotations:
map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d9f00ee52087036fff363066f2b9b7bd54f75155cc73bd306533e31b8e233b0,PodSandboxId:f0a70a6b5b4fa0e23e9c8885ae050483438fd567740a20c3b0d6ee2a4c749755,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1726482304086493301,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-jhd
4w,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 59d22048-5f6a-4373-a1c6-05c111450f4a,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a4ff4f2e6c350478e5755d91b20d919662c0ffa06fb39e2bd3bf8bb0f703a1d2,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256
:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1726482280953449774,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa45fa1d889cdca320c55e35a415c7211e36ba0837572343c71988deaa5cd3ca,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.i
o/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1726482248718684378,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:112e37da6f1b0a97970a925ea58a6c98930b8e2763205c618d9b497089866b3f,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Ima
ge:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1726482247152202557,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcd9404de3e14cb23faa22a3b3e25edf2e86172ce56a3bd91b341e6072351d08,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Nam
e:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1726482246122565989,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26165c7625a62b7701736de7efb40460b0b4376f8e96e3bb63d74
4179813ade0,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1726482244231428730,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:35e24c1abefe708668b04ab175e1ce63dcba533760564485612dc5c532078e4c,PodSandboxId:bf02d50932f147d4c08135b62337a685daad58e7ee619fe769c71cf464997dc9,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1726482242446910373,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db26d555-4e0f-4738-bd80-a27dc57d7534,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePe
riod: 30,},},&Container{Id:a5edaf3e2dd3d6ea0a0beb93da0237f66936603e62e6799eb9d40f4bfced4fdc,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1726482240462952497,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8ebd2f050729c02f7f87eeb785883cafeb3d9537560352ff7a58f50b5630b1c,PodSandboxId:f375334740e2f500ee463bae436969a7464e2ab3cd1aa52a70e3a8f54f2ea408,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1726482238605647796,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15e8a432-87ee-461f-96ce-576b2587d960,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d52d2269e100287dd000b3139d6f09e70acad7cbe081d5ba345ffadc774eb64,PodSandboxId:6fe91ac2288fee7b74e48dab9ebacc7e6f4a0864d1442788fa1bb356cd32d99c,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726482237161655062,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-rls9n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 239fb0bb-898b-4d40-a29f-b0f8f4f52620,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54c4347a1fc2bc58f92b11d3bd442850d2707411d3f5b974124a86f641439b9c,PodSandboxId:d66b1317412a725710e3a74d876e0614ef523665d5d35e69bed8cd20d3e4c267,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726482236992762987,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-dk6l8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 117d6d47-b187-4394-8213-f294b01f8f2d,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0bde3324c47da870cb9ea5d5e391100b947a5f5628087ca822f0b968d5d7d10,PodSandboxId:0eef20d1c681337d85c6cbf7dbb64029a954a6306f78cde4d651b31122cd7f87,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726482204191942200,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-pv2sr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85f5bbdb-96af-4f7d-aef3-644db7166242,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f786c20ceffe35012b49cdfbc150466ebf04c05bab8d9736cd3977b8a655c898,PodSandboxId:ec33782f42717a00b375787866eb14bf4c1a21698df7f4343ab08f7eaff4773a,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726482204058066688,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-8nq94,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b65ff07-8e47-
4c5a-883c-f6470e930f61,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d997d75b48ee4660710828d3ee8c19e7e3e7c5b64f4d69cba04ce949ed38433e,PodSandboxId:173b48ab2ab7ffb543810df5a983063619ab4a5c57ad2582cb89cca04eabcc03,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1726482196486082416,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-rj67m,io.kubernetes.po
d.namespace: local-path-storage,io.kubernetes.pod.uid: 18c201dd-1709-499b-83f1-7f76075b6c19,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0024bbca27aaca342e5d8f82c4da128027c7976c1e62ed95ae9cf694c9f92bba,PodSandboxId:8bcb0a4a20a5a867b58a2f64346327b7ee4bbc21423c8ef47418372635518a90,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726482194268567519,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name
: metrics-server-84c5f94fbc-9hj9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76382ab7-9b7a-4bd6-b19c-7a77ba051f1d,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e13f898473193beaaa81c09bb22096af279dabe70c03270874a90b0b9cc83f62,PodSandboxId:c90a44c7edea8c5d35e974be23b2851515f7b830d58597d0ada22367c338e1ab,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5d78bb8f226e8d943746243233f733db4e80a8d6794f6d193b12b811bcb6cd34,Stat
e:CONTAINER_RUNNING,CreatedAt:1726482187766689704,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-769b77f747-58ll2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 505d8619-5fc1-4247-af75-f797558c3d45,},Annotations:map[string]string{io.kubernetes.container.hash: 6472789b,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0731c5d88d35f1d8b6c88fee881cced713fd9e6231df44c4f03289b577fa75a,PodSandboxId:4cf262411fb7c78bef294b8304a442f15f122eba8e6330163e0f6001e8b44f4c,Metadata:&ContainerMetadata{Name:tiller,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945
d913b90b3ab43ee61cecb32a495c6df0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3f39089e90831c3ef411fe78d2ac642187b617feacacbf72e3f27e28c8dea487,State:CONTAINER_EXITED,CreatedAt:1726482181618422606,Labels:map[string]string{io.kubernetes.container.name: tiller,io.kubernetes.pod.name: tiller-deploy-b48cc5f79-b76fb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a96b112c-4171-4416-9e14-ac1f69fd033e,},Annotations:map[string]string{io.kubernetes.container.hash: b375e3d3,io.kubernetes.container.ports: [{\"name\":\"tiller\",\"containerPort\":44134,\"protocol\":\"TCP\"},{\"name\":\"http\",\"containerPort\":44135,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8193aad1beb5ba639149913d04736ec496c655637790c2e682d4920170661edc,PodSandboxId:f1a3772ce5f7d59b92df6e29b5a34b5ae5a
2567c67f359cff00143e415177028,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1726482169760510821,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10ccbaa1-333f-4586-a1d5-dc73421e2bd1,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePerio
d: 30,},},&Container{Id:20d2f3360f32056e9d5320e2e27b14e89f34121e47fe3ef6dc68434fd12cbe4e,PodSandboxId:748d363148f6690bdd4347da94f06b750dcd9dfc4c9051ba1fd2b1168589dce8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726482159684718183,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c435c6db-b60d-4298-9687-bb885202e358,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&C
ontainer{Id:63d270cbed8d9138d7dfb6c94fecf8a928065d08296ec4b210284e9c80e343ce,PodSandboxId:42b8586a7b29a8b8056b5615562d2ec92c1cdfbb2b00da19cfca2dc059f9128a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726482158120421871,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-j5ndn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 207f35d6-991e-4f00-8881-a877648e3c38,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.contai
ner.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60269ac0552c1882216cfc166f51b2cc05e0c33fccab3cf44f6f6ae889b5377c,PodSandboxId:2bf9dc368debd2d2b877480619135ca9093c981fce759082ec24f6592fb1de92,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726482154187294521,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-66flj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56e16daa-1626-4b83-a183-7b9ad90ea2d6,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes
.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aabe5cb48f97ae6d5205b827466f0ed18fcf88945331a69a0f69c36fdc69b84,PodSandboxId:f7aeaa11c7f4c7baee6c9befcf5929af3394ee8b7430c47ca172d58476ae75a5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726482142846546597,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-001438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be5914cb158890ae06c5bfa7a9a1647e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d34a4e3596c2106b54d8a84abfb811c307cb2971374422f5c532a60e0cde3a3,PodSandboxId:8a68216be6dee09413e6f31b3a7e5d5ca7545ffd899368560ace1f4086222cc9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726482142845031916,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-001438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6b0151eec9d570509290399a84fe5e0,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /d
ev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a4816dc33e761b9fdbc5948f09898492ce9a2dc128c281cf5085d61d7f1b237,PodSandboxId:ec134844260ab148c1de608f261069c5f8c5a6b0d909d7bbb7bff552dad3e112,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726482142832730800,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-001438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72c370391c8ab866a62dac47d3033ef8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfff5b2d379857237c75eff7ae9c54b1eaa410285bebed27cc82511c761eff77,PodSandboxId:81f095a38dae111be7d7787f448562b27561a6d73c96775e60b52e2cb77e1d9f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726482142844185577,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-001438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2bdc5cbd50bfb60b2aad0cb9a55828e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d14ae164-93f3-4882-96de-54c92fa4b07c name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:27:01 addons-001438 crio[662]: time="2024-09-16 10:27:01.200426853Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bb14e55e-a776-41b0-ba10-7fa96d3fee0e name=/runtime.v1.RuntimeService/Version
	Sep 16 10:27:01 addons-001438 crio[662]: time="2024-09-16 10:27:01.200511005Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bb14e55e-a776-41b0-ba10-7fa96d3fee0e name=/runtime.v1.RuntimeService/Version
	Sep 16 10:27:01 addons-001438 crio[662]: time="2024-09-16 10:27:01.201590493Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=49c94079-11b1-4de7-89c9-1239f40f742b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:27:01 addons-001438 crio[662]: time="2024-09-16 10:27:01.202564350Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482421202540711,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:458879,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=49c94079-11b1-4de7-89c9-1239f40f742b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:27:01 addons-001438 crio[662]: time="2024-09-16 10:27:01.203266039Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fa1a83bc-567d-4c79-aba4-c92ffa3e6a7e name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:27:01 addons-001438 crio[662]: time="2024-09-16 10:27:01.203323924Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fa1a83bc-567d-4c79-aba4-c92ffa3e6a7e name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:27:01 addons-001438 crio[662]: time="2024-09-16 10:27:01.203881494Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:44134363b5c5efe09ae29ae4c7261f5f57e95ad84b0df54d22fab5c1a3cc278f,PodSandboxId:b1b6b74be962699d277a04b3a408931dda56ff790e89190b3b8c465fc1a1c89d,Metadata:&ContainerMetadata{Name:gadget,Attempt:4,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:195d612ae7722fdfec0d582d74fde7db062c1655b60737ceedb14cd627d0d601,State:CONTAINER_EXITED,CreatedAt:1726482411862310664,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-k7c7v,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a,},Annotations:map[string]string{io.kubernetes.container.hash: f1a4d1ab,io.kubernetes.container.preStopHandler: {\"exec\":{\
"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0c62d19fc341b10ebf89fe58a47a68881bae991a531961d93c1579a9a1948e7,PodSandboxId:81638f0641649ec0787ef703694bd258aa844caa3af58541d00d9715f3de0d35,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726482306176330528,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-jg5wz,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: dbfd65d1-83cc-4b88-b475-c822c7d77c41,},Annotations:
map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d9f00ee52087036fff363066f2b9b7bd54f75155cc73bd306533e31b8e233b0,PodSandboxId:f0a70a6b5b4fa0e23e9c8885ae050483438fd567740a20c3b0d6ee2a4c749755,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1726482304086493301,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-jhd
4w,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 59d22048-5f6a-4373-a1c6-05c111450f4a,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a4ff4f2e6c350478e5755d91b20d919662c0ffa06fb39e2bd3bf8bb0f703a1d2,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256
:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1726482280953449774,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa45fa1d889cdca320c55e35a415c7211e36ba0837572343c71988deaa5cd3ca,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.i
o/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1726482248718684378,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:112e37da6f1b0a97970a925ea58a6c98930b8e2763205c618d9b497089866b3f,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Ima
ge:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1726482247152202557,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcd9404de3e14cb23faa22a3b3e25edf2e86172ce56a3bd91b341e6072351d08,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Nam
e:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1726482246122565989,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26165c7625a62b7701736de7efb40460b0b4376f8e96e3bb63d74
4179813ade0,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1726482244231428730,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:35e24c1abefe708668b04ab175e1ce63dcba533760564485612dc5c532078e4c,PodSandboxId:bf02d50932f147d4c08135b62337a685daad58e7ee619fe769c71cf464997dc9,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1726482242446910373,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db26d555-4e0f-4738-bd80-a27dc57d7534,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePe
riod: 30,},},&Container{Id:a5edaf3e2dd3d6ea0a0beb93da0237f66936603e62e6799eb9d40f4bfced4fdc,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1726482240462952497,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8ebd2f050729c02f7f87eeb785883cafeb3d9537560352ff7a58f50b5630b1c,PodSandboxId:f375334740e2f500ee463bae436969a7464e2ab3cd1aa52a70e3a8f54f2ea408,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1726482238605647796,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15e8a432-87ee-461f-96ce-576b2587d960,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d52d2269e100287dd000b3139d6f09e70acad7cbe081d5ba345ffadc774eb64,PodSandboxId:6fe91ac2288fee7b74e48dab9ebacc7e6f4a0864d1442788fa1bb356cd32d99c,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726482237161655062,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-rls9n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 239fb0bb-898b-4d40-a29f-b0f8f4f52620,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54c4347a1fc2bc58f92b11d3bd442850d2707411d3f5b974124a86f641439b9c,PodSandboxId:d66b1317412a725710e3a74d876e0614ef523665d5d35e69bed8cd20d3e4c267,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726482236992762987,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-dk6l8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 117d6d47-b187-4394-8213-f294b01f8f2d,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0bde3324c47da870cb9ea5d5e391100b947a5f5628087ca822f0b968d5d7d10,PodSandboxId:0eef20d1c681337d85c6cbf7dbb64029a954a6306f78cde4d651b31122cd7f87,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726482204191942200,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-pv2sr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85f5bbdb-96af-4f7d-aef3-644db7166242,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f786c20ceffe35012b49cdfbc150466ebf04c05bab8d9736cd3977b8a655c898,PodSandboxId:ec33782f42717a00b375787866eb14bf4c1a21698df7f4343ab08f7eaff4773a,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726482204058066688,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-8nq94,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b65ff07-8e47-
4c5a-883c-f6470e930f61,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d997d75b48ee4660710828d3ee8c19e7e3e7c5b64f4d69cba04ce949ed38433e,PodSandboxId:173b48ab2ab7ffb543810df5a983063619ab4a5c57ad2582cb89cca04eabcc03,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1726482196486082416,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-rj67m,io.kubernetes.po
d.namespace: local-path-storage,io.kubernetes.pod.uid: 18c201dd-1709-499b-83f1-7f76075b6c19,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0024bbca27aaca342e5d8f82c4da128027c7976c1e62ed95ae9cf694c9f92bba,PodSandboxId:8bcb0a4a20a5a867b58a2f64346327b7ee4bbc21423c8ef47418372635518a90,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726482194268567519,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name
: metrics-server-84c5f94fbc-9hj9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76382ab7-9b7a-4bd6-b19c-7a77ba051f1d,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e13f898473193beaaa81c09bb22096af279dabe70c03270874a90b0b9cc83f62,PodSandboxId:c90a44c7edea8c5d35e974be23b2851515f7b830d58597d0ada22367c338e1ab,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5d78bb8f226e8d943746243233f733db4e80a8d6794f6d193b12b811bcb6cd34,Stat
e:CONTAINER_RUNNING,CreatedAt:1726482187766689704,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-769b77f747-58ll2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 505d8619-5fc1-4247-af75-f797558c3d45,},Annotations:map[string]string{io.kubernetes.container.hash: 6472789b,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0731c5d88d35f1d8b6c88fee881cced713fd9e6231df44c4f03289b577fa75a,PodSandboxId:4cf262411fb7c78bef294b8304a442f15f122eba8e6330163e0f6001e8b44f4c,Metadata:&ContainerMetadata{Name:tiller,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945
d913b90b3ab43ee61cecb32a495c6df0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3f39089e90831c3ef411fe78d2ac642187b617feacacbf72e3f27e28c8dea487,State:CONTAINER_EXITED,CreatedAt:1726482181618422606,Labels:map[string]string{io.kubernetes.container.name: tiller,io.kubernetes.pod.name: tiller-deploy-b48cc5f79-b76fb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a96b112c-4171-4416-9e14-ac1f69fd033e,},Annotations:map[string]string{io.kubernetes.container.hash: b375e3d3,io.kubernetes.container.ports: [{\"name\":\"tiller\",\"containerPort\":44134,\"protocol\":\"TCP\"},{\"name\":\"http\",\"containerPort\":44135,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8193aad1beb5ba639149913d04736ec496c655637790c2e682d4920170661edc,PodSandboxId:f1a3772ce5f7d59b92df6e29b5a34b5ae5a
2567c67f359cff00143e415177028,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1726482169760510821,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10ccbaa1-333f-4586-a1d5-dc73421e2bd1,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePerio
d: 30,},},&Container{Id:20d2f3360f32056e9d5320e2e27b14e89f34121e47fe3ef6dc68434fd12cbe4e,PodSandboxId:748d363148f6690bdd4347da94f06b750dcd9dfc4c9051ba1fd2b1168589dce8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726482159684718183,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c435c6db-b60d-4298-9687-bb885202e358,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&C
ontainer{Id:63d270cbed8d9138d7dfb6c94fecf8a928065d08296ec4b210284e9c80e343ce,PodSandboxId:42b8586a7b29a8b8056b5615562d2ec92c1cdfbb2b00da19cfca2dc059f9128a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726482158120421871,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-j5ndn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 207f35d6-991e-4f00-8881-a877648e3c38,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.contai
ner.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60269ac0552c1882216cfc166f51b2cc05e0c33fccab3cf44f6f6ae889b5377c,PodSandboxId:2bf9dc368debd2d2b877480619135ca9093c981fce759082ec24f6592fb1de92,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726482154187294521,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-66flj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56e16daa-1626-4b83-a183-7b9ad90ea2d6,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes
.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aabe5cb48f97ae6d5205b827466f0ed18fcf88945331a69a0f69c36fdc69b84,PodSandboxId:f7aeaa11c7f4c7baee6c9befcf5929af3394ee8b7430c47ca172d58476ae75a5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726482142846546597,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-001438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be5914cb158890ae06c5bfa7a9a1647e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d34a4e3596c2106b54d8a84abfb811c307cb2971374422f5c532a60e0cde3a3,PodSandboxId:8a68216be6dee09413e6f31b3a7e5d5ca7545ffd899368560ace1f4086222cc9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726482142845031916,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-001438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6b0151eec9d570509290399a84fe5e0,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /d
ev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a4816dc33e761b9fdbc5948f09898492ce9a2dc128c281cf5085d61d7f1b237,PodSandboxId:ec134844260ab148c1de608f261069c5f8c5a6b0d909d7bbb7bff552dad3e112,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726482142832730800,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-001438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72c370391c8ab866a62dac47d3033ef8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfff5b2d379857237c75eff7ae9c54b1eaa410285bebed27cc82511c761eff77,PodSandboxId:81f095a38dae111be7d7787f448562b27561a6d73c96775e60b52e2cb77e1d9f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726482142844185577,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-001438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2bdc5cbd50bfb60b2aad0cb9a55828e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fa1a83bc-567d-4c79-aba4-c92ffa3e6a7e name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:27:01 addons-001438 crio[662]: time="2024-09-16 10:27:01.250830998Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=11a4d304-1844-4b8c-a601-aac1d60d1e0c name=/runtime.v1.RuntimeService/Version
	Sep 16 10:27:01 addons-001438 crio[662]: time="2024-09-16 10:27:01.250931763Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=11a4d304-1844-4b8c-a601-aac1d60d1e0c name=/runtime.v1.RuntimeService/Version
	Sep 16 10:27:01 addons-001438 crio[662]: time="2024-09-16 10:27:01.252553429Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=94ba38d0-d545-4f8b-8189-04a53a89bcb0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:27:01 addons-001438 crio[662]: time="2024-09-16 10:27:01.254016032Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482421253991177,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:458879,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=94ba38d0-d545-4f8b-8189-04a53a89bcb0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:27:01 addons-001438 crio[662]: time="2024-09-16 10:27:01.254979257Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1a55d102-22d6-4cdb-8022-1d0f7cd52ca7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:27:01 addons-001438 crio[662]: time="2024-09-16 10:27:01.255183425Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1a55d102-22d6-4cdb-8022-1d0f7cd52ca7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:27:01 addons-001438 crio[662]: time="2024-09-16 10:27:01.255836395Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:44134363b5c5efe09ae29ae4c7261f5f57e95ad84b0df54d22fab5c1a3cc278f,PodSandboxId:b1b6b74be962699d277a04b3a408931dda56ff790e89190b3b8c465fc1a1c89d,Metadata:&ContainerMetadata{Name:gadget,Attempt:4,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:195d612ae7722fdfec0d582d74fde7db062c1655b60737ceedb14cd627d0d601,State:CONTAINER_EXITED,CreatedAt:1726482411862310664,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-k7c7v,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a,},Annotations:map[string]string{io.kubernetes.container.hash: f1a4d1ab,io.kubernetes.container.preStopHandler: {\"exec\":{\
"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0c62d19fc341b10ebf89fe58a47a68881bae991a531961d93c1579a9a1948e7,PodSandboxId:81638f0641649ec0787ef703694bd258aa844caa3af58541d00d9715f3de0d35,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726482306176330528,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-jg5wz,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: dbfd65d1-83cc-4b88-b475-c822c7d77c41,},Annotations:
map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d9f00ee52087036fff363066f2b9b7bd54f75155cc73bd306533e31b8e233b0,PodSandboxId:f0a70a6b5b4fa0e23e9c8885ae050483438fd567740a20c3b0d6ee2a4c749755,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1726482304086493301,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-jhd
4w,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 59d22048-5f6a-4373-a1c6-05c111450f4a,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a4ff4f2e6c350478e5755d91b20d919662c0ffa06fb39e2bd3bf8bb0f703a1d2,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256
:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1726482280953449774,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa45fa1d889cdca320c55e35a415c7211e36ba0837572343c71988deaa5cd3ca,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.i
o/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1726482248718684378,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:112e37da6f1b0a97970a925ea58a6c98930b8e2763205c618d9b497089866b3f,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Ima
ge:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1726482247152202557,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcd9404de3e14cb23faa22a3b3e25edf2e86172ce56a3bd91b341e6072351d08,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Nam
e:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1726482246122565989,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26165c7625a62b7701736de7efb40460b0b4376f8e96e3bb63d74
4179813ade0,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1726482244231428730,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:35e24c1abefe708668b04ab175e1ce63dcba533760564485612dc5c532078e4c,PodSandboxId:bf02d50932f147d4c08135b62337a685daad58e7ee619fe769c71cf464997dc9,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1726482242446910373,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db26d555-4e0f-4738-bd80-a27dc57d7534,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePe
riod: 30,},},&Container{Id:a5edaf3e2dd3d6ea0a0beb93da0237f66936603e62e6799eb9d40f4bfced4fdc,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1726482240462952497,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8ebd2f050729c02f7f87eeb785883cafeb3d9537560352ff7a58f50b5630b1c,PodSandboxId:f375334740e2f500ee463bae436969a7464e2ab3cd1aa52a70e3a8f54f2ea408,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1726482238605647796,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15e8a432-87ee-461f-96ce-576b2587d960,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d52d2269e100287dd000b3139d6f09e70acad7cbe081d5ba345ffadc774eb64,PodSandboxId:6fe91ac2288fee7b74e48dab9ebacc7e6f4a0864d1442788fa1bb356cd32d99c,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726482237161655062,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-rls9n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 239fb0bb-898b-4d40-a29f-b0f8f4f52620,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54c4347a1fc2bc58f92b11d3bd442850d2707411d3f5b974124a86f641439b9c,PodSandboxId:d66b1317412a725710e3a74d876e0614ef523665d5d35e69bed8cd20d3e4c267,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726482236992762987,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-dk6l8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 117d6d47-b187-4394-8213-f294b01f8f2d,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0bde3324c47da870cb9ea5d5e391100b947a5f5628087ca822f0b968d5d7d10,PodSandboxId:0eef20d1c681337d85c6cbf7dbb64029a954a6306f78cde4d651b31122cd7f87,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726482204191942200,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-pv2sr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85f5bbdb-96af-4f7d-aef3-644db7166242,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f786c20ceffe35012b49cdfbc150466ebf04c05bab8d9736cd3977b8a655c898,PodSandboxId:ec33782f42717a00b375787866eb14bf4c1a21698df7f4343ab08f7eaff4773a,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726482204058066688,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-8nq94,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b65ff07-8e47-
4c5a-883c-f6470e930f61,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d997d75b48ee4660710828d3ee8c19e7e3e7c5b64f4d69cba04ce949ed38433e,PodSandboxId:173b48ab2ab7ffb543810df5a983063619ab4a5c57ad2582cb89cca04eabcc03,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1726482196486082416,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-rj67m,io.kubernetes.po
d.namespace: local-path-storage,io.kubernetes.pod.uid: 18c201dd-1709-499b-83f1-7f76075b6c19,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0024bbca27aaca342e5d8f82c4da128027c7976c1e62ed95ae9cf694c9f92bba,PodSandboxId:8bcb0a4a20a5a867b58a2f64346327b7ee4bbc21423c8ef47418372635518a90,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726482194268567519,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name
: metrics-server-84c5f94fbc-9hj9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76382ab7-9b7a-4bd6-b19c-7a77ba051f1d,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e13f898473193beaaa81c09bb22096af279dabe70c03270874a90b0b9cc83f62,PodSandboxId:c90a44c7edea8c5d35e974be23b2851515f7b830d58597d0ada22367c338e1ab,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5d78bb8f226e8d943746243233f733db4e80a8d6794f6d193b12b811bcb6cd34,Stat
e:CONTAINER_RUNNING,CreatedAt:1726482187766689704,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-769b77f747-58ll2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 505d8619-5fc1-4247-af75-f797558c3d45,},Annotations:map[string]string{io.kubernetes.container.hash: 6472789b,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0731c5d88d35f1d8b6c88fee881cced713fd9e6231df44c4f03289b577fa75a,PodSandboxId:4cf262411fb7c78bef294b8304a442f15f122eba8e6330163e0f6001e8b44f4c,Metadata:&ContainerMetadata{Name:tiller,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945
d913b90b3ab43ee61cecb32a495c6df0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3f39089e90831c3ef411fe78d2ac642187b617feacacbf72e3f27e28c8dea487,State:CONTAINER_EXITED,CreatedAt:1726482181618422606,Labels:map[string]string{io.kubernetes.container.name: tiller,io.kubernetes.pod.name: tiller-deploy-b48cc5f79-b76fb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a96b112c-4171-4416-9e14-ac1f69fd033e,},Annotations:map[string]string{io.kubernetes.container.hash: b375e3d3,io.kubernetes.container.ports: [{\"name\":\"tiller\",\"containerPort\":44134,\"protocol\":\"TCP\"},{\"name\":\"http\",\"containerPort\":44135,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8193aad1beb5ba639149913d04736ec496c655637790c2e682d4920170661edc,PodSandboxId:f1a3772ce5f7d59b92df6e29b5a34b5ae5a
2567c67f359cff00143e415177028,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1726482169760510821,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10ccbaa1-333f-4586-a1d5-dc73421e2bd1,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePerio
d: 30,},},&Container{Id:20d2f3360f32056e9d5320e2e27b14e89f34121e47fe3ef6dc68434fd12cbe4e,PodSandboxId:748d363148f6690bdd4347da94f06b750dcd9dfc4c9051ba1fd2b1168589dce8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726482159684718183,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c435c6db-b60d-4298-9687-bb885202e358,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&C
ontainer{Id:63d270cbed8d9138d7dfb6c94fecf8a928065d08296ec4b210284e9c80e343ce,PodSandboxId:42b8586a7b29a8b8056b5615562d2ec92c1cdfbb2b00da19cfca2dc059f9128a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726482158120421871,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-j5ndn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 207f35d6-991e-4f00-8881-a877648e3c38,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.contai
ner.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60269ac0552c1882216cfc166f51b2cc05e0c33fccab3cf44f6f6ae889b5377c,PodSandboxId:2bf9dc368debd2d2b877480619135ca9093c981fce759082ec24f6592fb1de92,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726482154187294521,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-66flj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56e16daa-1626-4b83-a183-7b9ad90ea2d6,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes
.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aabe5cb48f97ae6d5205b827466f0ed18fcf88945331a69a0f69c36fdc69b84,PodSandboxId:f7aeaa11c7f4c7baee6c9befcf5929af3394ee8b7430c47ca172d58476ae75a5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726482142846546597,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-001438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be5914cb158890ae06c5bfa7a9a1647e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d34a4e3596c2106b54d8a84abfb811c307cb2971374422f5c532a60e0cde3a3,PodSandboxId:8a68216be6dee09413e6f31b3a7e5d5ca7545ffd899368560ace1f4086222cc9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726482142845031916,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-001438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6b0151eec9d570509290399a84fe5e0,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /d
ev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a4816dc33e761b9fdbc5948f09898492ce9a2dc128c281cf5085d61d7f1b237,PodSandboxId:ec134844260ab148c1de608f261069c5f8c5a6b0d909d7bbb7bff552dad3e112,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726482142832730800,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-001438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72c370391c8ab866a62dac47d3033ef8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfff5b2d379857237c75eff7ae9c54b1eaa410285bebed27cc82511c761eff77,PodSandboxId:81f095a38dae111be7d7787f448562b27561a6d73c96775e60b52e2cb77e1d9f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726482142844185577,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-001438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2bdc5cbd50bfb60b2aad0cb9a55828e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1a55d102-22d6-4cdb-8022-1d0f7cd52ca7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:27:01 addons-001438 crio[662]: time="2024-09-16 10:27:01.292713024Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=400656cb-48fa-4caa-9f88-cd61d0b10e24 name=/runtime.v1.RuntimeService/Version
	Sep 16 10:27:01 addons-001438 crio[662]: time="2024-09-16 10:27:01.292883770Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=400656cb-48fa-4caa-9f88-cd61d0b10e24 name=/runtime.v1.RuntimeService/Version
	Sep 16 10:27:01 addons-001438 crio[662]: time="2024-09-16 10:27:01.294620789Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=af9eeaca-cda2-4ab7-9b47-1ecf5aed8aa2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:27:01 addons-001438 crio[662]: time="2024-09-16 10:27:01.296295136Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482421296267979,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:458879,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=af9eeaca-cda2-4ab7-9b47-1ecf5aed8aa2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:27:01 addons-001438 crio[662]: time="2024-09-16 10:27:01.297025756Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0527ac42-ce30-408d-8c07-9e462f236411 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:27:01 addons-001438 crio[662]: time="2024-09-16 10:27:01.297119763Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0527ac42-ce30-408d-8c07-9e462f236411 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:27:01 addons-001438 crio[662]: time="2024-09-16 10:27:01.297985583Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:44134363b5c5efe09ae29ae4c7261f5f57e95ad84b0df54d22fab5c1a3cc278f,PodSandboxId:b1b6b74be962699d277a04b3a408931dda56ff790e89190b3b8c465fc1a1c89d,Metadata:&ContainerMetadata{Name:gadget,Attempt:4,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:195d612ae7722fdfec0d582d74fde7db062c1655b60737ceedb14cd627d0d601,State:CONTAINER_EXITED,CreatedAt:1726482411862310664,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-k7c7v,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a,},Annotations:map[string]string{io.kubernetes.container.hash: f1a4d1ab,io.kubernetes.container.preStopHandler: {\"exec\":{\
"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0c62d19fc341b10ebf89fe58a47a68881bae991a531961d93c1579a9a1948e7,PodSandboxId:81638f0641649ec0787ef703694bd258aa844caa3af58541d00d9715f3de0d35,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726482306176330528,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-jg5wz,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: dbfd65d1-83cc-4b88-b475-c822c7d77c41,},Annotations:
map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d9f00ee52087036fff363066f2b9b7bd54f75155cc73bd306533e31b8e233b0,PodSandboxId:f0a70a6b5b4fa0e23e9c8885ae050483438fd567740a20c3b0d6ee2a4c749755,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1726482304086493301,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-jhd
4w,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 59d22048-5f6a-4373-a1c6-05c111450f4a,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a4ff4f2e6c350478e5755d91b20d919662c0ffa06fb39e2bd3bf8bb0f703a1d2,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256
:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1726482280953449774,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa45fa1d889cdca320c55e35a415c7211e36ba0837572343c71988deaa5cd3ca,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.i
o/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1726482248718684378,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:112e37da6f1b0a97970a925ea58a6c98930b8e2763205c618d9b497089866b3f,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Ima
ge:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1726482247152202557,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcd9404de3e14cb23faa22a3b3e25edf2e86172ce56a3bd91b341e6072351d08,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Nam
e:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1726482246122565989,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26165c7625a62b7701736de7efb40460b0b4376f8e96e3bb63d74
4179813ade0,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1726482244231428730,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:35e24c1abefe708668b04ab175e1ce63dcba533760564485612dc5c532078e4c,PodSandboxId:bf02d50932f147d4c08135b62337a685daad58e7ee619fe769c71cf464997dc9,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1726482242446910373,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db26d555-4e0f-4738-bd80-a27dc57d7534,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePe
riod: 30,},},&Container{Id:a5edaf3e2dd3d6ea0a0beb93da0237f66936603e62e6799eb9d40f4bfced4fdc,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1726482240462952497,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8ebd2f050729c02f7f87eeb785883cafeb3d9537560352ff7a58f50b5630b1c,PodSandboxId:f375334740e2f500ee463bae436969a7464e2ab3cd1aa52a70e3a8f54f2ea408,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1726482238605647796,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15e8a432-87ee-461f-96ce-576b2587d960,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d52d2269e100287dd000b3139d6f09e70acad7cbe081d5ba345ffadc774eb64,PodSandboxId:6fe91ac2288fee7b74e48dab9ebacc7e6f4a0864d1442788fa1bb356cd32d99c,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726482237161655062,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-rls9n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 239fb0bb-898b-4d40-a29f-b0f8f4f52620,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54c4347a1fc2bc58f92b11d3bd442850d2707411d3f5b974124a86f641439b9c,PodSandboxId:d66b1317412a725710e3a74d876e0614ef523665d5d35e69bed8cd20d3e4c267,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726482236992762987,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-dk6l8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 117d6d47-b187-4394-8213-f294b01f8f2d,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0bde3324c47da870cb9ea5d5e391100b947a5f5628087ca822f0b968d5d7d10,PodSandboxId:0eef20d1c681337d85c6cbf7dbb64029a954a6306f78cde4d651b31122cd7f87,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726482204191942200,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-pv2sr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85f5bbdb-96af-4f7d-aef3-644db7166242,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f786c20ceffe35012b49cdfbc150466ebf04c05bab8d9736cd3977b8a655c898,PodSandboxId:ec33782f42717a00b375787866eb14bf4c1a21698df7f4343ab08f7eaff4773a,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726482204058066688,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-8nq94,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b65ff07-8e47-
4c5a-883c-f6470e930f61,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d997d75b48ee4660710828d3ee8c19e7e3e7c5b64f4d69cba04ce949ed38433e,PodSandboxId:173b48ab2ab7ffb543810df5a983063619ab4a5c57ad2582cb89cca04eabcc03,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1726482196486082416,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-rj67m,io.kubernetes.po
d.namespace: local-path-storage,io.kubernetes.pod.uid: 18c201dd-1709-499b-83f1-7f76075b6c19,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0024bbca27aaca342e5d8f82c4da128027c7976c1e62ed95ae9cf694c9f92bba,PodSandboxId:8bcb0a4a20a5a867b58a2f64346327b7ee4bbc21423c8ef47418372635518a90,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726482194268567519,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name
: metrics-server-84c5f94fbc-9hj9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76382ab7-9b7a-4bd6-b19c-7a77ba051f1d,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e13f898473193beaaa81c09bb22096af279dabe70c03270874a90b0b9cc83f62,PodSandboxId:c90a44c7edea8c5d35e974be23b2851515f7b830d58597d0ada22367c338e1ab,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5d78bb8f226e8d943746243233f733db4e80a8d6794f6d193b12b811bcb6cd34,Stat
e:CONTAINER_RUNNING,CreatedAt:1726482187766689704,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-769b77f747-58ll2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 505d8619-5fc1-4247-af75-f797558c3d45,},Annotations:map[string]string{io.kubernetes.container.hash: 6472789b,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0731c5d88d35f1d8b6c88fee881cced713fd9e6231df44c4f03289b577fa75a,PodSandboxId:4cf262411fb7c78bef294b8304a442f15f122eba8e6330163e0f6001e8b44f4c,Metadata:&ContainerMetadata{Name:tiller,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945
d913b90b3ab43ee61cecb32a495c6df0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3f39089e90831c3ef411fe78d2ac642187b617feacacbf72e3f27e28c8dea487,State:CONTAINER_EXITED,CreatedAt:1726482181618422606,Labels:map[string]string{io.kubernetes.container.name: tiller,io.kubernetes.pod.name: tiller-deploy-b48cc5f79-b76fb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a96b112c-4171-4416-9e14-ac1f69fd033e,},Annotations:map[string]string{io.kubernetes.container.hash: b375e3d3,io.kubernetes.container.ports: [{\"name\":\"tiller\",\"containerPort\":44134,\"protocol\":\"TCP\"},{\"name\":\"http\",\"containerPort\":44135,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8193aad1beb5ba639149913d04736ec496c655637790c2e682d4920170661edc,PodSandboxId:f1a3772ce5f7d59b92df6e29b5a34b5ae5a
2567c67f359cff00143e415177028,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1726482169760510821,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10ccbaa1-333f-4586-a1d5-dc73421e2bd1,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePerio
d: 30,},},&Container{Id:20d2f3360f32056e9d5320e2e27b14e89f34121e47fe3ef6dc68434fd12cbe4e,PodSandboxId:748d363148f6690bdd4347da94f06b750dcd9dfc4c9051ba1fd2b1168589dce8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726482159684718183,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c435c6db-b60d-4298-9687-bb885202e358,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&C
ontainer{Id:63d270cbed8d9138d7dfb6c94fecf8a928065d08296ec4b210284e9c80e343ce,PodSandboxId:42b8586a7b29a8b8056b5615562d2ec92c1cdfbb2b00da19cfca2dc059f9128a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726482158120421871,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-j5ndn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 207f35d6-991e-4f00-8881-a877648e3c38,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.contai
ner.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60269ac0552c1882216cfc166f51b2cc05e0c33fccab3cf44f6f6ae889b5377c,PodSandboxId:2bf9dc368debd2d2b877480619135ca9093c981fce759082ec24f6592fb1de92,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726482154187294521,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-66flj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56e16daa-1626-4b83-a183-7b9ad90ea2d6,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes
.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aabe5cb48f97ae6d5205b827466f0ed18fcf88945331a69a0f69c36fdc69b84,PodSandboxId:f7aeaa11c7f4c7baee6c9befcf5929af3394ee8b7430c47ca172d58476ae75a5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726482142846546597,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-001438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be5914cb158890ae06c5bfa7a9a1647e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d34a4e3596c2106b54d8a84abfb811c307cb2971374422f5c532a60e0cde3a3,PodSandboxId:8a68216be6dee09413e6f31b3a7e5d5ca7545ffd899368560ace1f4086222cc9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726482142845031916,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-001438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6b0151eec9d570509290399a84fe5e0,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /d
ev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a4816dc33e761b9fdbc5948f09898492ce9a2dc128c281cf5085d61d7f1b237,PodSandboxId:ec134844260ab148c1de608f261069c5f8c5a6b0d909d7bbb7bff552dad3e112,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726482142832730800,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-001438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72c370391c8ab866a62dac47d3033ef8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfff5b2d379857237c75eff7ae9c54b1eaa410285bebed27cc82511c761eff77,PodSandboxId:81f095a38dae111be7d7787f448562b27561a6d73c96775e60b52e2cb77e1d9f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726482142844185577,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-001438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2bdc5cbd50bfb60b2aad0cb9a55828e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0527ac42-ce30-408d-8c07-9e462f236411 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	44134363b5c5e       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec                            9 seconds ago        Exited              gadget                                   4                   b1b6b74be9626       gadget-k7c7v
	c0c62d19fc341       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                                 About a minute ago   Running             gcp-auth                                 0                   81638f0641649       gcp-auth-89d5ffd79-jg5wz
	4d9f00ee52087       registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6                             About a minute ago   Running             controller                               0                   f0a70a6b5b4fa       ingress-nginx-controller-bc57996ff-jhd4w
	a4ff4f2e6c350       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          2 minutes ago        Running             csi-snapshotter                          0                   2a34d4424d09e       csi-hostpathplugin-xgk62
	fa45fa1d889cd       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          2 minutes ago        Running             csi-provisioner                          0                   2a34d4424d09e       csi-hostpathplugin-xgk62
	112e37da6f1b0       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            2 minutes ago        Running             liveness-probe                           0                   2a34d4424d09e       csi-hostpathplugin-xgk62
	bcd9404de3e14       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           2 minutes ago        Running             hostpath                                 0                   2a34d4424d09e       csi-hostpathplugin-xgk62
	26165c7625a62       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                2 minutes ago        Running             node-driver-registrar                    0                   2a34d4424d09e       csi-hostpathplugin-xgk62
	35e24c1abefe7       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              2 minutes ago        Running             csi-resizer                              0                   bf02d50932f14       csi-hostpath-resizer-0
	a5edaf3e2dd3d       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   3 minutes ago        Running             csi-external-health-monitor-controller   0                   2a34d4424d09e       csi-hostpathplugin-xgk62
	b8ebd2f050729       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             3 minutes ago        Running             csi-attacher                             0                   f375334740e2f       csi-hostpath-attacher-0
	0d52d2269e100       ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242                                                                             3 minutes ago        Exited              patch                                    1                   6fe91ac2288fe       ingress-nginx-admission-patch-rls9n
	54c4347a1fc2b       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012                   3 minutes ago        Exited              create                                   0                   d66b1317412a7       ingress-nginx-admission-create-dk6l8
	f0bde3324c47d       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago        Running             volume-snapshot-controller               0                   0eef20d1c6813       snapshot-controller-56fcc65765-pv2sr
	f786c20ceffe3       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago        Running             volume-snapshot-controller               0                   ec33782f42717       snapshot-controller-56fcc65765-8nq94
	d997d75b48ee4       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             3 minutes ago        Running             local-path-provisioner                   0                   173b48ab2ab7f       local-path-provisioner-86d989889c-rj67m
	0024bbca27aac       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a                        3 minutes ago        Running             metrics-server                           0                   8bcb0a4a20a5a       metrics-server-84c5f94fbc-9hj9f
	e13f898473193       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc                               3 minutes ago        Running             cloud-spanner-emulator                   0                   c90a44c7edea8       cloud-spanner-emulator-769b77f747-58ll2
	a0731c5d88d35       ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f                                                  3 minutes ago        Exited              tiller                                   0                   4cf262411fb7c       tiller-deploy-b48cc5f79-b76fb
	8193aad1beb5b       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab                             4 minutes ago        Running             minikube-ingress-dns                     0                   f1a3772ce5f7d       kube-ingress-dns-minikube
	20d2f3360f320       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             4 minutes ago        Running             storage-provisioner                      0                   748d363148f66       storage-provisioner
	63d270cbed8d9       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                                             4 minutes ago        Running             coredns                                  0                   42b8586a7b29a       coredns-7c65d6cfc9-j5ndn
	60269ac0552c1       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                                             4 minutes ago        Running             kube-proxy                               0                   2bf9dc368debd       kube-proxy-66flj
	1aabe5cb48f97       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                                             4 minutes ago        Running             etcd                                     0                   f7aeaa11c7f4c       etcd-addons-001438
	2d34a4e3596c2       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                                             4 minutes ago        Running             kube-controller-manager                  0                   8a68216be6dee       kube-controller-manager-addons-001438
	bfff5b2d37985       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                                             4 minutes ago        Running             kube-apiserver                           0                   81f095a38dae1       kube-apiserver-addons-001438
	5a4816dc33e76       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                                             4 minutes ago        Running             kube-scheduler                           0                   ec134844260ab       kube-scheduler-addons-001438
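The table above is the CRI-level container listing collected from the node. A minimal, illustrative sketch of pulling the same data by hand (assuming crictl is available on the node, as it normally is in minikube's CRI-O images, and that the profile name matches the cluster name shown in this report):

  # Open a shell on the node for this profile.
  minikube ssh -p addons-001438

  # List all containers known to CRI-O, including exited ones (same data as the table above).
  sudo crictl ps -a

  # Pull logs for one container by ID; crictl accepts an unambiguous ID prefix,
  # e.g. the exited gadget container from the first row of the table.
  sudo crictl logs 44134363b5c5e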
	
	
	==> coredns [63d270cbed8d9138d7dfb6c94fecf8a928065d08296ec4b210284e9c80e343ce] <==
	[INFO] 127.0.0.1:32820 - 49588 "HINFO IN 5683833228926934535.5808779734602365342. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.027869673s
	[INFO] 10.244.0.7:47242 - 15842 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000350783s
	[INFO] 10.244.0.7:47242 - 29412 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000155576s
	[INFO] 10.244.0.7:51495 - 23321 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000115255s
	[INFO] 10.244.0.7:51495 - 47135 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000085371s
	[INFO] 10.244.0.7:40689 - 10301 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000114089s
	[INFO] 10.244.0.7:40689 - 30779 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00011843s
	[INFO] 10.244.0.7:53526 - 19539 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000127604s
	[INFO] 10.244.0.7:53526 - 34381 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000109337s
	[INFO] 10.244.0.7:39182 - 43658 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000075802s
	[INFO] 10.244.0.7:39182 - 55433 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000031766s
	[INFO] 10.244.0.7:52628 - 35000 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000037386s
	[INFO] 10.244.0.7:52628 - 44218 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000027585s
	[INFO] 10.244.0.7:47656 - 61837 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000028204s
	[INFO] 10.244.0.7:47656 - 39571 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000027731s
	[INFO] 10.244.0.7:53964 - 36235 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000098663s
	[INFO] 10.244.0.7:53964 - 55690 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000045022s
	[INFO] 10.244.0.22:49146 - 11336 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000543634s
	[INFO] 10.244.0.22:44900 - 51750 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000125434s
	[INFO] 10.244.0.22:47266 - 27362 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000158517s
	[INFO] 10.244.0.22:53077 - 63050 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000068888s
	[INFO] 10.244.0.22:52796 - 34381 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000101059s
	[INFO] 10.244.0.22:52167 - 15594 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000126468s
	[INFO] 10.244.0.22:42107 - 54869 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.004149176s
	[INFO] 10.244.0.22:60865 - 20616 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.006078154s
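The registry lookups above are expanded through the pod's DNS search path first (hence the chain of NXDOMAIN answers for registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local and the shorter suffixes) before the fully qualified name returns NOERROR. A minimal sketch of reproducing a lookup from inside the cluster, assuming a throwaway busybox pod is acceptable here (the image tag is illustrative, and busybox's resolver behavior varies slightly by version):

  # Resolve the registry service from a one-off pod; the query then appears in the coredns log.
  kubectl --context addons-001438 run dns-probe --rm -it --restart=Never \
    --image=busybox:1.36 -- nslookup registry.kube-system.svc.cluster.local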
	
	
	==> describe nodes <==
	Name:               addons-001438
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-001438
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=addons-001438
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_22_28_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-001438
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-001438"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:22:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-001438
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:26:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:26:02 +0000   Mon, 16 Sep 2024 10:22:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:26:02 +0000   Mon, 16 Sep 2024 10:22:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:26:02 +0000   Mon, 16 Sep 2024 10:22:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:26:02 +0000   Mon, 16 Sep 2024 10:22:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.72
	  Hostname:    addons-001438
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 8b69a913a20a4259950d0bf801229c28
	  System UUID:                8b69a913-a20a-4259-950d-0bf801229c28
	  Boot ID:                    7d21de27-dd4e-4002-9fc0-df14a0ff761f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (20 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-769b77f747-58ll2     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m25s
	  gadget                      gadget-k7c7v                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	  gcp-auth                    gcp-auth-89d5ffd79-jg5wz                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-jhd4w    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m21s
	  kube-system                 coredns-7c65d6cfc9-j5ndn                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m28s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 csi-hostpathplugin-xgk62                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 etcd-addons-001438                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m34s
	  kube-system                 kube-apiserver-addons-001438                250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m34s
	  kube-system                 kube-controller-manager-addons-001438       200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m34s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m25s
	  kube-system                 kube-proxy-66flj                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m29s
	  kube-system                 kube-scheduler-addons-001438                100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m34s
	  kube-system                 metrics-server-84c5f94fbc-9hj9f             100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         4m23s
	  kube-system                 snapshot-controller-56fcc65765-8nq94        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 snapshot-controller-56fcc65765-pv2sr        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
	  local-path-storage          local-path-provisioner-86d989889c-rj67m     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m22s
	  yakd-dashboard              yakd-dashboard-67d98fc6b-jnpkm              0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     4m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             588Mi (15%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m25s  kube-proxy       
	  Normal  Starting                 4m34s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m34s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m33s  kubelet          Node addons-001438 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m33s  kubelet          Node addons-001438 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m33s  kubelet          Node addons-001438 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m32s  kubelet          Node addons-001438 status is now: NodeReady
	  Normal  RegisteredNode           4m29s  node-controller  Node addons-001438 event: Registered Node addons-001438 in Controller
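The node summary above (labels, capacity, allocatable, non-terminated pods, and events) can be regenerated on demand; a minimal sketch, assuming a working kubectl and the context name used throughout this report:

  # Re-generate the node description shown above.
  kubectl --context addons-001438 describe node addons-001438

  # The same allocatable figures in machine-readable form.
  kubectl --context addons-001438 get node addons-001438 -o jsonpath='{.status.allocatable}'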
	
	
	==> dmesg <==
	[  +0.116289] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.270363] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +4.002627] systemd-fstab-generator[744]: Ignoring "noauto" option for root device
	[  +4.196359] systemd-fstab-generator[862]: Ignoring "noauto" option for root device
	[  +0.061696] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.999876] systemd-fstab-generator[1193]: Ignoring "noauto" option for root device
	[  +0.091472] kauditd_printk_skb: 69 callbacks suppressed
	[  +4.774952] systemd-fstab-generator[1325]: Ignoring "noauto" option for root device
	[  +1.497885] kauditd_printk_skb: 43 callbacks suppressed
	[  +5.466780] kauditd_printk_skb: 125 callbacks suppressed
	[  +5.018877] kauditd_printk_skb: 136 callbacks suppressed
	[  +5.254117] kauditd_printk_skb: 38 callbacks suppressed
	[Sep16 10:23] kauditd_printk_skb: 9 callbacks suppressed
	[ +17.876932] kauditd_printk_skb: 7 callbacks suppressed
	[ +33.888489] kauditd_printk_skb: 37 callbacks suppressed
	[Sep16 10:24] kauditd_printk_skb: 34 callbacks suppressed
	[  +6.263650] kauditd_printk_skb: 76 callbacks suppressed
	[ +48.109785] kauditd_printk_skb: 33 callbacks suppressed
	[Sep16 10:25] kauditd_printk_skb: 3 callbacks suppressed
	[  +7.297596] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.818881] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.121137] kauditd_printk_skb: 19 callbacks suppressed
	[ +29.616490] kauditd_printk_skb: 37 callbacks suppressed
	[Sep16 10:26] kauditd_printk_skb: 2 callbacks suppressed
	[  +8.276540] kauditd_printk_skb: 28 callbacks suppressed
	
	
	==> etcd [1aabe5cb48f97ae6d5205b827466f0ed18fcf88945331a69a0f69c36fdc69b84] <==
	{"level":"info","ts":"2024-09-16T10:25:01.423722Z","caller":"traceutil/trace.go:171","msg":"trace[1526018823] transaction","detail":"{read_only:false; response_revision:1249; number_of_response:1; }","duration":"284.258855ms","start":"2024-09-16T10:25:01.139452Z","end":"2024-09-16T10:25:01.423711Z","steps":["trace[1526018823] 'process raft request'  (duration: 284.165558ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:25:01.424593Z","caller":"traceutil/trace.go:171","msg":"trace[1620023283] linearizableReadLoop","detail":"{readStateIndex:1296; appliedIndex:1296; }","duration":"253.838283ms","start":"2024-09-16T10:25:01.170745Z","end":"2024-09-16T10:25:01.424583Z","steps":["trace[1620023283] 'read index received'  (duration: 253.835456ms)","trace[1620023283] 'applied index is now lower than readState.Index'  (duration: 2.263µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-16T10:25:01.424681Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"253.948565ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T10:25:01.424719Z","caller":"traceutil/trace.go:171","msg":"trace[1658095100] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1249; }","duration":"253.992891ms","start":"2024-09-16T10:25:01.170719Z","end":"2024-09-16T10:25:01.424712Z","steps":["trace[1658095100] 'agreement among raft nodes before linearized reading'  (duration: 253.933158ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:25:01.430878Z","caller":"traceutil/trace.go:171","msg":"trace[196824448] transaction","detail":"{read_only:false; response_revision:1250; number_of_response:1; }","duration":"219.615242ms","start":"2024-09-16T10:25:01.211190Z","end":"2024-09-16T10:25:01.430805Z","steps":["trace[196824448] 'process raft request'  (duration: 217.799649ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:25:01.432286Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"248.218738ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T10:25:01.432549Z","caller":"traceutil/trace.go:171","msg":"trace[1250515915] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1250; }","duration":"248.433899ms","start":"2024-09-16T10:25:01.183901Z","end":"2024-09-16T10:25:01.432335Z","steps":["trace[1250515915] 'agreement among raft nodes before linearized reading'  (duration: 246.789324ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:25:03.917472Z","caller":"traceutil/trace.go:171","msg":"trace[1132617141] linearizableReadLoop","detail":"{readStateIndex:1302; appliedIndex:1301; }","duration":"256.411132ms","start":"2024-09-16T10:25:03.661047Z","end":"2024-09-16T10:25:03.917458Z","steps":["trace[1132617141] 'read index received'  (duration: 256.216658ms)","trace[1132617141] 'applied index is now lower than readState.Index'  (duration: 193.873µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-16T10:25:03.917646Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"256.564415ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshots/\" range_end:\"/registry/snapshot.storage.k8s.io/volumesnapshots0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T10:25:03.917689Z","caller":"traceutil/trace.go:171","msg":"trace[1681803764] range","detail":"{range_begin:/registry/snapshot.storage.k8s.io/volumesnapshots/; range_end:/registry/snapshot.storage.k8s.io/volumesnapshots0; response_count:0; response_revision:1254; }","duration":"256.635309ms","start":"2024-09-16T10:25:03.661043Z","end":"2024-09-16T10:25:03.917678Z","steps":["trace[1681803764] 'agreement among raft nodes before linearized reading'  (duration: 256.524591ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:25:03.917698Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"246.498369ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T10:25:03.917721Z","caller":"traceutil/trace.go:171","msg":"trace[320039730] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1254; }","duration":"246.52737ms","start":"2024-09-16T10:25:03.671187Z","end":"2024-09-16T10:25:03.917715Z","steps":["trace[320039730] 'agreement among raft nodes before linearized reading'  (duration: 246.484981ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:25:03.917824Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"234.395252ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T10:25:03.917834Z","caller":"traceutil/trace.go:171","msg":"trace[699037525] transaction","detail":"{read_only:false; response_revision:1254; number_of_response:1; }","duration":"461.96825ms","start":"2024-09-16T10:25:03.455860Z","end":"2024-09-16T10:25:03.917828Z","steps":["trace[699037525] 'process raft request'  (duration: 461.454179ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:25:03.917838Z","caller":"traceutil/trace.go:171","msg":"trace[618256897] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1254; }","duration":"234.40851ms","start":"2024-09-16T10:25:03.683425Z","end":"2024-09-16T10:25:03.917833Z","steps":["trace[618256897] 'agreement among raft nodes before linearized reading'  (duration: 234.386479ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:25:03.917919Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-16T10:25:03.455845Z","time spent":"462.003063ms","remote":"127.0.0.1:51374","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1251 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-09-16T10:25:42.523876Z","caller":"traceutil/trace.go:171","msg":"trace[565706559] transaction","detail":"{read_only:false; response_revision:1399; number_of_response:1; }","duration":"393.956218ms","start":"2024-09-16T10:25:42.129887Z","end":"2024-09-16T10:25:42.523844Z","steps":["trace[565706559] 'process raft request'  (duration: 393.821788ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:25:42.524080Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-16T10:25:42.129864Z","time spent":"394.119545ms","remote":"127.0.0.1:51374","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1398 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-09-16T10:25:42.533976Z","caller":"traceutil/trace.go:171","msg":"trace[668376333] linearizableReadLoop","detail":"{readStateIndex:1459; appliedIndex:1458; }","duration":"302.69985ms","start":"2024-09-16T10:25:42.231262Z","end":"2024-09-16T10:25:42.533962Z","steps":["trace[668376333] 'read index received'  (duration: 293.491454ms)","trace[668376333] 'applied index is now lower than readState.Index'  (duration: 9.207628ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-16T10:25:42.535969Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"205.605451ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" ","response":"range_response_count:1 size:553"}
	{"level":"info","ts":"2024-09-16T10:25:42.536065Z","caller":"traceutil/trace.go:171","msg":"trace[19888550] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1400; }","duration":"205.726154ms","start":"2024-09-16T10:25:42.330329Z","end":"2024-09-16T10:25:42.536056Z","steps":["trace[19888550] 'agreement among raft nodes before linearized reading'  (duration: 205.527055ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:25:42.536191Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"304.924785ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T10:25:42.536244Z","caller":"traceutil/trace.go:171","msg":"trace[1740705082] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1400; }","duration":"304.971706ms","start":"2024-09-16T10:25:42.231257Z","end":"2024-09-16T10:25:42.536228Z","steps":["trace[1740705082] 'agreement among raft nodes before linearized reading'  (duration: 304.915956ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:25:42.537030Z","caller":"traceutil/trace.go:171","msg":"trace[778126279] transaction","detail":"{read_only:false; response_revision:1400; number_of_response:1; }","duration":"337.225123ms","start":"2024-09-16T10:25:42.199749Z","end":"2024-09-16T10:25:42.536974Z","steps":["trace[778126279] 'process raft request'  (duration: 333.931313ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:25:42.537228Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-16T10:25:42.199733Z","time spent":"337.391985ms","remote":"127.0.0.1:51498","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":540,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/addons-001438\" mod_revision:1384 > success:<request_put:<key:\"/registry/leases/kube-node-lease/addons-001438\" value_size:486 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/addons-001438\" > >"}
	
	
	==> gcp-auth [c0c62d19fc341b10ebf89fe58a47a68881bae991a531961d93c1579a9a1948e7] <==
	2024/09/16 10:25:06 GCP Auth Webhook started!
	2024/09/16 10:25:09 Ready to marshal response ...
	2024/09/16 10:25:09 Ready to write response ...
	2024/09/16 10:25:09 Ready to marshal response ...
	2024/09/16 10:25:09 Ready to write response ...
	2024/09/16 10:25:09 Ready to marshal response ...
	2024/09/16 10:25:09 Ready to write response ...
	
	
	==> kernel <==
	 10:27:01 up 5 min,  0 users,  load average: 0.77, 0.91, 0.47
	Linux addons-001438 5.10.207 #1 SMP Sun Sep 15 20:39:46 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [bfff5b2d379857237c75eff7ae9c54b1eaa410285bebed27cc82511c761eff77] <==
	I0916 10:22:40.795031       1 alloc.go:330] "allocated clusterIPs" service="ingress-nginx/ingress-nginx-controller" clusterIPs={"IPv4":"10.108.13.142"}
	I0916 10:22:40.844880       1 alloc.go:330] "allocated clusterIPs" service="ingress-nginx/ingress-nginx-controller-admission" clusterIPs={"IPv4":"10.102.39.17"}
	I0916 10:22:40.932409       1 controller.go:615] quota admission added evaluator for: jobs.batch
	I0916 10:22:42.426039       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-attacher" clusterIPs={"IPv4":"10.106.146.100"}
	I0916 10:22:42.456409       1 controller.go:615] quota admission added evaluator for: statefulsets.apps
	I0916 10:22:42.660969       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.110.102.193"}
	I0916 10:22:44.945009       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.106.134.141"}
	W0916 10:23:38.948410       1 handler_proxy.go:99] no RequestInfo found in the context
	E0916 10:23:38.948711       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0916 10:23:38.949896       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0916 10:23:38.958493       1 handler_proxy.go:99] no RequestInfo found in the context
	E0916 10:23:38.958543       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0916 10:23:38.959752       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0916 10:24:18.395108       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.30.150:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.99.30.150:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.99.30.150:443: connect: connection refused" logger="UnhandledError"
	W0916 10:24:18.395300       1 handler_proxy.go:99] no RequestInfo found in the context
	E0916 10:24:18.395442       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0916 10:24:18.398244       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.30.150:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.99.30.150:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.99.30.150:443: connect: connection refused" logger="UnhandledError"
	I0916 10:24:18.453414       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0916 10:25:09.633337       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.110.80.80"}
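The aggregation errors above show the v1beta1.metrics.k8s.io APIService returning 503 and connection refused while the metrics-server endpoint was not yet reachable, before the aggregator requeues it and re-adds the group version. A minimal sketch of checking that aggregated API directly, assuming a working kubectl against this context:

  # Availability condition the aggregator currently reports for metrics-server.
  kubectl --context addons-001438 get apiservice v1beta1.metrics.k8s.io

  # Once the APIService reports Available=True, node metrics should be served.
  kubectl --context addons-001438 top nodes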
	
	
	==> kube-controller-manager [2d34a4e3596c2106b54d8a84abfb811c307cb2971374422f5c532a60e0cde3a3] <==
	I0916 10:24:53.864819       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-67d98fc6b" duration="57.647µs"
	I0916 10:25:01.430275       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-001438"
	I0916 10:25:04.459017       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="97.149µs"
	I0916 10:25:06.488269       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="13.118642ms"
	I0916 10:25:06.489287       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="42.711µs"
	I0916 10:25:07.863123       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-67d98fc6b" duration="72.138µs"
	I0916 10:25:09.687063       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="25.765664ms"
	E0916 10:25:09.687144       1 replica_set.go:560] "Unhandled Error" err="sync \"headlamp/headlamp-57fb76fcdb\" failed with pods \"headlamp-57fb76fcdb-\" is forbidden: error looking up service account headlamp/headlamp: serviceaccount \"headlamp\" not found" logger="UnhandledError"
	I0916 10:25:09.731163       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="42.235103ms"
	I0916 10:25:09.753608       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="22.282725ms"
	I0916 10:25:09.753862       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="122.927µs"
	I0916 10:25:09.762905       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="42.16µs"
	I0916 10:25:16.878158       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="16.26286ms"
	I0916 10:25:16.878254       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="50.754µs"
	I0916 10:25:19.390322       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="3.132µs"
	I0916 10:25:32.259505       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-001438"
	I0916 10:25:42.895965       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="3.388638ms"
	I0916 10:25:42.934221       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="14.56657ms"
	I0916 10:25:42.935951       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="80.433µs"
	I0916 10:25:50.249420       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="66.204µs"
	I0916 10:25:52.859393       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-67d98fc6b" duration="64.229µs"
	I0916 10:26:00.384466       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	I0916 10:26:02.877788       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-001438"
	I0916 10:26:05.861778       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-67d98fc6b" duration="51.109µs"
	I0916 10:27:00.169838       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/tiller-deploy-b48cc5f79" duration="5.547µs"
	
	
	==> kube-proxy [60269ac0552c1882216cfc166f51b2cc05e0c33fccab3cf44f6f6ae889b5377c] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0916 10:22:35.282699       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0916 10:22:35.409784       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.72"]
	E0916 10:22:35.409847       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:22:36.135283       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0916 10:22:36.135476       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0916 10:22:36.135545       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:22:36.146626       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:22:36.146849       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:22:36.146861       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:22:36.156579       1 config.go:199] "Starting service config controller"
	I0916 10:22:36.156604       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:22:36.166809       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:22:36.166838       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:22:36.168180       1 config.go:328] "Starting node config controller"
	I0916 10:22:36.168189       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:22:36.258515       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:22:36.268518       1 shared_informer.go:320] Caches are synced for node config
	I0916 10:22:36.268639       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [5a4816dc33e761b9fdbc5948f09898492ce9a2dc128c281cf5085d61d7f1b237] <==
	W0916 10:22:25.363221       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 10:22:25.363254       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:22:25.363389       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0916 10:22:25.363420       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 10:22:25.363573       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0916 10:22:25.363425       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:22:25.363533       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 10:22:25.363941       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:22:26.174422       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 10:22:26.174473       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:22:26.225213       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 10:22:26.225308       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:22:26.333904       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 10:22:26.333957       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:22:26.350221       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 10:22:26.350326       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:22:26.406843       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 10:22:26.406982       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:22:26.446248       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 10:22:26.446395       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:22:26.547116       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 10:22:26.547206       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:22:26.704254       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 10:22:26.704303       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0916 10:22:28.953769       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 10:26:28 addons-001438 kubelet[1200]: E0916 10:26:28.142484    1200 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482388141952351,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:458879,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:26:36 addons-001438 kubelet[1200]: I0916 10:26:36.839423    1200 scope.go:117] "RemoveContainer" containerID="2a9fb3cbc254187b99a934d47f8ee9fa5bde5ffb2f1bfb54562c87bfb44d4626"
	Sep 16 10:26:38 addons-001438 kubelet[1200]: E0916 10:26:38.145655    1200 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482398145093209,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:458879,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:26:38 addons-001438 kubelet[1200]: E0916 10:26:38.145702    1200 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482398145093209,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:458879,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:26:48 addons-001438 kubelet[1200]: E0916 10:26:48.148465    1200 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482408148059396,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:458879,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:26:48 addons-001438 kubelet[1200]: E0916 10:26:48.148505    1200 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482408148059396,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:458879,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:26:51 addons-001438 kubelet[1200]: E0916 10:26:51.522071    1200 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624 in docker.io/marcnuri/yakd: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624"
	Sep 16 10:26:51 addons-001438 kubelet[1200]: E0916 10:26:51.522458    1200 kuberuntime_image.go:55] "Failed to pull image" err="reading manifest sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624 in docker.io/marcnuri/yakd: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624"
	Sep 16 10:26:51 addons-001438 kubelet[1200]: E0916 10:26:51.522932    1200 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:yakd,Image:docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KUBERNETES_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:HOSTNAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{memory: {{268435456 0} {<nil>}  BinarySI},},Requests:ResourceList{memory: {{134217728 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vp6hl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*false,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*2001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod yakd-dashboard-67d98fc6b-jnpkm_yakd-dashboard(7d5fb34e-a0b6-4b26-9fd6-2ecc1ecc3981): ErrImagePull: reading manifest sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624 in docker.io/marcnuri/yakd: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 16 10:26:51 addons-001438 kubelet[1200]: E0916 10:26:51.524522    1200 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"yakd\" with ErrImagePull: \"reading manifest sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624 in docker.io/marcnuri/yakd: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="yakd-dashboard/yakd-dashboard-67d98fc6b-jnpkm" podUID="7d5fb34e-a0b6-4b26-9fd6-2ecc1ecc3981"
	Sep 16 10:26:51 addons-001438 kubelet[1200]: I0916 10:26:51.839641    1200 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/coredns-7c65d6cfc9-j5ndn" secret="" err="secret \"gcp-auth\" not found"
	Sep 16 10:26:53 addons-001438 kubelet[1200]: I0916 10:26:53.338041    1200 scope.go:117] "RemoveContainer" containerID="2a9fb3cbc254187b99a934d47f8ee9fa5bde5ffb2f1bfb54562c87bfb44d4626"
	Sep 16 10:26:53 addons-001438 kubelet[1200]: I0916 10:26:53.338449    1200 scope.go:117] "RemoveContainer" containerID="44134363b5c5efe09ae29ae4c7261f5f57e95ad84b0df54d22fab5c1a3cc278f"
	Sep 16 10:26:53 addons-001438 kubelet[1200]: E0916 10:26:53.338595    1200 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=gadget pod=gadget-k7c7v_gadget(fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a)\"" pod="gadget/gadget-k7c7v" podUID="fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a"
	Sep 16 10:26:54 addons-001438 kubelet[1200]: I0916 10:26:54.345778    1200 scope.go:117] "RemoveContainer" containerID="44134363b5c5efe09ae29ae4c7261f5f57e95ad84b0df54d22fab5c1a3cc278f"
	Sep 16 10:26:54 addons-001438 kubelet[1200]: E0916 10:26:54.345955    1200 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=gadget pod=gadget-k7c7v_gadget(fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a)\"" pod="gadget/gadget-k7c7v" podUID="fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a"
	Sep 16 10:26:55 addons-001438 kubelet[1200]: I0916 10:26:55.791839    1200 scope.go:117] "RemoveContainer" containerID="44134363b5c5efe09ae29ae4c7261f5f57e95ad84b0df54d22fab5c1a3cc278f"
	Sep 16 10:26:55 addons-001438 kubelet[1200]: E0916 10:26:55.792027    1200 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=gadget pod=gadget-k7c7v_gadget(fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a)\"" pod="gadget/gadget-k7c7v" podUID="fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a"
	Sep 16 10:26:58 addons-001438 kubelet[1200]: E0916 10:26:58.153031    1200 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482418152229825,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:458879,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:26:58 addons-001438 kubelet[1200]: E0916 10:26:58.153075    1200 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482418152229825,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:458879,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:27:00 addons-001438 kubelet[1200]: I0916 10:27:00.618609    1200 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rjq2\" (UniqueName: \"kubernetes.io/projected/a96b112c-4171-4416-9e14-ac1f69fd033e-kube-api-access-8rjq2\") pod \"a96b112c-4171-4416-9e14-ac1f69fd033e\" (UID: \"a96b112c-4171-4416-9e14-ac1f69fd033e\") "
	Sep 16 10:27:00 addons-001438 kubelet[1200]: I0916 10:27:00.620842    1200 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a96b112c-4171-4416-9e14-ac1f69fd033e-kube-api-access-8rjq2" (OuterVolumeSpecName: "kube-api-access-8rjq2") pod "a96b112c-4171-4416-9e14-ac1f69fd033e" (UID: "a96b112c-4171-4416-9e14-ac1f69fd033e"). InnerVolumeSpecName "kube-api-access-8rjq2". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 16 10:27:00 addons-001438 kubelet[1200]: I0916 10:27:00.720040    1200 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-8rjq2\" (UniqueName: \"kubernetes.io/projected/a96b112c-4171-4416-9e14-ac1f69fd033e-kube-api-access-8rjq2\") on node \"addons-001438\" DevicePath \"\""
	Sep 16 10:27:01 addons-001438 kubelet[1200]: I0916 10:27:01.393300    1200 scope.go:117] "RemoveContainer" containerID="a0731c5d88d35f1d8b6c88fee881cced713fd9e6231df44c4f03289b577fa75a"
	Sep 16 10:27:01 addons-001438 kubelet[1200]: I0916 10:27:01.844086    1200 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a96b112c-4171-4416-9e14-ac1f69fd033e" path="/var/lib/kubelet/pods/a96b112c-4171-4416-9e14-ac1f69fd033e/volumes"
	
	
	==> storage-provisioner [20d2f3360f32056e9d5320e2e27b14e89f34121e47fe3ef6dc68434fd12cbe4e] <==
	I0916 10:22:41.307950       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 10:22:41.369058       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 10:22:41.369154       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 10:22:41.391597       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 10:22:41.391782       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-001438_7c863089-ab00-4bae-802b-9e04d7461e0b!
	I0916 10:22:41.394290       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"97b3cde4-08a8-47d7-a9cc-7251679ab4d1", APIVersion:"v1", ResourceVersion:"712", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-001438_7c863089-ab00-4bae-802b-9e04d7461e0b became leader
	I0916 10:22:41.492688       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-001438_7c863089-ab00-4bae-802b-9e04d7461e0b!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-001438 -n addons-001438
helpers_test.go:261: (dbg) Run:  kubectl --context addons-001438 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context addons-001438 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (396.638µs)
helpers_test.go:263: kubectl --context addons-001438 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestAddons/parallel/HelmTiller (100.79s)
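Note: the recurring "fork/exec /usr/local/bin/kubectl: exec format error" in these failures is the error string os/exec produces when the kernel refuses to execute a binary (ENOEXEC), typically because the kubectl binary on disk was built for a different CPU architecture than the test host. The following is a minimal Go sketch, not part of the minikube test harness; the kubectl path is taken from the log above, and the "version --client" arguments are assumed purely for illustration.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Run the same kubectl binary the report points at. With a binary built
	// for the wrong architecture, err formats exactly as seen in the log:
	//   fork/exec /usr/local/bin/kubectl: exec format error
	out, err := exec.Command("/usr/local/bin/kubectl", "version", "--client").CombinedOutput()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Print(string(out))
}

Inspecting the binary type on the host (for example with "file /usr/local/bin/kubectl") would confirm or rule out such a mismatch; that check is a suggestion, not something the report itself performs.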

                                                
                                    
x
+
TestAddons/parallel/CSI (362.04s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 14.903377ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-001438 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:570: (dbg) Non-zero exit: kubectl --context addons-001438 create -f testdata/csi-hostpath-driver/pvc.yaml: fork/exec /usr/local/bin/kubectl: exec format error (396.955µs)
addons_test.go:572: creating sample PVC with kubectl --context addons-001438 create -f testdata/csi-hostpath-driver/pvc.yaml failed: fork/exec /usr/local/bin/kubectl: exec format error
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (292.312µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (384.931µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (410.956µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (389.112µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (383.853µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (487.793µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (362.22µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (419.912µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (423.693µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (381.36µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (461.692µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (426.778µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (381.781µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (464.222µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (348.688µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (497.377µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (388.342µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (383.517µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (472.167µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (354.735µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (378.224µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (433.154µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (442.445µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (410.533µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (411.468µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (433.518µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (377.449µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (422.313µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (488.121µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (472.386µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (385.444µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (499.584µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (371.669µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (441.731µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (14.568822ms)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (435.757µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (376.843µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (408.272µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (368.241µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (405.234µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (411.361µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (396.916µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (370.217µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (363.995µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (450.824µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (458.04µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (378.362µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (381.838µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (432.113µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (392.914µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (393.243µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (382.175µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (454.621µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (459.093µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (368.95µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (406.78µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (406.431µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (419.759µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (397.058µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (475.032µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (449.498µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (366.803µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (464.468µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (356.856µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (427.901µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (449.242µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (440.567µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (467.072µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (403.857µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (455.37µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (398.827µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (459.717µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (414.464µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (385.864µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (476.205µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (373.288µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (424.411µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (387.897µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (451.312µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (409.274µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (394.114µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (394.005µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (458.032µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (466.117µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (409.123µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (469.058µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (418.856µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (457.246µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (412.579µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (363.501µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (15.373194ms)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (412.115µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (456.738µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (416.248µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (389.685µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (400.086µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (385.548µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (462.155µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (453.837µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (410.427µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (440.927µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (449.757µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (402.445µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (396.077µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (379.062µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (442.784µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (440.242µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (461.864µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (398.228µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (414.626µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (429.064µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (445.399µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (380.315µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (399.612µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (882.616µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (394.282µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (405.466µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (498.949µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (439.229µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (475.121µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (385.48µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (401.492µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (511.162µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (450.718µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (434.057µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (409.493µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (445.827µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (429.877µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (415.457µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (432.913µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (476.716µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (436.285µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (422.798µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (386.374µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (419.934µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (420.995µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (438.052µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (380.465µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (462.888µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (413.782µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (420.802µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (400.656µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (445.348µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (439.292µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (448.897µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (402.598µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (419.353µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (424.712µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (396.759µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (440.07µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (461.968µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (443.614µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (499.738µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (451.071µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (458.243µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (413.299µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (439.889µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (413.796µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (448.737µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (391.598µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (415.882µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (428.156µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (482.235µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (427.56µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (447.138µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (493.782µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (426.961µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (428.326µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (437.475µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (431.029µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (389.678µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (421.017µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (445.03µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (423.667µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (430.655µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (450.779µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (420.263µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (437.178µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (424.783µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (481.946µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (416.732µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (430.347µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (409.217µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (461.164µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (437.297µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (442.567µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (514.515µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (468.714µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (422.339µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (432.107µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (410.171µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (448.1µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (426.561µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (442.507µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (435.424µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (428.095µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (402.141µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (433.682µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (459.617µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (452.922µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (455.543µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (451.618µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (453.5µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (456.453µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (418.981µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (474.517µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (462.184µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (23.266537ms)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (430.25µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (463.198µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (476.344µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (497.969µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (425.769µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (468.463µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (449.672µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (483.773µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (447.666µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (472.123µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (470.062µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (466.662µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (470.921µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (447.243µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (418.903µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (451.098µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (405.714µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (400.328µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (486.923µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (424.639µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (423.586µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (414.322µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (464.367µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (415.277µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (428.669µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (484.227µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (414.438µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (469.249µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (467.324µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (439.862µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (434.608µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (420.679µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (403.894µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (603.9µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (469.469µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (460.305µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (462.727µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (441.188µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (461.431µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (470.804µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (456.152µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (430.416µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (435.048µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (425.29µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (439.651µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (457.775µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (466.592µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (555.706µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (456.131µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (438.695µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (539.371µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (444.863µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (455.161µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (413.483µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (424.901µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (443.564µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (426.305µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (470.497µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (491.72µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (427.415µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (403.945µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (463.286µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (422.482µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (430.364µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (418.231µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (412.017µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (443.406µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (473.289µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (498.328µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (475.808µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (491.978µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (519.266µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (469.93µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (454.455µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (493.444µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (466.164µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (448.33µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (458.836µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (464.983µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (465.534µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (446.876µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (477.955µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (516.917µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (476.806µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (529.097µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (450.077µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (500.458µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (474.168µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (451.582µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (533.327µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (474.651µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (498.36µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (502.873µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (437.549µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (438.589µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (452.479µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (547.706µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (436.662µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (613.977µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (468.85µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (440.944µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (442.846µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-001438 get pvc hpvc -o jsonpath={.status.phase} -n default: context deadline exceeded (1.215µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: context deadline exceeded
addons_test.go:576: failed waiting for PVC hpvc: context deadline exceeded
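The wait loop above simply shells out to kubectl and re-checks the PVC phase until it reports "Bound" or the context deadline expires; because the kubectl binary installed on this runner cannot execute (fork/exec fails instantly with "exec format error", typically an architecture mismatch), every poll fails in under a millisecond and the loop can only run out the clock. Below is a minimal standalone sketch of the same poll-until-Bound pattern; it is not the actual helpers_test.go code, and the context name, namespace, and 6-minute timeout are taken from this run purely for illustration.

package main

import (
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPVCBound polls `kubectl get pvc -o jsonpath={.status.phase}` until the
// claim reports Bound or the context deadline expires, mirroring the retry
// pattern visible in the log above.
func waitForPVCBound(ctx context.Context, kubeContext, namespace, name string) error {
	for {
		out, err := exec.CommandContext(ctx, "kubectl",
			"--context", kubeContext, "-n", namespace,
			"get", "pvc", name, "-o", "jsonpath={.status.phase}").Output()
		// A fork/exec "exec format error" here means the kubectl binary itself
		// cannot run on this host (wrong architecture for the runner), so no
		// amount of retrying helps; `file /usr/local/bin/kubectl` would show it.
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("waiting for PVC %s/%s: %w", namespace, name, ctx.Err())
		case <-time.After(2 * time.Second):
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitForPVCBound(ctx, "addons-001438", "default", "hpvc"); err != nil {
		fmt.Println(err)
	}
}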
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-001438 -n addons-001438
helpers_test.go:244: <<< TestAddons/parallel/CSI FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/CSI]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-001438 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-001438 logs -n 25: (1.330035676s)
helpers_test.go:252: TestAddons/parallel/CSI logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-931581 | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC |                     |
	|         | -p download-only-931581              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC | 16 Sep 24 10:21 UTC |
	| delete  | -p download-only-931581              | download-only-931581 | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC | 16 Sep 24 10:21 UTC |
	| start   | -o=json --download-only              | download-only-573915 | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC |                     |
	|         | -p download-only-573915              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC | 16 Sep 24 10:21 UTC |
	| delete  | -p download-only-573915              | download-only-573915 | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC | 16 Sep 24 10:21 UTC |
	| delete  | -p download-only-931581              | download-only-931581 | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC | 16 Sep 24 10:21 UTC |
	| delete  | -p download-only-573915              | download-only-573915 | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC | 16 Sep 24 10:21 UTC |
	| start   | --download-only -p                   | binary-mirror-928489 | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC |                     |
	|         | binary-mirror-928489                 |                      |         |         |                     |                     |
	|         | --alsologtostderr                    |                      |         |         |                     |                     |
	|         | --binary-mirror                      |                      |         |         |                     |                     |
	|         | http://127.0.0.1:42715               |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-928489              | binary-mirror-928489 | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC | 16 Sep 24 10:21 UTC |
	| addons  | enable dashboard -p                  | addons-001438        | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC |                     |
	|         | addons-001438                        |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-001438        | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC |                     |
	|         | addons-001438                        |                      |         |         |                     |                     |
	| start   | -p addons-001438 --wait=true         | addons-001438        | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC | 16 Sep 24 10:25 UTC |
	|         | --memory=4000 --alsologtostderr      |                      |         |         |                     |                     |
	|         | --addons=registry                    |                      |         |         |                     |                     |
	|         | --addons=metrics-server              |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	|         | --addons=ingress                     |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                 |                      |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-001438        | jenkins | v1.34.0 | 16 Sep 24 10:25 UTC | 16 Sep 24 10:25 UTC |
	|         | -p addons-001438                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin         | addons-001438        | jenkins | v1.34.0 | 16 Sep 24 10:25 UTC | 16 Sep 24 10:25 UTC |
	|         | -p addons-001438                     |                      |         |         |                     |                     |
	| ip      | addons-001438 ip                     | addons-001438        | jenkins | v1.34.0 | 16 Sep 24 10:25 UTC | 16 Sep 24 10:25 UTC |
	| addons  | addons-001438 addons disable         | addons-001438        | jenkins | v1.34.0 | 16 Sep 24 10:25 UTC | 16 Sep 24 10:25 UTC |
	|         | registry --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | addons-001438 addons disable         | addons-001438        | jenkins | v1.34.0 | 16 Sep 24 10:25 UTC | 16 Sep 24 10:25 UTC |
	|         | headlamp --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | addons-001438 addons disable         | addons-001438        | jenkins | v1.34.0 | 16 Sep 24 10:26 UTC | 16 Sep 24 10:27 UTC |
	|         | helm-tiller --alsologtostderr        |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-001438        | jenkins | v1.34.0 | 16 Sep 24 10:27 UTC | 16 Sep 24 10:27 UTC |
	|         | addons-001438                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p             | addons-001438        | jenkins | v1.34.0 | 16 Sep 24 10:27 UTC | 16 Sep 24 10:27 UTC |
	|         | addons-001438                        |                      |         |         |                     |                     |
	| addons  | addons-001438 addons                 | addons-001438        | jenkins | v1.34.0 | 16 Sep 24 10:31 UTC | 16 Sep 24 10:31 UTC |
	|         | disable metrics-server               |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:21:42
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
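Every line that follows uses the klog-style header described above ([IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg). When post-processing long start logs like this one it can help to split that header into fields; the sketch below does so with a regular expression of my own, which is an assumption matching the format string above rather than anything shipped with minikube.

package main

import (
	"fmt"
	"regexp"
)

// klogLine captures severity, date, time, pid, source file, source line and
// message from a klog-style log line such as the ones in this report.
var klogLine = regexp.MustCompile(
	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

func main() {
	line := "I0916 10:21:42.991237   12265 out.go:352] Setting JSON to false"
	if m := klogLine.FindStringSubmatch(line); m != nil {
		fmt.Printf("severity=%s date=%s time=%s pid=%s file=%s line=%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6], m[7])
	}
}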
	I0916 10:21:42.990297   12265 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:21:42.990427   12265 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:21:42.990438   12265 out.go:358] Setting ErrFile to fd 2...
	I0916 10:21:42.990444   12265 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:21:42.990619   12265 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3851/.minikube/bin
	I0916 10:21:42.991237   12265 out.go:352] Setting JSON to false
	I0916 10:21:42.992075   12265 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":253,"bootTime":1726481850,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:21:42.992165   12265 start.go:139] virtualization: kvm guest
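The hostinfo blob two lines up is plain JSON, so it can be decoded directly when mining these reports. A small sketch follows; the struct is mine and covers only the fields visible in the log line above, not minikube's own types.

package main

import (
	"encoding/json"
	"fmt"
)

// hostInfo mirrors the fields shown in the hostinfo log line above.
type hostInfo struct {
	Hostname             string `json:"hostname"`
	Uptime               uint64 `json:"uptime"`
	BootTime             uint64 `json:"bootTime"`
	Procs                uint64 `json:"procs"`
	OS                   string `json:"os"`
	KernelArch           string `json:"kernelArch"`
	VirtualizationSystem string `json:"virtualizationSystem"`
	VirtualizationRole   string `json:"virtualizationRole"`
}

func main() {
	raw := `{"hostname":"ubuntu-20-agent-15","uptime":253,"bootTime":1726481850,"procs":176,"os":"linux","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest"}`
	var hi hostInfo
	if err := json.Unmarshal([]byte(raw), &hi); err != nil {
		fmt.Println("decode:", err)
		return
	}
	fmt.Printf("%s (%s/%s) virt=%s/%s\n",
		hi.Hostname, hi.OS, hi.KernelArch, hi.VirtualizationSystem, hi.VirtualizationRole)
}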
	I0916 10:21:42.994057   12265 out.go:177] * [addons-001438] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:21:42.995363   12265 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:21:42.995366   12265 notify.go:220] Checking for updates...
	I0916 10:21:42.996620   12265 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:21:42.997884   12265 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3851/kubeconfig
	I0916 10:21:42.999244   12265 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 10:21:43.000448   12265 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:21:43.001744   12265 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:21:43.003140   12265 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:21:43.035292   12265 out.go:177] * Using the kvm2 driver based on user configuration
	I0916 10:21:43.036591   12265 start.go:297] selected driver: kvm2
	I0916 10:21:43.036604   12265 start.go:901] validating driver "kvm2" against <nil>
	I0916 10:21:43.036617   12265 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:21:43.037618   12265 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:21:43.037687   12265 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19651-3851/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0916 10:21:43.052612   12265 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0916 10:21:43.052654   12265 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 10:21:43.052880   12265 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:21:43.052910   12265 cni.go:84] Creating CNI manager for ""
	I0916 10:21:43.052948   12265 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0916 10:21:43.052956   12265 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0916 10:21:43.053000   12265 start.go:340] cluster config:
	{Name:addons-001438 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-001438 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:21:43.053089   12265 iso.go:125] acquiring lock: {Name:mk8165b793e44378487d96b0de120a258f46e187 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:21:43.054779   12265 out.go:177] * Starting "addons-001438" primary control-plane node in "addons-001438" cluster
	I0916 10:21:43.056048   12265 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:21:43.056073   12265 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0916 10:21:43.056099   12265 cache.go:56] Caching tarball of preloaded images
	I0916 10:21:43.056171   12265 preload.go:172] Found /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 10:21:43.056181   12265 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 10:21:43.056464   12265 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/config.json ...
	I0916 10:21:43.056479   12265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/config.json: {Name:mke7feffe145119f1110e818375562c2195d4fa4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:21:43.056601   12265 start.go:360] acquireMachinesLock for addons-001438: {Name:mk413037138532b69f77e412c3960796c09316f1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:21:43.056638   12265 start.go:364] duration metric: took 25.099µs to acquireMachinesLock for "addons-001438"
	I0916 10:21:43.056653   12265 start.go:93] Provisioning new machine with config: &{Name:addons-001438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-001438 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 10:21:43.056703   12265 start.go:125] createHost starting for "" (driver="kvm2")
	I0916 10:21:43.058226   12265 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0916 10:21:43.058340   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:21:43.058376   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:21:43.072993   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45045
	I0916 10:21:43.073475   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:21:43.073995   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:21:43.074020   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:21:43.074422   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:21:43.074620   12265 main.go:141] libmachine: (addons-001438) Calling .GetMachineName
	I0916 10:21:43.074787   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:21:43.074946   12265 start.go:159] libmachine.API.Create for "addons-001438" (driver="kvm2")
	I0916 10:21:43.074989   12265 client.go:168] LocalClient.Create starting
	I0916 10:21:43.075021   12265 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem
	I0916 10:21:43.311518   12265 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem
	I0916 10:21:43.475888   12265 main.go:141] libmachine: Running pre-create checks...
	I0916 10:21:43.475917   12265 main.go:141] libmachine: (addons-001438) Calling .PreCreateCheck
	I0916 10:21:43.476396   12265 main.go:141] libmachine: (addons-001438) Calling .GetConfigRaw
	I0916 10:21:43.476796   12265 main.go:141] libmachine: Creating machine...
	I0916 10:21:43.476809   12265 main.go:141] libmachine: (addons-001438) Calling .Create
	I0916 10:21:43.476954   12265 main.go:141] libmachine: (addons-001438) Creating KVM machine...
	I0916 10:21:43.478137   12265 main.go:141] libmachine: (addons-001438) DBG | found existing default KVM network
	I0916 10:21:43.478893   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:43.478751   12287 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001151f0}
	I0916 10:21:43.478937   12265 main.go:141] libmachine: (addons-001438) DBG | created network xml: 
	I0916 10:21:43.478958   12265 main.go:141] libmachine: (addons-001438) DBG | <network>
	I0916 10:21:43.478967   12265 main.go:141] libmachine: (addons-001438) DBG |   <name>mk-addons-001438</name>
	I0916 10:21:43.478974   12265 main.go:141] libmachine: (addons-001438) DBG |   <dns enable='no'/>
	I0916 10:21:43.478986   12265 main.go:141] libmachine: (addons-001438) DBG |   
	I0916 10:21:43.478998   12265 main.go:141] libmachine: (addons-001438) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0916 10:21:43.479006   12265 main.go:141] libmachine: (addons-001438) DBG |     <dhcp>
	I0916 10:21:43.479018   12265 main.go:141] libmachine: (addons-001438) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0916 10:21:43.479026   12265 main.go:141] libmachine: (addons-001438) DBG |     </dhcp>
	I0916 10:21:43.479036   12265 main.go:141] libmachine: (addons-001438) DBG |   </ip>
	I0916 10:21:43.479087   12265 main.go:141] libmachine: (addons-001438) DBG |   
	I0916 10:21:43.479109   12265 main.go:141] libmachine: (addons-001438) DBG | </network>
	I0916 10:21:43.479150   12265 main.go:141] libmachine: (addons-001438) DBG | 
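The kvm2 driver defines and starts the mk-addons-001438 network whose XML is dumped above through minikube's libvirt bindings. Outside of minikube, the same network could be created by hand from that XML; the sketch below shells out to the virsh CLI instead of using the library, and the file name is an assumption for illustration.

package main

import (
	"fmt"
	"os/exec"
)

// run invokes virsh with the given arguments and surfaces its combined output
// on failure, so errors from libvirt are easy to read.
func run(args ...string) error {
	out, err := exec.Command("virsh", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("virsh %v: %v\n%s", args, err, out)
	}
	return nil
}

func main() {
	// Register the persistent network definition (the <network> XML above),
	// then bring the network up.
	if err := run("net-define", "mk-addons-001438.xml"); err != nil {
		fmt.Println(err)
		return
	}
	if err := run("net-start", "mk-addons-001438"); err != nil {
		fmt.Println(err)
	}
}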
	I0916 10:21:43.484546   12265 main.go:141] libmachine: (addons-001438) DBG | trying to create private KVM network mk-addons-001438 192.168.39.0/24...
	I0916 10:21:43.547822   12265 main.go:141] libmachine: (addons-001438) DBG | private KVM network mk-addons-001438 192.168.39.0/24 created
	I0916 10:21:43.547845   12265 main.go:141] libmachine: (addons-001438) Setting up store path in /home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438 ...
	I0916 10:21:43.547862   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:43.547813   12287 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 10:21:43.547875   12265 main.go:141] libmachine: (addons-001438) Building disk image from file:///home/jenkins/minikube-integration/19651-3851/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso
	I0916 10:21:43.547936   12265 main.go:141] libmachine: (addons-001438) Downloading /home/jenkins/minikube-integration/19651-3851/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19651-3851/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso...
	I0916 10:21:43.797047   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:43.796916   12287 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa...
	I0916 10:21:43.906021   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:43.905909   12287 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/addons-001438.rawdisk...
	I0916 10:21:43.906051   12265 main.go:141] libmachine: (addons-001438) DBG | Writing magic tar header
	I0916 10:21:43.906060   12265 main.go:141] libmachine: (addons-001438) DBG | Writing SSH key tar header
	I0916 10:21:43.906067   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:43.906027   12287 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438 ...
	I0916 10:21:43.906123   12265 main.go:141] libmachine: (addons-001438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438
	I0916 10:21:43.906172   12265 main.go:141] libmachine: (addons-001438) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438 (perms=drwx------)
	I0916 10:21:43.906194   12265 main.go:141] libmachine: (addons-001438) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851/.minikube/machines (perms=drwxr-xr-x)
	I0916 10:21:43.906204   12265 main.go:141] libmachine: (addons-001438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851/.minikube/machines
	I0916 10:21:43.906222   12265 main.go:141] libmachine: (addons-001438) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851/.minikube (perms=drwxr-xr-x)
	I0916 10:21:43.906230   12265 main.go:141] libmachine: (addons-001438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 10:21:43.906236   12265 main.go:141] libmachine: (addons-001438) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851 (perms=drwxrwxr-x)
	I0916 10:21:43.906243   12265 main.go:141] libmachine: (addons-001438) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0916 10:21:43.906248   12265 main.go:141] libmachine: (addons-001438) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0916 10:21:43.906258   12265 main.go:141] libmachine: (addons-001438) Creating domain...
	I0916 10:21:43.906264   12265 main.go:141] libmachine: (addons-001438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851
	I0916 10:21:43.906275   12265 main.go:141] libmachine: (addons-001438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0916 10:21:43.906309   12265 main.go:141] libmachine: (addons-001438) DBG | Checking permissions on dir: /home/jenkins
	I0916 10:21:43.906325   12265 main.go:141] libmachine: (addons-001438) DBG | Checking permissions on dir: /home
	I0916 10:21:43.906338   12265 main.go:141] libmachine: (addons-001438) DBG | Skipping /home - not owner
	I0916 10:21:43.907204   12265 main.go:141] libmachine: (addons-001438) define libvirt domain using xml: 
	I0916 10:21:43.907223   12265 main.go:141] libmachine: (addons-001438) <domain type='kvm'>
	I0916 10:21:43.907235   12265 main.go:141] libmachine: (addons-001438)   <name>addons-001438</name>
	I0916 10:21:43.907246   12265 main.go:141] libmachine: (addons-001438)   <memory unit='MiB'>4000</memory>
	I0916 10:21:43.907255   12265 main.go:141] libmachine: (addons-001438)   <vcpu>2</vcpu>
	I0916 10:21:43.907265   12265 main.go:141] libmachine: (addons-001438)   <features>
	I0916 10:21:43.907274   12265 main.go:141] libmachine: (addons-001438)     <acpi/>
	I0916 10:21:43.907282   12265 main.go:141] libmachine: (addons-001438)     <apic/>
	I0916 10:21:43.907294   12265 main.go:141] libmachine: (addons-001438)     <pae/>
	I0916 10:21:43.907307   12265 main.go:141] libmachine: (addons-001438)     
	I0916 10:21:43.907318   12265 main.go:141] libmachine: (addons-001438)   </features>
	I0916 10:21:43.907327   12265 main.go:141] libmachine: (addons-001438)   <cpu mode='host-passthrough'>
	I0916 10:21:43.907337   12265 main.go:141] libmachine: (addons-001438)   
	I0916 10:21:43.907349   12265 main.go:141] libmachine: (addons-001438)   </cpu>
	I0916 10:21:43.907364   12265 main.go:141] libmachine: (addons-001438)   <os>
	I0916 10:21:43.907373   12265 main.go:141] libmachine: (addons-001438)     <type>hvm</type>
	I0916 10:21:43.907383   12265 main.go:141] libmachine: (addons-001438)     <boot dev='cdrom'/>
	I0916 10:21:43.907392   12265 main.go:141] libmachine: (addons-001438)     <boot dev='hd'/>
	I0916 10:21:43.907402   12265 main.go:141] libmachine: (addons-001438)     <bootmenu enable='no'/>
	I0916 10:21:43.907415   12265 main.go:141] libmachine: (addons-001438)   </os>
	I0916 10:21:43.907427   12265 main.go:141] libmachine: (addons-001438)   <devices>
	I0916 10:21:43.907435   12265 main.go:141] libmachine: (addons-001438)     <disk type='file' device='cdrom'>
	I0916 10:21:43.907452   12265 main.go:141] libmachine: (addons-001438)       <source file='/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/boot2docker.iso'/>
	I0916 10:21:43.907463   12265 main.go:141] libmachine: (addons-001438)       <target dev='hdc' bus='scsi'/>
	I0916 10:21:43.907489   12265 main.go:141] libmachine: (addons-001438)       <readonly/>
	I0916 10:21:43.907508   12265 main.go:141] libmachine: (addons-001438)     </disk>
	I0916 10:21:43.907518   12265 main.go:141] libmachine: (addons-001438)     <disk type='file' device='disk'>
	I0916 10:21:43.907531   12265 main.go:141] libmachine: (addons-001438)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0916 10:21:43.907547   12265 main.go:141] libmachine: (addons-001438)       <source file='/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/addons-001438.rawdisk'/>
	I0916 10:21:43.907558   12265 main.go:141] libmachine: (addons-001438)       <target dev='hda' bus='virtio'/>
	I0916 10:21:43.907568   12265 main.go:141] libmachine: (addons-001438)     </disk>
	I0916 10:21:43.907583   12265 main.go:141] libmachine: (addons-001438)     <interface type='network'>
	I0916 10:21:43.907595   12265 main.go:141] libmachine: (addons-001438)       <source network='mk-addons-001438'/>
	I0916 10:21:43.907606   12265 main.go:141] libmachine: (addons-001438)       <model type='virtio'/>
	I0916 10:21:43.907616   12265 main.go:141] libmachine: (addons-001438)     </interface>
	I0916 10:21:43.907624   12265 main.go:141] libmachine: (addons-001438)     <interface type='network'>
	I0916 10:21:43.907634   12265 main.go:141] libmachine: (addons-001438)       <source network='default'/>
	I0916 10:21:43.907645   12265 main.go:141] libmachine: (addons-001438)       <model type='virtio'/>
	I0916 10:21:43.907667   12265 main.go:141] libmachine: (addons-001438)     </interface>
	I0916 10:21:43.907687   12265 main.go:141] libmachine: (addons-001438)     <serial type='pty'>
	I0916 10:21:43.907697   12265 main.go:141] libmachine: (addons-001438)       <target port='0'/>
	I0916 10:21:43.907706   12265 main.go:141] libmachine: (addons-001438)     </serial>
	I0916 10:21:43.907717   12265 main.go:141] libmachine: (addons-001438)     <console type='pty'>
	I0916 10:21:43.907735   12265 main.go:141] libmachine: (addons-001438)       <target type='serial' port='0'/>
	I0916 10:21:43.907745   12265 main.go:141] libmachine: (addons-001438)     </console>
	I0916 10:21:43.907758   12265 main.go:141] libmachine: (addons-001438)     <rng model='virtio'>
	I0916 10:21:43.907772   12265 main.go:141] libmachine: (addons-001438)       <backend model='random'>/dev/random</backend>
	I0916 10:21:43.907777   12265 main.go:141] libmachine: (addons-001438)     </rng>
	I0916 10:21:43.907785   12265 main.go:141] libmachine: (addons-001438)     
	I0916 10:21:43.907794   12265 main.go:141] libmachine: (addons-001438)     
	I0916 10:21:43.907804   12265 main.go:141] libmachine: (addons-001438)   </devices>
	I0916 10:21:43.907814   12265 main.go:141] libmachine: (addons-001438) </domain>
	I0916 10:21:43.907826   12265 main.go:141] libmachine: (addons-001438) 
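After defining the domain XML above, the driver starts the VM and then waits for its DHCP-assigned address (the "Waiting to get IP" retries further down). A standalone sketch of that flow follows, again via the virsh CLI rather than the driver's actual libvirt calls; the file name and the 2-second polling interval are illustrative assumptions.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// virsh runs a virsh subcommand and returns its combined output.
func virsh(args ...string) (string, error) {
	out, err := exec.Command("virsh", args...).CombinedOutput()
	return string(out), err
}

func main() {
	if _, err := virsh("define", "addons-001438.xml"); err != nil {
		fmt.Println("define:", err)
		return
	}
	if _, err := virsh("start", "addons-001438"); err != nil {
		fmt.Println("start:", err)
		return
	}
	// Poll the private network's DHCP leases until the VM's MAC shows up.
	// MAC taken from the DBG lines below for the mk-addons-001438 network.
	const mac = "52:54:00:9c:55:19"
	for i := 0; i < 60; i++ {
		leases, _ := virsh("net-dhcp-leases", "mk-addons-001438")
		for _, line := range strings.Split(leases, "\n") {
			if strings.Contains(line, mac) && strings.Contains(line, "ipv4") {
				fmt.Println("lease:", strings.TrimSpace(line))
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for a DHCP lease")
}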
	I0916 10:21:43.913322   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:98:e7:17 in network default
	I0916 10:21:43.913924   12265 main.go:141] libmachine: (addons-001438) Ensuring networks are active...
	I0916 10:21:43.913942   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:43.914588   12265 main.go:141] libmachine: (addons-001438) Ensuring network default is active
	I0916 10:21:43.914879   12265 main.go:141] libmachine: (addons-001438) Ensuring network mk-addons-001438 is active
	I0916 10:21:43.915337   12265 main.go:141] libmachine: (addons-001438) Getting domain xml...
	I0916 10:21:43.915987   12265 main.go:141] libmachine: (addons-001438) Creating domain...
	I0916 10:21:45.289678   12265 main.go:141] libmachine: (addons-001438) Waiting to get IP...
	I0916 10:21:45.290387   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:45.290811   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:45.290836   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:45.290776   12287 retry.go:31] will retry after 253.823507ms: waiting for machine to come up
	I0916 10:21:45.546308   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:45.546737   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:45.546757   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:45.546713   12287 retry.go:31] will retry after 316.98215ms: waiting for machine to come up
	I0916 10:21:45.865275   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:45.865712   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:45.865742   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:45.865673   12287 retry.go:31] will retry after 438.875906ms: waiting for machine to come up
	I0916 10:21:46.306361   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:46.306829   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:46.306854   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:46.306787   12287 retry.go:31] will retry after 378.922529ms: waiting for machine to come up
	I0916 10:21:46.687272   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:46.687683   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:46.687718   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:46.687648   12287 retry.go:31] will retry after 695.664658ms: waiting for machine to come up
	I0916 10:21:47.384623   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:47.385017   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:47.385044   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:47.384985   12287 retry.go:31] will retry after 669.1436ms: waiting for machine to come up
	I0916 10:21:48.056603   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:48.057159   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:48.057183   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:48.057099   12287 retry.go:31] will retry after 739.217064ms: waiting for machine to come up
	I0916 10:21:48.798348   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:48.798788   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:48.798824   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:48.798748   12287 retry.go:31] will retry after 963.828739ms: waiting for machine to come up
	I0916 10:21:49.763677   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:49.764095   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:49.764120   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:49.764043   12287 retry.go:31] will retry after 1.625531991s: waiting for machine to come up
	I0916 10:21:51.391980   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:51.392322   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:51.392343   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:51.392285   12287 retry.go:31] will retry after 1.960554167s: waiting for machine to come up
	I0916 10:21:53.354469   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:53.354989   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:53.355016   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:53.354937   12287 retry.go:31] will retry after 2.035806393s: waiting for machine to come up
	I0916 10:21:55.393065   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:55.393432   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:55.393451   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:55.393400   12287 retry.go:31] will retry after 3.028756428s: waiting for machine to come up
	I0916 10:21:58.424174   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:58.424544   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:58.424577   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:58.424517   12287 retry.go:31] will retry after 3.769682763s: waiting for machine to come up
	I0916 10:22:02.198084   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:02.198470   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:22:02.198492   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:22:02.198430   12287 retry.go:31] will retry after 5.547519077s: waiting for machine to come up
	I0916 10:22:07.750830   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:07.751191   12265 main.go:141] libmachine: (addons-001438) Found IP for machine: 192.168.39.72
	I0916 10:22:07.751209   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has current primary IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:07.751215   12265 main.go:141] libmachine: (addons-001438) Reserving static IP address...
	I0916 10:22:07.751548   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find host DHCP lease matching {name: "addons-001438", mac: "52:54:00:9c:55:19", ip: "192.168.39.72"} in network mk-addons-001438
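	(Aside: the retry sequence above polls libvirt's DHCP leases with a growing delay between attempts, per the "will retry after ..." messages from retry.go. A minimal Go sketch of that wait-with-backoff pattern follows; the lookupIP helper, the starting delay, and the attempt limit are assumptions made for the example, not minikube's actual retry implementation.)

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP stands in for querying the libvirt DHCP leases; it is a
	// hypothetical placeholder that keeps failing until the guest is up.
	func lookupIP() (string, error) {
		return "", errors.New("unable to find current IP address")
	}

	// waitForIP retries lookupIP with a randomized, growing delay, in the
	// spirit of the "will retry after ..." lines in the log above.
	func waitForIP(maxAttempts int) (string, error) {
		delay := 250 * time.Millisecond
		for attempt := 1; attempt <= maxAttempts; attempt++ {
			ip, err := lookupIP()
			if err == nil {
				return ip, nil
			}
			jitter := time.Duration(rand.Int63n(int64(delay) / 2))
			wait := delay + jitter
			fmt.Printf("attempt %d failed: %v; will retry after %s\n", attempt, err, wait)
			time.Sleep(wait)
			delay *= 2 // back off between polls
		}
		return "", errors.New("timed out waiting for machine to come up")
	}

	func main() {
		if ip, err := waitForIP(5); err != nil {
			fmt.Println(err)
		} else {
			fmt.Println("found IP:", ip)
		}
	}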
	I0916 10:22:07.821469   12265 main.go:141] libmachine: (addons-001438) DBG | Getting to WaitForSSH function...
	I0916 10:22:07.821506   12265 main.go:141] libmachine: (addons-001438) Reserved static IP address: 192.168.39.72
	I0916 10:22:07.821523   12265 main.go:141] libmachine: (addons-001438) Waiting for SSH to be available...
	I0916 10:22:07.823797   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:07.824029   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438
	I0916 10:22:07.824057   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find defined IP address of network mk-addons-001438 interface with MAC address 52:54:00:9c:55:19
	I0916 10:22:07.824199   12265 main.go:141] libmachine: (addons-001438) DBG | Using SSH client type: external
	I0916 10:22:07.824226   12265 main.go:141] libmachine: (addons-001438) DBG | Using SSH private key: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa (-rw-------)
	I0916 10:22:07.824261   12265 main.go:141] libmachine: (addons-001438) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0916 10:22:07.824273   12265 main.go:141] libmachine: (addons-001438) DBG | About to run SSH command:
	I0916 10:22:07.824297   12265 main.go:141] libmachine: (addons-001438) DBG | exit 0
	I0916 10:22:07.835394   12265 main.go:141] libmachine: (addons-001438) DBG | SSH cmd err, output: exit status 255: 
	I0916 10:22:07.835415   12265 main.go:141] libmachine: (addons-001438) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0916 10:22:07.835421   12265 main.go:141] libmachine: (addons-001438) DBG | command : exit 0
	I0916 10:22:07.835428   12265 main.go:141] libmachine: (addons-001438) DBG | err     : exit status 255
	I0916 10:22:07.835435   12265 main.go:141] libmachine: (addons-001438) DBG | output  : 
	I0916 10:22:10.838181   12265 main.go:141] libmachine: (addons-001438) DBG | Getting to WaitForSSH function...
	I0916 10:22:10.840410   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:10.840805   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:10.840830   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:10.840953   12265 main.go:141] libmachine: (addons-001438) DBG | Using SSH client type: external
	I0916 10:22:10.840980   12265 main.go:141] libmachine: (addons-001438) DBG | Using SSH private key: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa (-rw-------)
	I0916 10:22:10.841012   12265 main.go:141] libmachine: (addons-001438) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.72 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0916 10:22:10.841026   12265 main.go:141] libmachine: (addons-001438) DBG | About to run SSH command:
	I0916 10:22:10.841039   12265 main.go:141] libmachine: (addons-001438) DBG | exit 0
	I0916 10:22:10.969218   12265 main.go:141] libmachine: (addons-001438) DBG | SSH cmd err, output: <nil>: 
	I0916 10:22:10.969498   12265 main.go:141] libmachine: (addons-001438) KVM machine creation complete!
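	(Aside: once an address is leased, libmachine probes the guest by running `exit 0` through an external ssh client until it succeeds, which is why the first attempt above exits with status 255 and the second returns cleanly. The sketch below reproduces that probe with os/exec using only options visible in the log; probeSSH, the polling interval, and the key path are illustrative assumptions, not minikube's code.)

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// probeSSH runs `exit 0` on the guest over ssh and reports whether the
	// connection and key-based authentication work yet.
	func probeSSH(ip, keyPath string) error {
		args := []string{
			"-F", "/dev/null",
			"-o", "ConnectionAttempts=3",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"-p", "22",
			"docker@" + ip,
			"exit 0",
		}
		return exec.Command("ssh", args...).Run()
	}

	func main() {
		for {
			err := probeSSH("192.168.39.72", "/path/to/id_rsa")
			if err == nil {
				fmt.Println("SSH is available")
				return
			}
			fmt.Println("SSH not ready yet:", err)
			time.Sleep(3 * time.Second)
		}
	}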
	I0916 10:22:10.969791   12265 main.go:141] libmachine: (addons-001438) Calling .GetConfigRaw
	I0916 10:22:10.970351   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:10.970568   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:10.970704   12265 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0916 10:22:10.970716   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:10.971844   12265 main.go:141] libmachine: Detecting operating system of created instance...
	I0916 10:22:10.971857   12265 main.go:141] libmachine: Waiting for SSH to be available...
	I0916 10:22:10.971863   12265 main.go:141] libmachine: Getting to WaitForSSH function...
	I0916 10:22:10.971871   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:10.973963   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:10.974287   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:10.974322   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:10.974443   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:10.974600   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:10.974766   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:10.974897   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:10.975056   12265 main.go:141] libmachine: Using SSH client type: native
	I0916 10:22:10.975258   12265 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.72 22 <nil> <nil>}
	I0916 10:22:10.975270   12265 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0916 10:22:11.084303   12265 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 10:22:11.084322   12265 main.go:141] libmachine: Detecting the provisioner...
	I0916 10:22:11.084329   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:11.086985   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.087399   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:11.087449   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.087637   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:11.087805   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:11.087957   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:11.088052   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:11.088212   12265 main.go:141] libmachine: Using SSH client type: native
	I0916 10:22:11.088404   12265 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.72 22 <nil> <nil>}
	I0916 10:22:11.088420   12265 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0916 10:22:11.197622   12265 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0916 10:22:11.197666   12265 main.go:141] libmachine: found compatible host: buildroot
	I0916 10:22:11.197674   12265 main.go:141] libmachine: Provisioning with buildroot...
	I0916 10:22:11.197683   12265 main.go:141] libmachine: (addons-001438) Calling .GetMachineName
	I0916 10:22:11.197922   12265 buildroot.go:166] provisioning hostname "addons-001438"
	I0916 10:22:11.197936   12265 main.go:141] libmachine: (addons-001438) Calling .GetMachineName
	I0916 10:22:11.198131   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:11.200614   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.200955   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:11.200988   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.201100   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:11.201269   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:11.201396   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:11.201536   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:11.201681   12265 main.go:141] libmachine: Using SSH client type: native
	I0916 10:22:11.201878   12265 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.72 22 <nil> <nil>}
	I0916 10:22:11.201891   12265 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-001438 && echo "addons-001438" | sudo tee /etc/hostname
	I0916 10:22:11.329393   12265 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-001438
	
	I0916 10:22:11.329423   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:11.332085   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.332370   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:11.332397   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.332557   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:11.332746   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:11.332868   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:11.332999   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:11.333118   12265 main.go:141] libmachine: Using SSH client type: native
	I0916 10:22:11.333336   12265 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.72 22 <nil> <nil>}
	I0916 10:22:11.333353   12265 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-001438' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-001438/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-001438' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:22:11.454462   12265 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 10:22:11.454486   12265 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3851/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3851/.minikube}
	I0916 10:22:11.454539   12265 buildroot.go:174] setting up certificates
	I0916 10:22:11.454553   12265 provision.go:84] configureAuth start
	I0916 10:22:11.454562   12265 main.go:141] libmachine: (addons-001438) Calling .GetMachineName
	I0916 10:22:11.454823   12265 main.go:141] libmachine: (addons-001438) Calling .GetIP
	I0916 10:22:11.457458   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.457872   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:11.457902   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.458065   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:11.460166   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.460456   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:11.460484   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.460579   12265 provision.go:143] copyHostCerts
	I0916 10:22:11.460674   12265 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem (1123 bytes)
	I0916 10:22:11.460835   12265 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem (1679 bytes)
	I0916 10:22:11.460925   12265 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem (1082 bytes)
	I0916 10:22:11.460997   12265 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem org=jenkins.addons-001438 san=[127.0.0.1 192.168.39.72 addons-001438 localhost minikube]
	I0916 10:22:11.639072   12265 provision.go:177] copyRemoteCerts
	I0916 10:22:11.639141   12265 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:22:11.639169   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:11.641767   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.642050   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:11.642076   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.642240   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:11.642415   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:11.642519   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:11.642635   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:11.727509   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 10:22:11.752436   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 10:22:11.776436   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 10:22:11.799597   12265 provision.go:87] duration metric: took 345.032702ms to configureAuth
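	(Aside: configureAuth above copies the host CA material and issues a server certificate whose SANs cover the node IP and host names, [127.0.0.1 192.168.39.72 addons-001438 localhost minikube]. As a hedged illustration of that step only, the self-contained Go example below creates a throwaway CA and signs a server certificate with the same kind of SAN list via crypto/x509; newServerCert and the validity period are assumptions for the example and do not mirror minikube's certificate code.)

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	// newServerCert issues a CA-signed certificate carrying the given IP and
	// DNS SANs, roughly the shape of the server.pem generated above.
	func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP, dnsNames []string) ([]byte, *rsa.PrivateKey, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.addons-001438"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  ips,
			DNSNames:     dnsNames,
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		if err != nil {
			return nil, nil, err
		}
		return der, key, nil
	}

	func main() {
		// Throwaway self-signed CA, purely for the example.
		caKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		caCert, err := x509.ParseCertificate(caDER)
		if err != nil {
			panic(err)
		}
		ips := []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.72")}
		names := []string{"addons-001438", "localhost", "minikube"}
		der, _, err := newServerCert(caCert, caKey, ips, names)
		if err != nil {
			panic(err)
		}
		fmt.Printf("issued server cert, %d bytes of DER\n", len(der))
	}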
	I0916 10:22:11.799626   12265 buildroot.go:189] setting minikube options for container-runtime
	I0916 10:22:11.799813   12265 config.go:182] Loaded profile config "addons-001438": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:22:11.799904   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:11.802386   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.802675   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:11.802700   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.802854   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:11.803047   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:11.803187   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:11.803323   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:11.803504   12265 main.go:141] libmachine: Using SSH client type: native
	I0916 10:22:11.803689   12265 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.72 22 <nil> <nil>}
	I0916 10:22:11.803704   12265 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 10:22:12.030350   12265 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 10:22:12.030374   12265 main.go:141] libmachine: Checking connection to Docker...
	I0916 10:22:12.030382   12265 main.go:141] libmachine: (addons-001438) Calling .GetURL
	I0916 10:22:12.031607   12265 main.go:141] libmachine: (addons-001438) DBG | Using libvirt version 6000000
	I0916 10:22:12.034008   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.034296   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:12.034325   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.034451   12265 main.go:141] libmachine: Docker is up and running!
	I0916 10:22:12.034463   12265 main.go:141] libmachine: Reticulating splines...
	I0916 10:22:12.034470   12265 client.go:171] duration metric: took 28.959474569s to LocalClient.Create
	I0916 10:22:12.034491   12265 start.go:167] duration metric: took 28.959547297s to libmachine.API.Create "addons-001438"
	I0916 10:22:12.034500   12265 start.go:293] postStartSetup for "addons-001438" (driver="kvm2")
	I0916 10:22:12.034509   12265 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:22:12.034535   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:12.034731   12265 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:22:12.034762   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:12.036747   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.037041   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:12.037068   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.037200   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:12.037344   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:12.037486   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:12.037623   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:12.123403   12265 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:22:12.127815   12265 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 10:22:12.127838   12265 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3851/.minikube/addons for local assets ...
	I0916 10:22:12.127904   12265 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3851/.minikube/files for local assets ...
	I0916 10:22:12.127926   12265 start.go:296] duration metric: took 93.420957ms for postStartSetup
	I0916 10:22:12.127955   12265 main.go:141] libmachine: (addons-001438) Calling .GetConfigRaw
	I0916 10:22:12.128519   12265 main.go:141] libmachine: (addons-001438) Calling .GetIP
	I0916 10:22:12.131232   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.131510   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:12.131547   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.131776   12265 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/config.json ...
	I0916 10:22:12.131949   12265 start.go:128] duration metric: took 29.075237515s to createHost
	I0916 10:22:12.131975   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:12.133967   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.134281   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:12.134305   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.134418   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:12.134606   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:12.134753   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:12.134877   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:12.135036   12265 main.go:141] libmachine: Using SSH client type: native
	I0916 10:22:12.135185   12265 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.72 22 <nil> <nil>}
	I0916 10:22:12.135202   12265 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 10:22:12.245734   12265 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726482132.226578519
	
	I0916 10:22:12.245757   12265 fix.go:216] guest clock: 1726482132.226578519
	I0916 10:22:12.245764   12265 fix.go:229] Guest: 2024-09-16 10:22:12.226578519 +0000 UTC Remote: 2024-09-16 10:22:12.131960304 +0000 UTC m=+29.174301435 (delta=94.618215ms)
	I0916 10:22:12.245784   12265 fix.go:200] guest clock delta is within tolerance: 94.618215ms
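	(Aside: the clock check above runs `date +%s.%N` on the guest, turns the result into a timestamp, and compares it against the host clock, yielding the 94.618215ms delta reported. A small Go sketch of that comparison follows; parseGuestClock and the 2s tolerance are assumptions for illustration, the log does not state the actual threshold minikube applies.)

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock converts `date +%s.%N` output (seconds.nanoseconds)
	// into a time.Time.
	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1726482132.226578519")
		if err != nil {
			panic(err)
		}
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		// 2s tolerance is an assumed value for the example.
		if delta <= 2*time.Second {
			fmt.Printf("guest clock delta %s is within tolerance\n", delta)
		} else {
			fmt.Printf("guest clock delta %s exceeds tolerance\n", delta)
		}
	}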
	I0916 10:22:12.245790   12265 start.go:83] releasing machines lock for "addons-001438", held for 29.189143417s
	I0916 10:22:12.245809   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:12.246014   12265 main.go:141] libmachine: (addons-001438) Calling .GetIP
	I0916 10:22:12.248419   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.248678   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:12.248704   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.248832   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:12.249314   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:12.249485   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:12.249586   12265 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:22:12.249653   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:12.249707   12265 ssh_runner.go:195] Run: cat /version.json
	I0916 10:22:12.249728   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:12.252249   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.252497   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.252634   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:12.252657   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.252757   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:12.252904   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:12.252922   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:12.252925   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.253038   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:12.253093   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:12.253241   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:12.253258   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:12.253386   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:12.253515   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:12.362639   12265 ssh_runner.go:195] Run: systemctl --version
	I0916 10:22:12.368512   12265 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 10:22:12.527002   12265 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0916 10:22:12.532733   12265 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 10:22:12.532791   12265 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:22:12.548743   12265 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0916 10:22:12.548773   12265 start.go:495] detecting cgroup driver to use...
	I0916 10:22:12.548843   12265 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 10:22:12.564219   12265 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 10:22:12.578224   12265 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:22:12.578276   12265 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:22:12.591434   12265 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:22:12.604674   12265 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:22:12.712713   12265 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:22:12.868881   12265 docker.go:233] disabling docker service ...
	I0916 10:22:12.868945   12265 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:22:12.883262   12265 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:22:12.896034   12265 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:22:13.009183   12265 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:22:13.123591   12265 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:22:13.137411   12265 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:22:13.155768   12265 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 10:22:13.155832   12265 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:22:13.166378   12265 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 10:22:13.166436   12265 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:22:13.177199   12265 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:22:13.187753   12265 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:22:13.198460   12265 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:22:13.209356   12265 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:22:13.220222   12265 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:22:13.237721   12265 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
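	(Aside: the run of `sudo sed -i` commands above rewrites the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf in place, pinning pause_image to registry.k8s.io/pause:3.10, forcing cgroup_manager to cgroupfs, and adjusting the default_sysctls list. Purely as an illustration of that kind of in-place rewrite, here is a hypothetical Go helper, setConfValue is not a minikube function, that performs the same substitution for a single key.)

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	// setConfValue replaces a `key = ...` line in a TOML-style config file,
	// roughly what the sed invocations above do for pause_image and
	// cgroup_manager.
	func setConfValue(path, key, value string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
		if !re.Match(data) {
			return fmt.Errorf("%s: no %q line to rewrite", path, key)
		}
		updated := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %q", key, value)))
		return os.WriteFile(path, updated, 0o644)
	}

	func main() {
		const conf = "/etc/crio/crio.conf.d/02-crio.conf"
		if err := setConfValue(conf, "pause_image", "registry.k8s.io/pause:3.10"); err != nil {
			fmt.Println(err)
		}
		if err := setConfValue(conf, "cgroup_manager", "cgroupfs"); err != nil {
			fmt.Println(err)
		}
	}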
	I0916 10:22:13.247992   12265 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:22:13.257214   12265 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0916 10:22:13.257274   12265 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0916 10:22:13.269843   12265 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:22:13.279361   12265 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:22:13.392424   12265 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 10:22:13.489919   12265 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 10:22:13.490002   12265 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 10:22:13.495269   12265 start.go:563] Will wait 60s for crictl version
	I0916 10:22:13.495342   12265 ssh_runner.go:195] Run: which crictl
	I0916 10:22:13.499375   12265 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:22:13.543037   12265 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0916 10:22:13.543161   12265 ssh_runner.go:195] Run: crio --version
	I0916 10:22:13.571422   12265 ssh_runner.go:195] Run: crio --version
	I0916 10:22:13.600892   12265 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0916 10:22:13.602164   12265 main.go:141] libmachine: (addons-001438) Calling .GetIP
	I0916 10:22:13.604725   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:13.605053   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:13.605090   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:13.605239   12265 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0916 10:22:13.609153   12265 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:22:13.621451   12265 kubeadm.go:883] updating cluster {Name:addons-001438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:addons-001438 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.72 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountTy
pe:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 10:22:13.621560   12265 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:22:13.621616   12265 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:22:13.653616   12265 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0916 10:22:13.653695   12265 ssh_runner.go:195] Run: which lz4
	I0916 10:22:13.657722   12265 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0916 10:22:13.661843   12265 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0916 10:22:13.661873   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0916 10:22:14.968986   12265 crio.go:462] duration metric: took 1.311298771s to copy over tarball
	I0916 10:22:14.969053   12265 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0916 10:22:17.073836   12265 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.104757919s)
	I0916 10:22:17.073872   12265 crio.go:469] duration metric: took 2.104858266s to extract the tarball
	I0916 10:22:17.073881   12265 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0916 10:22:17.110316   12265 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:22:17.150207   12265 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 10:22:17.150233   12265 cache_images.go:84] Images are preloaded, skipping loading
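	(Aside: the preload step above first stats /preloaded.tar.lz4, copies the cached image tarball over when it is absent, extracts it into /var with lz4 while preserving security xattrs, and then removes the tarball before re-checking `crictl images`. The sketch below strings the local steps together with os/exec; extractPreload and its error handling are illustrative assumptions, not the ssh_runner code the log refers to.)

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// extractPreload unpacks the preloaded-images tarball into /var and then
	// deletes it, mirroring the tar invocation in the log above.
	func extractPreload(tarball string) error {
		if _, err := os.Stat(tarball); err != nil {
			return fmt.Errorf("preload tarball missing, would need to copy it over first: %w", err)
		}
		// Same flags as the logged command: keep security xattrs and
		// decompress with lz4 while extracting under /var.
		cmd := exec.Command("sudo", "tar",
			"--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", tarball)
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		if err := cmd.Run(); err != nil {
			return err
		}
		return os.Remove(tarball)
	}

	func main() {
		if err := extractPreload("/preloaded.tar.lz4"); err != nil {
			fmt.Println(err)
		}
	}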
	I0916 10:22:17.150241   12265 kubeadm.go:934] updating node { 192.168.39.72 8443 v1.31.1 crio true true} ...
	I0916 10:22:17.150343   12265 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-001438 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.72
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-001438 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 10:22:17.150424   12265 ssh_runner.go:195] Run: crio config
	I0916 10:22:17.195725   12265 cni.go:84] Creating CNI manager for ""
	I0916 10:22:17.195746   12265 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0916 10:22:17.195756   12265 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 10:22:17.195774   12265 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.72 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-001438 NodeName:addons-001438 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.72"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.72 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kube
rnetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 10:22:17.195915   12265 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.72
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-001438"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.72
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.72"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 10:22:17.195969   12265 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:22:17.206079   12265 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:22:17.206139   12265 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 10:22:17.215719   12265 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0916 10:22:17.232125   12265 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:22:17.248126   12265 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0916 10:22:17.264165   12265 ssh_runner.go:195] Run: grep 192.168.39.72	control-plane.minikube.internal$ /etc/hosts
	I0916 10:22:17.267727   12265 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.72	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:22:17.279787   12265 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:22:17.393283   12265 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:22:17.410756   12265 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438 for IP: 192.168.39.72
	I0916 10:22:17.410774   12265 certs.go:194] generating shared ca certs ...
	I0916 10:22:17.410794   12265 certs.go:226] acquiring lock for ca certs: {Name:mkc5eb18b90c7d501da61b378999fd65b785fd5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:17.410949   12265 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key
	I0916 10:22:17.480758   12265 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt ...
	I0916 10:22:17.480787   12265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt: {Name:mkc291c3a986acc7f4de9183c4ef6d249d8de5a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:17.480965   12265 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key ...
	I0916 10:22:17.480980   12265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key: {Name:mk56bc8b146d891ba5f741ad0bd339fffdb85989 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:17.481075   12265 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key
	I0916 10:22:17.673219   12265 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.crt ...
	I0916 10:22:17.673250   12265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.crt: {Name:mk8d6878492eab0d99f630fc495324e3b843781a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:17.673403   12265 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key ...
	I0916 10:22:17.673414   12265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key: {Name:mk082b50320d253da8f01ad2454b69492e000fb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:17.673482   12265 certs.go:256] generating profile certs ...
	I0916 10:22:17.673531   12265 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/client.key
	I0916 10:22:17.673544   12265 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/client.crt with IP's: []
	I0916 10:22:17.921779   12265 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/client.crt ...
	I0916 10:22:17.921811   12265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/client.crt: {Name:mk9172b9e8f20da0dd399e583d4f0391784c25bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:17.921970   12265 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/client.key ...
	I0916 10:22:17.921981   12265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/client.key: {Name:mk65d84f1710f9ab616402324cb2a91f749aa3d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:17.922048   12265 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/apiserver.key.b670da03
	I0916 10:22:17.922066   12265 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/apiserver.crt.b670da03 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.72]
	I0916 10:22:17.984449   12265 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/apiserver.crt.b670da03 ...
	I0916 10:22:17.984473   12265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/apiserver.crt.b670da03: {Name:mk697c0092db030ad4df50333f6d1db035d298e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:17.984627   12265 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/apiserver.key.b670da03 ...
	I0916 10:22:17.984638   12265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/apiserver.key.b670da03: {Name:mkf74035add612ea1883fde9b662a919a8d7c5c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:17.984705   12265 certs.go:381] copying /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/apiserver.crt.b670da03 -> /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/apiserver.crt
	I0916 10:22:17.984774   12265 certs.go:385] copying /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/apiserver.key.b670da03 -> /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/apiserver.key
	I0916 10:22:17.984818   12265 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/proxy-client.key
	I0916 10:22:17.984834   12265 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/proxy-client.crt with IP's: []
	I0916 10:22:18.105094   12265 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/proxy-client.crt ...
	I0916 10:22:18.105122   12265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/proxy-client.crt: {Name:mk12379583893d02aa599284bf7c2e673e4a585f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:18.105290   12265 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/proxy-client.key ...
	I0916 10:22:18.105300   12265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/proxy-client.key: {Name:mkddc10c89aa36609a41c940a83606fa36ac69df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:18.105453   12265 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:22:18.105484   12265 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem (1082 bytes)
	I0916 10:22:18.105509   12265 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:22:18.105531   12265 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem (1679 bytes)
	I0916 10:22:18.106125   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:22:18.132592   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:22:18.173674   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:22:18.200455   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:22:18.223366   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0916 10:22:18.246242   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 10:22:18.269411   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:22:18.292157   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 10:22:18.314508   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:22:18.337365   12265 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 10:22:18.353286   12265 ssh_runner.go:195] Run: openssl version
	I0916 10:22:18.358942   12265 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:22:18.369103   12265 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:22:18.373299   12265 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:22:18.373346   12265 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:22:18.378948   12265 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 10:22:18.389436   12265 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:22:18.393342   12265 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 10:22:18.393387   12265 kubeadm.go:392] StartCluster: {Name:addons-001438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-001438 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.72 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:22:18.393452   12265 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 10:22:18.393509   12265 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 10:22:18.429056   12265 cri.go:89] found id: ""
	I0916 10:22:18.429118   12265 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 10:22:18.439123   12265 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 10:22:18.448797   12265 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 10:22:18.458281   12265 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 10:22:18.458303   12265 kubeadm.go:157] found existing configuration files:
	
	I0916 10:22:18.458357   12265 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 10:22:18.467304   12265 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 10:22:18.467373   12265 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 10:22:18.476476   12265 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 10:22:18.485402   12265 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 10:22:18.485467   12265 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 10:22:18.494643   12265 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 10:22:18.503578   12265 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 10:22:18.503657   12265 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 10:22:18.512633   12265 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 10:22:18.521391   12265 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 10:22:18.521454   12265 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 10:22:18.530381   12265 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0916 10:22:18.584992   12265 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 10:22:18.585058   12265 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 10:22:18.700906   12265 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 10:22:18.701050   12265 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 10:22:18.701195   12265 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 10:22:18.712665   12265 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 10:22:18.808124   12265 out.go:235]   - Generating certificates and keys ...
	I0916 10:22:18.808238   12265 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 10:22:18.808308   12265 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 10:22:18.808390   12265 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 10:22:18.884612   12265 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 10:22:19.103481   12265 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 10:22:19.230175   12265 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 10:22:19.422850   12265 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 10:22:19.423077   12265 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-001438 localhost] and IPs [192.168.39.72 127.0.0.1 ::1]
	I0916 10:22:19.499430   12265 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 10:22:19.499746   12265 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-001438 localhost] and IPs [192.168.39.72 127.0.0.1 ::1]
	I0916 10:22:19.689533   12265 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 10:22:19.770560   12265 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 10:22:20.159783   12265 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 10:22:20.160053   12265 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 10:22:20.575897   12265 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 10:22:20.728566   12265 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 10:22:21.092038   12265 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 10:22:21.382957   12265 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 10:22:21.446452   12265 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 10:22:21.447068   12265 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 10:22:21.451577   12265 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 10:22:21.454426   12265 out.go:235]   - Booting up control plane ...
	I0916 10:22:21.454540   12265 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 10:22:21.454614   12265 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 10:22:21.454722   12265 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 10:22:21.468531   12265 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 10:22:21.475700   12265 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 10:22:21.475767   12265 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 10:22:21.606009   12265 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 10:22:21.606143   12265 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 10:22:22.124369   12265 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 517.881759ms
	I0916 10:22:22.124492   12265 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 10:22:27.123389   12265 kubeadm.go:310] [api-check] The API server is healthy after 5.002163965s
	I0916 10:22:27.138636   12265 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 10:22:27.154171   12265 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 10:22:27.185604   12265 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 10:22:27.185839   12265 kubeadm.go:310] [mark-control-plane] Marking the node addons-001438 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 10:22:27.198602   12265 kubeadm.go:310] [bootstrap-token] Using token: os1o8m.q16efzg2rjnkpln8
	I0916 10:22:27.199966   12265 out.go:235]   - Configuring RBAC rules ...
	I0916 10:22:27.200085   12265 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 10:22:27.209733   12265 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 10:22:27.218630   12265 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 10:22:27.222473   12265 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 10:22:27.226151   12265 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 10:22:27.230516   12265 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 10:22:27.529586   12265 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 10:22:27.967178   12265 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 10:22:28.529936   12265 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 10:22:28.529960   12265 kubeadm.go:310] 
	I0916 10:22:28.530028   12265 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 10:22:28.530044   12265 kubeadm.go:310] 
	I0916 10:22:28.530137   12265 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 10:22:28.530173   12265 kubeadm.go:310] 
	I0916 10:22:28.530227   12265 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 10:22:28.530307   12265 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 10:22:28.530390   12265 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 10:22:28.530397   12265 kubeadm.go:310] 
	I0916 10:22:28.530463   12265 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 10:22:28.530472   12265 kubeadm.go:310] 
	I0916 10:22:28.530525   12265 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 10:22:28.530537   12265 kubeadm.go:310] 
	I0916 10:22:28.530609   12265 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 10:22:28.530728   12265 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 10:22:28.530832   12265 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 10:22:28.530868   12265 kubeadm.go:310] 
	I0916 10:22:28.530981   12265 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 10:22:28.531080   12265 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 10:22:28.531091   12265 kubeadm.go:310] 
	I0916 10:22:28.531215   12265 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token os1o8m.q16efzg2rjnkpln8 \
	I0916 10:22:28.531358   12265 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:18e2e9ba1c06caae6264fad234e64aee68efc10986a3ab0ed9e448768962daf7 \
	I0916 10:22:28.531389   12265 kubeadm.go:310] 	--control-plane 
	I0916 10:22:28.531397   12265 kubeadm.go:310] 
	I0916 10:22:28.531518   12265 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 10:22:28.531528   12265 kubeadm.go:310] 
	I0916 10:22:28.531639   12265 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token os1o8m.q16efzg2rjnkpln8 \
	I0916 10:22:28.531783   12265 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:18e2e9ba1c06caae6264fad234e64aee68efc10986a3ab0ed9e448768962daf7 
	I0916 10:22:28.532220   12265 kubeadm.go:310] W0916 10:22:18.568727     816 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:22:28.532498   12265 kubeadm.go:310] W0916 10:22:18.569597     816 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:22:28.532623   12265 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
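For context on the kubeadm join commands printed above: the --discovery-token-ca-cert-hash value is the SHA-256 of the cluster CA certificate's Subject Public Key Info (SPKI), which is how joining nodes pin the CA during bootstrap. A short illustrative Go sketch that reproduces the format (the ca.crt path is the one this run copied to the node earlier in the log):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	raw, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the DER-encoded Subject Public Key Info of the CA cert.
	fmt.Printf("sha256:%x\n", sha256.Sum256(cert.RawSubjectPublicKeyInfo))
}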
	I0916 10:22:28.532635   12265 cni.go:84] Creating CNI manager for ""
	I0916 10:22:28.532642   12265 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0916 10:22:28.534239   12265 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0916 10:22:28.535682   12265 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0916 10:22:28.547306   12265 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0916 10:22:28.567029   12265 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 10:22:28.567083   12265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:22:28.567116   12265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-001438 minikube.k8s.io/updated_at=2024_09_16T10_22_28_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=addons-001438 minikube.k8s.io/primary=true
	I0916 10:22:28.599898   12265 ops.go:34] apiserver oom_adj: -16
	I0916 10:22:28.718193   12265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:22:29.219097   12265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:22:29.718331   12265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:22:30.219213   12265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:22:30.718728   12265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:22:31.218997   12265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:22:31.719218   12265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:22:32.218543   12265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:22:32.335651   12265 kubeadm.go:1113] duration metric: took 3.768632423s to wait for elevateKubeSystemPrivileges
	I0916 10:22:32.335685   12265 kubeadm.go:394] duration metric: took 13.942299744s to StartCluster
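The burst of `kubectl get sa default` calls between 10:22:28 and 10:22:32 is a simple readiness poll: minikube retries until the default ServiceAccount exists before counting elevateKubeSystemPrivileges (and StartCluster) as done. A minimal sketch of that wait pattern, reusing the paths visible in the log (the loop itself is an illustrative reconstruction, not minikube's source):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	args := []string{
		"/var/lib/minikube/binaries/v1.31.1/kubectl",
		"get", "sa", "default",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// Exit status 0 means the default ServiceAccount exists.
		if err := exec.Command("sudo", args...).Run(); err == nil {
			fmt.Println("default ServiceAccount is ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for default ServiceAccount")
}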
	I0916 10:22:32.335709   12265 settings.go:142] acquiring lock: {Name:mka3093e7066e81538e0a2685a28fdafde8d951b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:32.335851   12265 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3851/kubeconfig
	I0916 10:22:32.336274   12265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/kubeconfig: {Name:mkd6460249badead51f07eabf604da45528f6a24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:32.336491   12265 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 10:22:32.336522   12265 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.72 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 10:22:32.336653   12265 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0916 10:22:32.336724   12265 config.go:182] Loaded profile config "addons-001438": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:22:32.336769   12265 addons.go:69] Setting default-storageclass=true in profile "addons-001438"
	I0916 10:22:32.336779   12265 addons.go:69] Setting ingress-dns=true in profile "addons-001438"
	I0916 10:22:32.336787   12265 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-001438"
	I0916 10:22:32.336780   12265 addons.go:69] Setting ingress=true in profile "addons-001438"
	I0916 10:22:32.336793   12265 addons.go:69] Setting cloud-spanner=true in profile "addons-001438"
	I0916 10:22:32.336813   12265 addons.go:69] Setting inspektor-gadget=true in profile "addons-001438"
	I0916 10:22:32.336820   12265 addons.go:69] Setting gcp-auth=true in profile "addons-001438"
	I0916 10:22:32.336832   12265 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-001438"
	I0916 10:22:32.336835   12265 addons.go:234] Setting addon cloud-spanner=true in "addons-001438"
	I0916 10:22:32.336828   12265 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-001438"
	I0916 10:22:32.336844   12265 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-001438"
	I0916 10:22:32.336825   12265 addons.go:234] Setting addon inspektor-gadget=true in "addons-001438"
	I0916 10:22:32.336853   12265 addons.go:69] Setting registry=true in profile "addons-001438"
	I0916 10:22:32.336867   12265 addons.go:234] Setting addon registry=true in "addons-001438"
	I0916 10:22:32.336883   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.336888   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.336896   12265 addons.go:69] Setting helm-tiller=true in profile "addons-001438"
	I0916 10:22:32.336908   12265 addons.go:234] Setting addon helm-tiller=true in "addons-001438"
	I0916 10:22:32.336937   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.336940   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.336844   12265 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-001438"
	I0916 10:22:32.337250   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.337262   12265 addons.go:69] Setting volcano=true in profile "addons-001438"
	I0916 10:22:32.337273   12265 addons.go:234] Setting addon volcano=true in "addons-001438"
	I0916 10:22:32.337281   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.337295   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.337315   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.337328   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.336808   12265 addons.go:234] Setting addon ingress=true in "addons-001438"
	I0916 10:22:32.337347   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.337348   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.337365   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.337367   12265 addons.go:69] Setting volumesnapshots=true in profile "addons-001438"
	I0916 10:22:32.337379   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.337381   12265 addons.go:234] Setting addon volumesnapshots=true in "addons-001438"
	I0916 10:22:32.337412   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.336888   12265 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-001438"
	I0916 10:22:32.337442   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.336769   12265 addons.go:69] Setting yakd=true in profile "addons-001438"
	I0916 10:22:32.337489   12265 addons.go:234] Setting addon yakd=true in "addons-001438"
	I0916 10:22:32.337633   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.337660   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.336835   12265 addons.go:69] Setting metrics-server=true in profile "addons-001438"
	I0916 10:22:32.337353   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.337714   12265 addons.go:234] Setting addon metrics-server=true in "addons-001438"
	I0916 10:22:32.337741   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.337700   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.337795   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.336844   12265 mustload.go:65] Loading cluster: addons-001438
	I0916 10:22:32.336824   12265 addons.go:69] Setting storage-provisioner=true in profile "addons-001438"
	I0916 10:22:32.337840   12265 addons.go:234] Setting addon storage-provisioner=true in "addons-001438"
	I0916 10:22:32.337328   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.337881   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.336804   12265 addons.go:234] Setting addon ingress-dns=true in "addons-001438"
	I0916 10:22:32.337251   12265 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-001438"
	I0916 10:22:32.337944   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.338072   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.338099   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.338127   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.338301   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.338331   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.338413   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.338421   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.338448   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.338455   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.338446   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.338765   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.338792   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.338818   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.338845   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.338995   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.339304   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.339363   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.342405   12265 out.go:177] * Verifying Kubernetes components...
	I0916 10:22:32.343665   12265 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:22:32.357106   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34833
	I0916 10:22:32.357444   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35947
	I0916 10:22:32.357655   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37677
	I0916 10:22:32.357797   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.357897   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.358211   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.358403   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.358419   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.358562   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.358574   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.358633   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37893
	I0916 10:22:32.358790   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.358949   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.358960   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.359007   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44299
	I0916 10:22:32.369699   12265 config.go:182] Loaded profile config "addons-001438": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:22:32.369748   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.369818   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.370020   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.370060   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.370069   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.370101   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.370194   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.370269   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.370379   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.370390   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.370789   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.370827   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.370908   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.370969   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.371094   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.371111   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.371475   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.371508   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.371573   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.371638   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.371663   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.371731   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.386697   12265 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-001438"
	I0916 10:22:32.386747   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.386763   12265 addons.go:234] Setting addon default-storageclass=true in "addons-001438"
	I0916 10:22:32.386810   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.387114   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.387173   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.387252   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.387291   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.408433   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37283
	I0916 10:22:32.409200   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.409836   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.409856   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.410249   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.410830   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.410872   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.411145   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42803
	I0916 10:22:32.411578   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.413298   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.413319   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.414168   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34147
	I0916 10:22:32.414190   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45747
	I0916 10:22:32.414292   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36809
	I0916 10:22:32.414570   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.414671   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.415178   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.415195   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.415681   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.416214   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.416252   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.416442   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42029
	I0916 10:22:32.416592   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.417197   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.417231   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.417415   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40175
	I0916 10:22:32.417454   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.417595   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.417608   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.417843   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.417917   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.418037   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.418050   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.418410   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.418443   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.418409   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.418501   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.419031   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.419065   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.419266   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.419281   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.419404   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.419414   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.419702   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.419847   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.420545   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.421091   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.421133   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.421574   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.421979   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40845
	I0916 10:22:32.422963   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.423382   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.423399   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.423697   12265 out.go:177]   - Using image docker.io/registry:2.8.3
	I0916 10:22:32.423813   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.424320   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.424354   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.425846   12265 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0916 10:22:32.425941   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45967
	I0916 10:22:32.426062   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42039
	I0916 10:22:32.426213   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40085
	I0916 10:22:32.426381   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.426757   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.426931   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.426942   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.426976   12265 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0916 10:22:32.426992   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0916 10:22:32.427011   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.427391   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.427470   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.427489   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.427946   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.428354   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.428385   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.428598   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.428889   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.428924   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.429188   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.429202   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.429517   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.431934   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33361
	I0916 10:22:32.431987   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.432541   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.432563   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.432751   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.432883   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.432998   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.433120   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:32.433712   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.435531   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.435730   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:32.435742   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:32.435888   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:32.435899   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:32.435907   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:32.435913   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:32.436070   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:32.436085   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	W0916 10:22:32.436166   12265 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0916 10:22:32.440699   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35285
	I0916 10:22:32.441072   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.441617   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.441644   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.441979   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.442497   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.442531   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.450769   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36009
	I0916 10:22:32.451259   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.451700   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.451718   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.452549   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.453092   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.453146   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.454430   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42107
	I0916 10:22:32.454743   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.455061   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.455149   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45965
	I0916 10:22:32.455842   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.455847   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.455860   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.455871   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.455922   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.456243   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.456542   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.456622   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.456639   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.456747   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.457901   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34395
	I0916 10:22:32.458037   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.458209   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.458254   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.458704   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.458721   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.459089   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.459296   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.459533   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.460121   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.460511   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.460545   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.460978   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41771
	I0916 10:22:32.461180   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.461244   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.461735   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.461753   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.461805   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.462195   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46479
	I0916 10:22:32.462331   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.462809   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.464034   12265 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0916 10:22:32.464150   12265 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0916 10:22:32.464278   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.464668   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.464696   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.465237   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.466010   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.465566   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42201
	I0916 10:22:32.466246   12265 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0916 10:22:32.466259   12265 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0916 10:22:32.466276   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.467014   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.467145   12265 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 10:22:32.467235   12265 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0916 10:22:32.467359   12265 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0916 10:22:32.467370   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0916 10:22:32.467385   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.467696   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.467711   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.468100   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.468152   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.468326   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.468710   12265 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0916 10:22:32.468725   12265 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0916 10:22:32.468742   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.468966   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39539
	I0916 10:22:32.469146   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.469463   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.469917   12265 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 10:22:32.469918   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.470000   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.470971   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43697
	I0916 10:22:32.471473   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.471695   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.472001   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.472015   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.472269   12265 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0916 10:22:32.472471   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.472523   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43687
	I0916 10:22:32.472664   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.472783   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.472993   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.473106   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.473134   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.473329   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.473377   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.473597   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.473743   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.473790   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.473851   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.474147   12265 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0916 10:22:32.474163   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0916 10:22:32.474178   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.474793   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.474941   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.474955   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.475234   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:32.475510   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.475619   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.475650   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.475665   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.475824   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.476100   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.476264   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.476604   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.476644   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.476828   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:32.476940   12265 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0916 10:22:32.477612   12265 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 10:22:32.478260   12265 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 10:22:32.478276   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0916 10:22:32.478291   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.478585   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.478604   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.478880   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.479035   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.479168   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.479299   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:32.479921   12265 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:22:32.479937   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 10:22:32.479951   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.480259   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.480742   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.481958   12265 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0916 10:22:32.482834   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43787
	I0916 10:22:32.482998   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.483118   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.483310   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.483473   12265 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0916 10:22:32.483494   12265 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0916 10:22:32.483512   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.483802   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.483828   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.483888   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.483903   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.483899   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.483930   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.484092   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.484159   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.484194   12265 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0916 10:22:32.484411   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.484581   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.484636   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.484681   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:32.484892   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.484958   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.485096   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.485218   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.485248   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.485262   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.485372   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:32.485494   12265 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0916 10:22:32.485505   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0916 10:22:32.485519   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.485781   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.486028   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.486181   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.486318   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:32.487186   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.487422   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.487675   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.487695   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.487742   12265 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 10:22:32.487752   12265 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 10:22:32.487764   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.487810   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.487995   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.488225   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.488378   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:32.489702   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.490168   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.490188   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.490394   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.490571   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.490713   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.490823   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:32.492068   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.492458   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.492479   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.492686   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.492906   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.492915   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39577
	I0916 10:22:32.493044   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.493239   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:32.493450   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.493933   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.493950   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.494562   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.494891   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.496932   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.498147   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46021
	I0916 10:22:32.498828   12265 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0916 10:22:32.499232   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.499608   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.499634   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.499936   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.500124   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.500215   12265 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0916 10:22:32.500241   12265 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0916 10:22:32.500262   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.501763   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.503323   12265 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0916 10:22:32.503738   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.504260   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.504287   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.504422   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.504578   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.504721   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.504800   12265 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0916 10:22:32.504813   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0916 10:22:32.504828   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.504844   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:32.507073   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35053
	I0916 10:22:32.507489   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.507971   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.507994   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.508014   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37447
	I0916 10:22:32.508383   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.508455   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44827
	I0916 10:22:32.508996   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.509012   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.509054   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.509082   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.509517   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.509552   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.509551   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.509573   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.509882   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.510086   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.510151   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.510169   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.510570   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.510576   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.510696   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.510739   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.510822   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.510947   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	W0916 10:22:32.511685   12265 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:43352->192.168.39.72:22: read: connection reset by peer
	I0916 10:22:32.511711   12265 retry.go:31] will retry after 323.390168ms: ssh: handshake failed: read tcp 192.168.39.1:43352->192.168.39.72:22: read: connection reset by peer
	I0916 10:22:32.513110   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.513548   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.515216   12265 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0916 10:22:32.516467   12265 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0916 10:22:32.517228   12265 out.go:177]   - Using image docker.io/busybox:stable
	I0916 10:22:32.518463   12265 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0916 10:22:32.519691   12265 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0916 10:22:32.521193   12265 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0916 10:22:32.521287   12265 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 10:22:32.521309   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0916 10:22:32.521330   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.523957   12265 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0916 10:22:32.524563   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.524915   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.524939   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.525078   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.525271   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.525408   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.525548   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	W0916 10:22:32.526174   12265 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:43362->192.168.39.72:22: read: connection reset by peer
	I0916 10:22:32.526199   12265 retry.go:31] will retry after 208.869548ms: ssh: handshake failed: read tcp 192.168.39.1:43362->192.168.39.72:22: read: connection reset by peer
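The paired dial-failure/retry lines above come from sshutil attempting the SSH handshake before the guest's sshd is ready, with retry.go backing off and trying again. A minimal sketch of that dial-with-retry shape, assuming a placeholder dial function and arbitrary backoff constants (illustrative only, not minikube's retry.go):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // dialGuest stands in for the real SSH handshake; a plain TCP dial is
    // enough to show the retry structure.
    func dialGuest(addr string) (net.Conn, error) {
        return net.DialTimeout("tcp", addr, 5*time.Second)
    }

    func dialWithRetry(addr string, attempts int) (net.Conn, error) {
        backoff := 200 * time.Millisecond
        var lastErr error
        for i := 0; i < attempts; i++ {
            conn, err := dialGuest(addr)
            if err == nil {
                return conn, nil
            }
            lastErr = err
            fmt.Printf("dial failure (will retry after %v): %v\n", backoff, err)
            time.Sleep(backoff)
            backoff *= 2 // double the wait after every failure
        }
        return nil, fmt.Errorf("giving up after %d attempts: %w", attempts, lastErr)
    }

    func main() {
        if conn, err := dialWithRetry("192.168.39.72:22", 5); err == nil {
            conn.Close()
        }
    }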
	I0916 10:22:32.526327   12265 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0916 10:22:32.527568   12265 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0916 10:22:32.528811   12265 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0916 10:22:32.530140   12265 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0916 10:22:32.530154   12265 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0916 10:22:32.530169   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.533281   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.533666   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.533688   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.533886   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.534072   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.534227   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.534367   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
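Each sshutil.go:53 line above constructs an SSH client from the machine's IP, port 22, the per-machine id_rsa key, and the docker user. A self-contained sketch of that construction with golang.org/x/crypto/ssh, reusing the address and key path from the log; host-key verification is skipped only to keep the example short, and this is not minikube's sshutil package:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func newSSHClient(addr, user, keyPath string) (*ssh.Client, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return nil, err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return nil, err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM, not for production
        }
        return ssh.Dial("tcp", addr, cfg)
    }

    func main() {
        client, err := newSSHClient("192.168.39.72:22", "docker",
            "/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa")
        if err != nil {
            fmt.Println("dial failed:", err)
            return
        }
        defer client.Close()
        fmt.Println("connected")
    }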
	I0916 10:22:32.700911   12265 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:22:32.700984   12265 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 10:22:32.785482   12265 node_ready.go:35] waiting up to 6m0s for node "addons-001438" to be "Ready" ...
	I0916 10:22:32.822842   12265 node_ready.go:49] node "addons-001438" has status "Ready":"True"
	I0916 10:22:32.822881   12265 node_ready.go:38] duration metric: took 37.361645ms for node "addons-001438" to be "Ready" ...
	I0916 10:22:32.822895   12265 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
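node_ready.go and pod_ready.go above poll the API server until the node (and later each control-plane pod) reports condition Ready=True. A minimal client-go sketch of the node-side check; the kubeconfig path, node name, timeout, and poll interval are taken from or modeled on the log, and the pod case differs only in reading corev1.PodReady from the pod's status conditions:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the named node carries condition Ready=True.
    func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(6 * time.Minute)
        for time.Now().Before(deadline) {
            if ok, err := nodeReady(cs, "addons-001438"); err == nil && ok {
                fmt.Println("node is Ready")
                return
            }
            time.Sleep(2 * time.Second) // poll interval chosen arbitrarily
        }
        fmt.Println("timed out waiting for node")
    }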
	I0916 10:22:32.861506   12265 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0916 10:22:32.861543   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0916 10:22:32.862634   12265 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-001438" in "kube-system" namespace to be "Ready" ...
	I0916 10:22:32.929832   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 10:22:32.943014   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:22:32.952437   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 10:22:32.991347   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
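The Run: lines above all follow one pattern: invoke the kubectl binary bundled on the guest with KUBECONFIG pointed at /var/lib/minikube/kubeconfig and one -f flag per staged manifest under /etc/kubernetes/addons. A small sketch of assembling that invocation with os/exec; minikube actually executes it over SSH on the VM via ssh_runner, so this local version only shows the command's shape:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // applyManifests shells out to kubectl with one -f flag per file, the same
    // shape as the apply commands in the log.
    func applyManifests(kubectl, kubeconfig string, files []string) error {
        args := []string{"apply"}
        for _, f := range files {
            args = append(args, "-f", f)
        }
        cmd := exec.Command(kubectl, args...)
        cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
        cmd.Stdout = os.Stdout
        cmd.Stderr = os.Stderr
        return cmd.Run()
    }

    func main() {
        err := applyManifests(
            "/var/lib/minikube/binaries/v1.31.1/kubectl",
            "/var/lib/minikube/kubeconfig",
            []string{
                "/etc/kubernetes/addons/ingress-dns-pod.yaml",
                "/etc/kubernetes/addons/storage-provisioner.yaml",
            },
        )
        if err != nil {
            fmt.Println("apply failed:", err)
        }
    }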
	I0916 10:22:32.995067   12265 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0916 10:22:32.995096   12265 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0916 10:22:33.036627   12265 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0916 10:22:33.036657   12265 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0916 10:22:33.036890   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0916 10:22:33.060821   12265 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0916 10:22:33.060843   12265 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0916 10:22:33.069120   12265 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0916 10:22:33.069156   12265 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0916 10:22:33.070018   12265 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0916 10:22:33.070038   12265 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0916 10:22:33.073512   12265 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0916 10:22:33.073535   12265 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0916 10:22:33.137058   12265 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0916 10:22:33.137088   12265 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0916 10:22:33.226855   12265 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0916 10:22:33.226884   12265 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0916 10:22:33.270492   12265 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0916 10:22:33.270513   12265 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0916 10:22:33.316169   12265 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0916 10:22:33.316195   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0916 10:22:33.316355   12265 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0916 10:22:33.316373   12265 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0916 10:22:33.316509   12265 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0916 10:22:33.316522   12265 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0916 10:22:33.327110   12265 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0916 10:22:33.327126   12265 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0916 10:22:33.354597   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0916 10:22:33.420390   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 10:22:33.435680   12265 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0916 10:22:33.435717   12265 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0916 10:22:33.439954   12265 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0916 10:22:33.439978   12265 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0916 10:22:33.444981   12265 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 10:22:33.445002   12265 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0916 10:22:33.522524   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0916 10:22:33.536060   12265 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0916 10:22:33.536089   12265 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0916 10:22:33.569830   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 10:22:33.590335   12265 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0916 10:22:33.590366   12265 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0916 10:22:33.601121   12265 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0916 10:22:33.601154   12265 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0916 10:22:33.623197   12265 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:22:33.623219   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0916 10:22:33.629904   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0916 10:22:33.693404   12265 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0916 10:22:33.693424   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0916 10:22:33.747802   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:22:33.761431   12265 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0916 10:22:33.761461   12265 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0916 10:22:33.774811   12265 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0916 10:22:33.774845   12265 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0916 10:22:33.825893   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0916 10:22:33.895859   12265 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0916 10:22:33.895893   12265 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0916 10:22:34.018321   12265 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0916 10:22:34.018345   12265 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0916 10:22:34.260751   12265 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0916 10:22:34.260776   12265 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0916 10:22:34.288705   12265 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0916 10:22:34.288733   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0916 10:22:34.575904   12265 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0916 10:22:34.575932   12265 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0916 10:22:34.578707   12265 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0916 10:22:34.578728   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0916 10:22:34.872174   12265 pod_ready.go:103] pod "etcd-addons-001438" in "kube-system" namespace has status "Ready":"False"
	I0916 10:22:35.002110   12265 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0916 10:22:35.002133   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0916 10:22:35.053333   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0916 10:22:35.173148   12265 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.47211504s)
	I0916 10:22:35.173178   12265 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
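start.go:971 above reports the result of the kubectl | sed | kubectl replace pipeline a few lines earlier: a hosts{} block answering host.minikube.internal with the libvirt gateway address is spliced into the coredns Corefile just ahead of the forward plugin. A sketch of the same edit done through client-go instead of sed; the clientset is assumed to exist (built as in the node-readiness sketch), and the pipeline's extra step of adding the log plugin is omitted:

    package corednshosts

    import (
        "context"
        "fmt"
        "strings"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // InjectHostRecord inserts a hosts{} block for host.minikube.internal in
    // front of the forward plugin in the coredns Corefile.
    func InjectHostRecord(cs *kubernetes.Clientset, hostIP string) error {
        ctx := context.TODO()
        cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            return err
        }
        hosts := fmt.Sprintf("hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n        ", hostIP)
        cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"],
            "forward . /etc/resolv.conf", hosts+"forward . /etc/resolv.conf", 1)
        _, err = cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{})
        return err
    }

    // Usage: InjectHostRecord(cs, "192.168.39.1")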
	I0916 10:22:35.173148   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.243289168s)
	I0916 10:22:35.173338   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:35.173355   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:35.173706   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:35.173723   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:35.173737   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:35.173751   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:35.173762   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:35.174037   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:35.174053   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:35.219712   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:35.219745   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:35.220033   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:35.220084   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:35.326225   12265 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0916 10:22:35.326245   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0916 10:22:35.667079   12265 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 10:22:35.667102   12265 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0916 10:22:35.677467   12265 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-001438" context rescaled to 1 replicas
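kapi.go:214 above rescales the coredns deployment to a single replica for this single-node cluster. A minimal client-go sketch of that rescale through the scale subresource, assuming a clientset built as in the node-readiness sketch; the names mirror the log, but this is not minikube's kapi package:

    package rescale

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // scaleDeployment sets the replica count of a deployment via its scale
    // subresource, the operation the "rescaled to 1 replicas" line reports.
    func scaleDeployment(cs *kubernetes.Clientset, namespace, name string, replicas int32) error {
        ctx := context.TODO()
        scale, err := cs.AppsV1().Deployments(namespace).GetScale(ctx, name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        scale.Spec.Replicas = replicas
        _, err = cs.AppsV1().Deployments(namespace).UpdateScale(ctx, name, scale, metav1.UpdateOptions{})
        return err
    }

    // Usage: scaleDeployment(cs, "kube-system", "coredns", 1)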
	I0916 10:22:36.005922   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 10:22:36.880549   12265 pod_ready.go:103] pod "etcd-addons-001438" in "kube-system" namespace has status "Ready":"False"
	I0916 10:22:37.248962   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.296492058s)
	I0916 10:22:37.249022   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:37.249036   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:37.249050   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.306004364s)
	I0916 10:22:37.249050   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.257675255s)
	I0916 10:22:37.249138   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:37.249160   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:37.249084   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:37.249221   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:37.249330   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:37.249355   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:37.249374   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:37.249434   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:37.249460   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:37.249476   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:37.249440   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:37.249499   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:37.249529   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:37.249541   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:37.249485   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:37.249593   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:37.249655   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:37.249676   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:37.251028   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:37.251216   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:37.251214   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:37.251232   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:37.251278   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:37.251288   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:38.978538   12265 pod_ready.go:93] pod "etcd-addons-001438" in "kube-system" namespace has status "Ready":"True"
	I0916 10:22:38.978561   12265 pod_ready.go:82] duration metric: took 6.115904528s for pod "etcd-addons-001438" in "kube-system" namespace to be "Ready" ...
	I0916 10:22:38.978572   12265 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-001438" in "kube-system" namespace to be "Ready" ...
	I0916 10:22:39.179661   12265 pod_ready.go:93] pod "kube-apiserver-addons-001438" in "kube-system" namespace has status "Ready":"True"
	I0916 10:22:39.179691   12265 pod_ready.go:82] duration metric: took 201.112317ms for pod "kube-apiserver-addons-001438" in "kube-system" namespace to be "Ready" ...
	I0916 10:22:39.179705   12265 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-001438" in "kube-system" namespace to be "Ready" ...
	I0916 10:22:39.377607   12265 pod_ready.go:93] pod "kube-controller-manager-addons-001438" in "kube-system" namespace has status "Ready":"True"
	I0916 10:22:39.377640   12265 pod_ready.go:82] duration metric: took 197.926831ms for pod "kube-controller-manager-addons-001438" in "kube-system" namespace to be "Ready" ...
	I0916 10:22:39.377656   12265 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-66flj" in "kube-system" namespace to be "Ready" ...
	I0916 10:22:39.509747   12265 pod_ready.go:93] pod "kube-proxy-66flj" in "kube-system" namespace has status "Ready":"True"
	I0916 10:22:39.509775   12265 pod_ready.go:82] duration metric: took 132.110984ms for pod "kube-proxy-66flj" in "kube-system" namespace to be "Ready" ...
	I0916 10:22:39.509789   12265 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-001438" in "kube-system" namespace to be "Ready" ...
	I0916 10:22:39.633441   12265 pod_ready.go:93] pod "kube-scheduler-addons-001438" in "kube-system" namespace has status "Ready":"True"
	I0916 10:22:39.633475   12265 pod_ready.go:82] duration metric: took 123.676997ms for pod "kube-scheduler-addons-001438" in "kube-system" namespace to be "Ready" ...
	I0916 10:22:39.633487   12265 pod_ready.go:39] duration metric: took 6.810577473s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:22:39.633508   12265 api_server.go:52] waiting for apiserver process to appear ...
	I0916 10:22:39.633572   12265 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:22:39.633966   12265 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0916 10:22:39.634003   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:39.637511   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:39.638022   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:39.638050   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:39.638265   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:39.638449   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:39.638594   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:39.638741   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:40.248183   12265 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0916 10:22:40.342621   12265 addons.go:234] Setting addon gcp-auth=true in "addons-001438"
	I0916 10:22:40.342682   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:40.343054   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:40.343105   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:40.358807   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35001
	I0916 10:22:40.359276   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:40.359793   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:40.359818   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:40.360152   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:40.360750   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:40.360794   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:40.375531   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39519
	I0916 10:22:40.375999   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:40.376410   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:40.376431   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:40.376712   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:40.376880   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:40.378466   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:40.378706   12265 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0916 10:22:40.378736   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:40.381488   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:40.381978   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:40.381997   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:40.382162   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:40.382374   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:40.382527   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:40.382728   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:41.185716   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.148787276s)
	I0916 10:22:41.185775   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.185787   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.185792   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.831162948s)
	I0916 10:22:41.185821   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.185842   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.185899   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.76548291s)
	I0916 10:22:41.185927   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.185929   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.663383888s)
	I0916 10:22:41.185940   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.185948   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.185957   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.186031   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.616165984s)
	I0916 10:22:41.186072   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.186084   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.186162   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.55623363s)
	I0916 10:22:41.186179   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.186188   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.186223   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:41.186233   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.186246   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.186249   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.186259   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.186272   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.186279   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.186259   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.186321   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.438489786s)
	W0916 10:22:41.186349   12265 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0916 10:22:41.186370   12265 retry.go:31] will retry after 282.502814ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
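The "ensure CRDs are installed first" failure above is the usual ordering race when a single kubectl apply creates both the VolumeSnapshot CRDs and a VolumeSnapshotClass object: the REST mapping for the new kind only exists once the CRDs are established, so the first apply fails and minikube schedules a retry (and, as the 10:22:41.469 line further down shows, re-runs the apply with --force). A minimal Go sketch of that retry-with-backoff pattern, written here purely for illustration and not taken from minikube's retry.go, could look like this:

	package main

	import (
		"fmt"
		"time"
	)

	// retryApply re-runs apply() with a growing delay until it succeeds or the
	// attempts are exhausted. Names and backoff policy are assumptions for
	// illustration, not minikube's actual implementation.
	func retryApply(apply func() error, attempts int, backoff time.Duration) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = apply(); err == nil {
				return nil
			}
			fmt.Printf("apply failed, will retry after %s: %v\n", backoff, err)
			time.Sleep(backoff)
			backoff *= 2
		}
		return fmt.Errorf("apply did not succeed after %d attempts: %w", attempts, err)
	}

	func main() {
		calls := 0
		// Hypothetical apply that fails once (CRDs not yet established), then succeeds.
		err := retryApply(func() error {
			calls++
			if calls == 1 {
				return fmt.Errorf("no matches for kind \"VolumeSnapshotClass\"")
			}
			return nil
		}, 3, 300*time.Millisecond)
		fmt.Println("result:", err)
	}

An alternative to retrying is to apply the CRD manifests on their own and wait for them with kubectl wait --for=condition=established crd/volumesnapshotclasses.snapshot.storage.k8s.io before applying the snapshot class.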
	I0916 10:22:41.186323   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.186452   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.360528333s)
	I0916 10:22:41.186474   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.186483   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.186530   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:41.186552   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:41.186580   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:41.186592   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.133220852s)
	I0916 10:22:41.186602   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.186608   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.186609   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.186627   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.186684   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.186691   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.186698   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.186704   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.186797   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:41.186819   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.186826   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.186833   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.186851   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:41.186871   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.186884   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.186893   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.186901   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.186907   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.186936   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.186943   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.186990   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.186999   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.187006   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.187013   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.187860   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:41.187892   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.187899   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.187906   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.187912   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.188173   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:41.188191   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.188200   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.188204   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.188209   12265 addons.go:475] Verifying addon metrics-server=true in "addons-001438"
	I0916 10:22:41.188211   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.188241   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.188250   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.188259   12265 addons.go:475] Verifying addon ingress=true in "addons-001438"
	I0916 10:22:41.190004   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:41.190036   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.190042   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.190099   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:41.190137   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:41.190141   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.190152   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.190155   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.190159   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.190162   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.190167   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.190170   12265 addons.go:475] Verifying addon registry=true in "addons-001438"
	I0916 10:22:41.190534   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:41.190570   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.190579   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.191944   12265 out.go:177] * Verifying registry addon...
	I0916 10:22:41.191953   12265 out.go:177] * Verifying ingress addon...
	I0916 10:22:41.192858   12265 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-001438 service yakd-dashboard -n yakd-dashboard
	
	I0916 10:22:41.193752   12265 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0916 10:22:41.193752   12265 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0916 10:22:41.245022   12265 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0916 10:22:41.245042   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:41.245048   12265 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0916 10:22:41.245062   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
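The repeated kapi.go:96 lines that follow are minikube polling each label selector until the matching pods leave Pending. A rough client-go equivalent of that wait loop, a sketch under assumed names rather than the actual kapi.go code, is:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForLabel polls pods matching selector in ns until all of them report
	// Running or the timeout expires. Illustrative only; the real kapi.go logic
	// differs in detail (it also tracks container readiness).
	func waitForLabel(ctx context.Context, cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err == nil && len(pods.Items) > 0 {
				running := true
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						running = false
						break
					}
				}
				if running {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("pods with selector %q in %q not Running within %s", selector, ns, timeout)
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitForLabel(context.Background(), cs, "kube-system",
			"kubernetes.io/minikube-addons=registry", 6*time.Minute); err != nil {
			fmt.Println(err)
		}
	}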
	I0916 10:22:41.270906   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.270924   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.271190   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.271210   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.469044   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:22:41.699366   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:41.699576   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:42.200823   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:42.201220   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:42.707853   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:42.708238   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:43.062276   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.056308906s)
	I0916 10:22:43.062328   12265 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.428733709s)
	I0916 10:22:43.062359   12265 api_server.go:72] duration metric: took 10.72580389s to wait for apiserver process to appear ...
	I0916 10:22:43.062372   12265 api_server.go:88] waiting for apiserver healthz status ...
	I0916 10:22:43.062397   12265 api_server.go:253] Checking apiserver healthz at https://192.168.39.72:8443/healthz ...
	I0916 10:22:43.062411   12265 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.683683571s)
	I0916 10:22:43.062334   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:43.062455   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:43.062799   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:43.062819   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:43.062830   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:43.062838   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:43.062846   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:43.063060   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:43.063085   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:43.063094   12265 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-001438"
	I0916 10:22:43.064955   12265 out.go:177] * Verifying csi-hostpath-driver addon...
	I0916 10:22:43.065015   12265 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 10:22:43.066605   12265 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0916 10:22:43.067509   12265 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0916 10:22:43.067847   12265 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0916 10:22:43.067859   12265 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0916 10:22:43.093271   12265 api_server.go:279] https://192.168.39.72:8443/healthz returned 200:
	ok
	I0916 10:22:43.093820   12265 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0916 10:22:43.093839   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:43.095011   12265 api_server.go:141] control plane version: v1.31.1
	I0916 10:22:43.095033   12265 api_server.go:131] duration metric: took 32.653755ms to wait for apiserver health ...
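The healthz probe above (GET https://192.168.39.72:8443/healthz returning 200, followed by reading the control-plane version) can be reproduced with a few lines of Go. This sketch skips certificate verification for brevity; a real check would trust the cluster CA and use the kubeconfig credentials instead:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Illustrative healthz probe only; InsecureSkipVerify is used here solely
		// to keep the example short.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.39.72:8443/healthz")
		if err != nil {
			fmt.Println("healthz check failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
	}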
	I0916 10:22:43.095043   12265 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 10:22:43.123828   12265 system_pods.go:59] 19 kube-system pods found
	I0916 10:22:43.123858   12265 system_pods.go:61] "coredns-7c65d6cfc9-j5ndn" [207f35d6-991e-4f00-8881-a877648e3c38] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0916 10:22:43.123864   12265 system_pods.go:61] "coredns-7c65d6cfc9-pzm59" [f910982f-9f91-4da6-ba1d-d7eb1a992baa] Running
	I0916 10:22:43.123871   12265 system_pods.go:61] "csi-hostpath-attacher-0" [15e8a432-87ee-461f-96ce-576b2587d960] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0916 10:22:43.123876   12265 system_pods.go:61] "csi-hostpath-resizer-0" [db26d555-4e0f-4738-bd80-a27dc57d7534] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0916 10:22:43.123883   12265 system_pods.go:61] "csi-hostpathplugin-xgk62" [dd216434-c2ed-4884-92ea-f80bec8e2fcc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0916 10:22:43.123886   12265 system_pods.go:61] "etcd-addons-001438" [5c7e7021-4329-43f8-90cc-196afcb3b9f5] Running
	I0916 10:22:43.123903   12265 system_pods.go:61] "kube-apiserver-addons-001438" [b8c3f368-41ad-4840-aa92-014d25030925] Running
	I0916 10:22:43.123906   12265 system_pods.go:61] "kube-controller-manager-addons-001438" [9606f8aa-be05-4d1e-b5c9-9e625663d5de] Running
	I0916 10:22:43.123913   12265 system_pods.go:61] "kube-ingress-dns-minikube" [10ccbaa1-333f-4586-a1d5-dc73421e2bd1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0916 10:22:43.123917   12265 system_pods.go:61] "kube-proxy-66flj" [56e16daa-1626-4b83-a183-7b9ad90ea2d6] Running
	I0916 10:22:43.123923   12265 system_pods.go:61] "kube-scheduler-addons-001438" [a9909fcc-06cd-4e4e-b6be-d0a54a31df94] Running
	I0916 10:22:43.123928   12265 system_pods.go:61] "metrics-server-84c5f94fbc-9hj9f" [76382ab7-9b7a-4bd6-b19c-7a77ba051f1d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 10:22:43.123935   12265 system_pods.go:61] "nvidia-device-plugin-daemonset-j6n9b" [83260537-f74d-40a8-bcbc-db785a97aac8] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0916 10:22:43.123943   12265 system_pods.go:61] "registry-66c9cd494c-jq22w" [04e85c00-e6fb-4eee-96aa-273a4f6f273f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 10:22:43.123948   12265 system_pods.go:61] "registry-proxy-kk7lc" [2f0e1170-c654-4939-91ca-cd5b2bf6ae2a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 10:22:43.123955   12265 system_pods.go:61] "snapshot-controller-56fcc65765-8nq94" [7b65ff07-8e47-4c5a-883c-f6470e930f61] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 10:22:43.123960   12265 system_pods.go:61] "snapshot-controller-56fcc65765-pv2sr" [85f5bbdb-96af-4f7d-aef3-644db7166242] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 10:22:43.123967   12265 system_pods.go:61] "storage-provisioner" [c435c6db-b60d-4298-9687-bb885202e358] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0916 10:22:43.123972   12265 system_pods.go:61] "tiller-deploy-b48cc5f79-b76fb" [a96b112c-4171-4416-9e14-ac1f69fd033e] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0916 10:22:43.123980   12265 system_pods.go:74] duration metric: took 28.931422ms to wait for pod list to return data ...
	I0916 10:22:43.123988   12265 default_sa.go:34] waiting for default service account to be created ...
	I0916 10:22:43.137057   12265 default_sa.go:45] found service account: "default"
	I0916 10:22:43.137084   12265 default_sa.go:55] duration metric: took 13.088793ms for default service account to be created ...
	I0916 10:22:43.137095   12265 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 10:22:43.166020   12265 system_pods.go:86] 19 kube-system pods found
	I0916 10:22:43.166054   12265 system_pods.go:89] "coredns-7c65d6cfc9-j5ndn" [207f35d6-991e-4f00-8881-a877648e3c38] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0916 10:22:43.166063   12265 system_pods.go:89] "coredns-7c65d6cfc9-pzm59" [f910982f-9f91-4da6-ba1d-d7eb1a992baa] Running
	I0916 10:22:43.166075   12265 system_pods.go:89] "csi-hostpath-attacher-0" [15e8a432-87ee-461f-96ce-576b2587d960] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0916 10:22:43.166088   12265 system_pods.go:89] "csi-hostpath-resizer-0" [db26d555-4e0f-4738-bd80-a27dc57d7534] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0916 10:22:43.166100   12265 system_pods.go:89] "csi-hostpathplugin-xgk62" [dd216434-c2ed-4884-92ea-f80bec8e2fcc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0916 10:22:43.166108   12265 system_pods.go:89] "etcd-addons-001438" [5c7e7021-4329-43f8-90cc-196afcb3b9f5] Running
	I0916 10:22:43.166118   12265 system_pods.go:89] "kube-apiserver-addons-001438" [b8c3f368-41ad-4840-aa92-014d25030925] Running
	I0916 10:22:43.166126   12265 system_pods.go:89] "kube-controller-manager-addons-001438" [9606f8aa-be05-4d1e-b5c9-9e625663d5de] Running
	I0916 10:22:43.166136   12265 system_pods.go:89] "kube-ingress-dns-minikube" [10ccbaa1-333f-4586-a1d5-dc73421e2bd1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0916 10:22:43.166145   12265 system_pods.go:89] "kube-proxy-66flj" [56e16daa-1626-4b83-a183-7b9ad90ea2d6] Running
	I0916 10:22:43.166154   12265 system_pods.go:89] "kube-scheduler-addons-001438" [a9909fcc-06cd-4e4e-b6be-d0a54a31df94] Running
	I0916 10:22:43.166164   12265 system_pods.go:89] "metrics-server-84c5f94fbc-9hj9f" [76382ab7-9b7a-4bd6-b19c-7a77ba051f1d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 10:22:43.166171   12265 system_pods.go:89] "nvidia-device-plugin-daemonset-j6n9b" [83260537-f74d-40a8-bcbc-db785a97aac8] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0916 10:22:43.166178   12265 system_pods.go:89] "registry-66c9cd494c-jq22w" [04e85c00-e6fb-4eee-96aa-273a4f6f273f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 10:22:43.166183   12265 system_pods.go:89] "registry-proxy-kk7lc" [2f0e1170-c654-4939-91ca-cd5b2bf6ae2a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 10:22:43.166199   12265 system_pods.go:89] "snapshot-controller-56fcc65765-8nq94" [7b65ff07-8e47-4c5a-883c-f6470e930f61] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 10:22:43.166207   12265 system_pods.go:89] "snapshot-controller-56fcc65765-pv2sr" [85f5bbdb-96af-4f7d-aef3-644db7166242] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 10:22:43.166217   12265 system_pods.go:89] "storage-provisioner" [c435c6db-b60d-4298-9687-bb885202e358] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0916 10:22:43.166224   12265 system_pods.go:89] "tiller-deploy-b48cc5f79-b76fb" [a96b112c-4171-4416-9e14-ac1f69fd033e] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0916 10:22:43.166231   12265 system_pods.go:126] duration metric: took 29.130167ms to wait for k8s-apps to be running ...
	I0916 10:22:43.166241   12265 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 10:22:43.166284   12265 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:22:43.202957   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:43.204822   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:43.205240   12265 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0916 10:22:43.205259   12265 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0916 10:22:43.339484   12265 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 10:22:43.339511   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0916 10:22:43.533725   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 10:22:43.574829   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:43.701096   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:43.702516   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:44.074326   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:44.199962   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:44.201086   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:44.420432   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.951340242s)
	I0916 10:22:44.420484   12265 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.25416987s)
	I0916 10:22:44.420496   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:44.420512   12265 system_svc.go:56] duration metric: took 1.254267923s WaitForService to wait for kubelet
	I0916 10:22:44.420530   12265 kubeadm.go:582] duration metric: took 12.083973387s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:22:44.420555   12265 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:22:44.420516   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:44.420960   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:44.420998   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:44.421011   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:44.421019   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:44.421041   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:44.421242   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:44.421289   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:44.421306   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:44.432407   12265 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0916 10:22:44.432433   12265 node_conditions.go:123] node cpu capacity is 2
	I0916 10:22:44.432443   12265 node_conditions.go:105] duration metric: took 11.883273ms to run NodePressure ...
	I0916 10:22:44.432454   12265 start.go:241] waiting for startup goroutines ...
	I0916 10:22:44.573423   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:44.701968   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:44.702167   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:45.087788   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:45.175284   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.64151941s)
	I0916 10:22:45.175340   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:45.175356   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:45.175638   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:45.175658   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:45.175667   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:45.175675   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:45.175907   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:45.175959   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:45.175966   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:45.176874   12265 addons.go:475] Verifying addon gcp-auth=true in "addons-001438"
	I0916 10:22:45.179151   12265 out.go:177] * Verifying gcp-auth addon...
	I0916 10:22:45.181042   12265 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0916 10:22:45.204765   12265 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0916 10:22:45.204788   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:45.240576   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:45.244884   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:45.572763   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:45.684678   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:45.699294   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:45.700332   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:46.071926   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:46.184345   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:46.198555   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:46.198584   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:46.572691   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:46.686213   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:46.698404   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:46.699290   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:47.073014   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:47.184892   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:47.199176   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:47.199412   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:47.573319   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:47.685117   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:47.698854   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:47.699042   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:48.080702   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:48.186042   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:48.198652   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:48.198985   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:48.572136   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:48.684922   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:48.698643   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:48.698805   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:49.072263   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:49.186126   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:49.198845   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:49.201291   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:49.571909   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:49.686134   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:49.699485   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:49.699837   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:50.072013   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:50.185475   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:50.198803   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:50.198988   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:50.572410   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:50.684716   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:50.698643   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:50.698842   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:51.072735   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:51.185327   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:51.198402   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:51.198563   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:51.574099   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:51.684301   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:51.698582   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:51.699135   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:52.073280   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:52.184410   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:52.197628   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:52.197951   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:52.573111   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:52.685463   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:52.698350   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:52.698445   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:53.073318   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:53.185032   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:53.198371   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:53.198982   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:53.572652   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:53.684593   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:53.698434   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:53.699099   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:54.071466   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:54.184978   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:54.199125   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:54.199475   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:54.571905   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:54.684904   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:54.699578   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:54.700868   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:55.072026   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:55.186696   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:55.199421   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:55.200454   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:55.811368   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:55.811883   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:55.811882   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:55.812044   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:56.073000   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:56.184284   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:56.197552   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:56.199279   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:56.571945   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:56.684725   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:56.698164   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:56.698871   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:57.078099   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:57.187093   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:57.198266   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:57.198788   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:57.572608   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:57.685182   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:57.698112   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:57.698451   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:58.072438   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:58.184226   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:58.197871   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:58.199176   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:58.573655   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:58.688012   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:58.698890   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:58.699498   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:59.072908   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:59.184255   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:59.197825   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:59.198094   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:59.572578   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:59.685886   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:59.699165   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:59.699539   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:00.072677   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:00.185334   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:00.198436   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:00.199279   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:00.572620   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:00.684676   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:00.698184   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:00.698937   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:01.368315   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:01.368647   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:01.368662   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:01.369057   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:01.577610   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:01.685792   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:01.699073   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:01.700679   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:02.073297   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:02.184780   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:02.198423   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:02.198632   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:02.573860   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:02.688317   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:02.699137   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:02.699189   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:03.073268   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:03.185286   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:03.197706   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:03.199446   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:03.575016   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:03.688681   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:03.697852   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:03.699284   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:04.072561   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:04.184550   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:04.198183   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:04.198692   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:04.573058   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:04.684410   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:04.698448   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:04.699101   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:05.073082   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:05.184610   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:05.198422   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:05.199510   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:05.572901   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:05.685013   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:05.698419   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:05.699052   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:06.072680   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:06.184899   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:06.199400   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:06.199960   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:06.573550   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:06.685328   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:06.698176   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:06.698429   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:07.386744   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:07.389015   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:07.389529   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:07.391740   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:07.572440   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:07.685517   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:07.699276   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:07.699495   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:08.073598   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:08.185305   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:08.198307   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:08.198701   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:08.572936   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:08.685042   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:08.697898   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:08.699045   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:09.073524   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:09.185170   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:09.197444   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:09.198282   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:09.571947   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:09.685269   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:09.700263   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:09.700289   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:10.072367   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:10.184140   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:10.198279   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:10.198501   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:10.571995   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:10.684443   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:10.698621   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:10.699212   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:11.072647   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:11.184997   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:11.198336   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:11.199743   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:11.572138   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:11.684642   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:11.697735   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:11.698012   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:12.072087   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:12.184730   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:12.198825   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:12.199117   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:12.574471   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:12.685221   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:12.697610   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:12.697875   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:13.074276   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:13.184610   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:13.200283   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:13.200511   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:13.572643   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:13.687229   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:13.700375   12265 kapi.go:107] duration metric: took 32.506622173s to wait for kubernetes.io/minikube-addons=registry ...
	I0916 10:23:13.700476   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:14.073345   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:14.185359   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:14.197920   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:14.572573   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:14.714386   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:14.714848   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:15.072480   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:15.184006   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:15.198907   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:15.571536   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:15.686990   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:15.698314   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:16.072850   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:16.397705   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:16.398059   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:16.571699   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:16.687893   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:16.701822   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:17.073078   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:17.185433   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:17.202339   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:17.572915   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:17.684909   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:17.698215   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:18.071875   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:18.185548   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:18.198104   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:18.572180   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:18.684990   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:18.698912   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:19.072105   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:19.184341   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:19.197977   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:19.571740   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:19.685205   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:19.698214   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:20.071811   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:20.184927   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:20.198225   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:20.572184   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:20.684471   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:20.697550   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:21.072526   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:21.185439   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:21.198086   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:21.573843   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:21.684530   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:21.699027   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:22.071583   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:22.185751   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:22.201330   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:22.574078   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:22.688728   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:22.700516   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:23.072848   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:23.184719   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:23.197893   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:23.571975   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:23.684741   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:23.697845   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:24.071885   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:24.199755   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:24.209742   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:24.572903   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:24.684095   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:24.697255   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:25.072405   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:25.185096   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:25.197451   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:25.572250   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:25.685603   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:25.699421   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:26.072277   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:26.184610   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:26.197948   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:26.572954   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:26.684305   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:26.698018   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:27.072121   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:27.186632   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:27.198260   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:27.571710   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:27.685260   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:27.697569   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:28.072712   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:28.185404   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:28.197839   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:28.572506   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:28.685719   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:28.698390   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:29.073440   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:29.185211   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:29.198135   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:29.572871   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:29.684795   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:29.698442   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:30.074307   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:30.184391   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:30.198195   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:30.571684   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:30.686595   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:30.697829   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:31.072882   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:31.184355   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:31.197913   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:31.572796   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:31.685340   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:31.697838   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:32.072358   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:32.185072   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:32.198841   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:32.572260   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:32.685619   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:32.697923   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:33.072329   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:33.184923   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:33.198461   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:33.572531   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:33.684886   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:33.698221   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:34.071922   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:34.184896   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:34.198347   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:34.572508   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:34.685674   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:34.698172   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:35.072040   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:35.184401   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:35.198192   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:35.571685   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:35.684934   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:35.699442   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:36.072917   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:36.184575   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:36.197989   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:36.572782   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:36.685224   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:36.697515   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:37.073347   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:37.184633   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:37.198109   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:37.572239   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:37.684842   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:37.698412   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:38.072639   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:38.184377   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:38.197723   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:38.572964   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:38.684944   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:38.698216   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:39.071865   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:39.184322   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:39.197583   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:39.572728   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:39.685221   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:39.697663   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:40.073346   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:40.184763   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:40.198338   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:40.572748   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:40.688546   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:40.698337   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:41.072528   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:41.184742   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:41.197991   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:41.572832   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:41.685275   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:41.697957   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:42.072948   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:42.185237   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:42.198222   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:42.572150   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:42.685770   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:42.698107   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:43.072508   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:43.184255   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:43.198122   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:43.571791   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:43.685476   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:43.698021   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:44.072455   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:44.184970   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:44.198450   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:44.572653   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:44.685519   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:44.698088   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:45.073394   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:45.184852   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:45.198932   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:45.572905   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:45.685024   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:45.699000   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:46.072804   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:46.185568   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:46.198040   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:46.571961   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:46.684879   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:46.698104   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:47.071779   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:47.184794   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:47.198431   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:47.572786   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:47.685048   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:47.701841   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:48.072550   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:48.184915   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:48.198725   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:48.572850   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:48.684405   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:48.697953   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:49.075719   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:49.185584   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:49.198034   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:49.572642   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:49.685074   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:49.697421   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:50.072216   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:50.184736   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:50.198614   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:50.572675   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:50.685508   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:50.697632   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:51.072878   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:51.185267   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:51.197508   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:51.572653   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:51.684680   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:51.698038   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:52.072225   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:52.184256   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:52.197802   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:52.572573   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:52.685760   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:52.699050   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:53.072698   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:53.185139   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:53.197417   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:53.572526   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:53.684976   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:53.698186   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:54.071987   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:54.184373   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:54.197898   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:54.573326   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:54.685154   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:54.699596   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:55.071975   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:55.184301   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:55.197532   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:55.573068   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:55.684535   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:55.698262   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:56.071830   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:56.185558   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:56.198149   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:56.571905   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:56.684135   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:56.697614   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:57.109030   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:57.216004   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:57.216775   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:57.572732   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:57.684811   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:57.697899   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:58.071691   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:58.184970   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:58.198291   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:58.572185   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:58.685478   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:58.698240   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:59.072727   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:59.185578   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:59.207485   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:59.572098   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:59.684402   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:59.698565   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:00.072447   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:00.192764   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:00.206954   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:00.573224   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:00.685091   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:00.697997   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:01.071906   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:01.184428   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:01.197550   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:01.572498   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:01.685525   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:01.702647   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:02.072504   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:02.185219   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:02.197512   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:02.573858   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:02.685938   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:02.699556   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:03.080160   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:03.188056   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:03.197615   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:03.575213   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:03.685187   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:03.697887   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:04.072585   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:04.185321   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:04.197777   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:04.577876   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:04.685259   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:04.698763   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:05.073356   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:05.184332   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:05.197676   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:05.574632   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:05.705119   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:05.705797   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:06.073702   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:06.190460   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:06.199492   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:06.573521   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:06.685468   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:06.697671   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:07.074427   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:07.211989   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:07.214167   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:07.573479   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:07.684919   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:07.698441   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:08.072769   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:08.184827   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:08.198132   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:08.573401   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:08.685277   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:08.698457   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:09.072421   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:09.184959   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:09.198365   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:09.572446   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:09.685036   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:09.697443   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:10.072489   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:10.185143   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:10.197711   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:10.572704   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:10.685206   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:10.697839   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:11.073656   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:11.185083   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:11.197443   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:11.572739   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:11.685343   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:11.697853   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:12.072697   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:12.185630   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:12.197928   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:12.572344   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:12.684814   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:12.698225   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:13.073324   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:13.185254   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:13.198404   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:13.571987   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:13.684709   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:13.698073   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:14.072174   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:14.184688   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:14.198078   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:14.571798   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:14.685576   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:14.698188   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:15.072810   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:15.184683   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:15.198053   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:15.574408   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:15.684741   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:15.698415   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:16.072047   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:16.185423   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:16.198010   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:16.572968   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:16.684217   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:16.697876   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:17.073276   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:17.185372   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:17.197865   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:17.572327   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:17.684929   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:17.698146   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:18.073068   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:18.185261   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:18.197596   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:18.572959   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:18.684379   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:18.697450   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:19.072646   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:19.184810   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:19.198157   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:19.572098   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:19.684635   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:19.698108   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:20.073055   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:20.185325   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:20.197893   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:20.572951   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:20.684268   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:20.697542   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:21.073300   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:21.184458   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:21.198058   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:21.571882   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:21.684389   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:21.698491   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:22.072769   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:22.185150   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:22.198444   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:22.572557   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:22.686730   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:22.697987   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:23.072389   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:23.184902   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:23.198815   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:23.572090   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:23.684279   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:23.698304   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:24.072655   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:24.185118   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:24.197515   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:24.573029   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:24.684503   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:24.697942   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:25.073161   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:25.185394   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:25.197824   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:25.572789   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:25.685536   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:25.698429   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:26.072248   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:26.184713   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:26.198206   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:26.572681   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:26.685404   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:26.697732   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:27.073033   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:27.186532   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:27.197932   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:27.573166   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:27.684900   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:27.698494   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:28.072840   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:28.185112   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:28.199554   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:28.573533   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:28.685513   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:28.698631   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:29.073563   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:29.184668   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:29.198960   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:29.573373   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:29.684371   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:29.698226   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:30.072380   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:30.184889   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:30.198132   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:30.572427   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:30.685015   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:30.699053   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:31.073225   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:31.185241   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:31.198172   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:31.572019   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:31.685328   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:31.697511   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:32.072382   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:32.185154   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:32.198590   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:32.572333   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:32.688804   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:32.699195   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:33.072971   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:33.184395   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:33.197840   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:33.572457   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:33.684935   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:33.698247   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:34.072201   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:34.184817   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:34.198299   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:34.572603   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:34.684807   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:34.698764   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:35.079460   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:35.184783   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:35.198219   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:35.572155   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:35.684462   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:35.698249   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:36.071889   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:36.185035   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:36.198639   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:36.572607   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:36.684993   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:36.698317   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:37.073167   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:37.187630   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:37.197861   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:37.572959   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:37.684449   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:37.698084   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:38.072598   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:38.184553   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:38.198241   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:38.572543   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:38.685061   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:38.698066   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:39.072888   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:39.184279   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:39.198475   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:39.572908   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:39.684166   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:39.699214   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:40.071396   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:40.185054   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:40.197274   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:40.571831   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:40.683617   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:40.698304   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:41.073753   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:41.184818   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:41.198303   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:41.572754   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:41.685078   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:41.697801   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:42.074144   12265 kapi.go:107] duration metric: took 1m59.00663205s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0916 10:24:42.185287   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:42.197975   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:42.685826   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:42.698484   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:43.185521   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:43.197894   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:43.684695   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:43.698444   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:44.184270   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:44.198072   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:44.686127   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:44.697760   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:45.184583   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:45.197892   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:45.685284   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:45.698273   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:46.184284   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:46.197597   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:46.684852   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:46.698234   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:47.185674   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:47.197778   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:47.684803   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:47.698286   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:48.185195   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:48.197536   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:48.684936   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:48.698202   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:49.185940   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:49.198354   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:49.685628   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:49.698017   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:50.184172   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:50.197513   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:50.684563   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:50.699121   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:51.185458   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:51.197627   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:51.684548   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:51.697728   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:52.184587   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:52.198088   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:52.687284   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:52.697762   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:53.185441   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:53.197777   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:53.684856   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:53.698392   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:54.184966   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:54.198309   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:54.685059   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:54.697818   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:55.184799   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:55.199146   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:55.685287   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:55.697823   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:56.184982   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:56.198778   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:56.684629   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:56.698010   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:57.185306   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:57.197805   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:57.686354   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:57.697789   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:58.184048   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:58.198685   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:58.685283   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:58.697967   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:59.185357   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:59.198462   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:59.685857   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:59.698582   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:00.185027   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:00.199070   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:00.685248   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:00.697584   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:01.444242   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:01.542180   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:01.684941   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:01.698345   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:02.184494   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:02.199673   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:02.686844   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:02.701197   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:03.186108   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:03.200286   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:03.935418   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:03.936940   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:04.185837   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:04.198343   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:04.685229   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:04.697687   12265 kapi.go:107] duration metric: took 2m23.503933898s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0916 10:25:05.184162   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:05.686162   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:06.184784   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:06.685596   12265 kapi.go:107] duration metric: took 2m21.504550895s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0916 10:25:06.687290   12265 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-001438 cluster.
	I0916 10:25:06.688726   12265 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0916 10:25:06.689940   12265 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0916 10:25:06.691195   12265 out.go:177] * Enabled addons: default-storageclass, nvidia-device-plugin, ingress-dns, storage-provisioner, cloud-spanner, metrics-server, inspektor-gadget, helm-tiller, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0916 10:25:06.692654   12265 addons.go:510] duration metric: took 2m34.356008246s for enable addons: enabled=[default-storageclass nvidia-device-plugin ingress-dns storage-provisioner cloud-spanner metrics-server inspektor-gadget helm-tiller yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0916 10:25:06.692692   12265 start.go:246] waiting for cluster config update ...
	I0916 10:25:06.692714   12265 start.go:255] writing updated cluster config ...
	I0916 10:25:06.692960   12265 ssh_runner.go:195] Run: rm -f paused
	I0916 10:25:06.701459   12265 out.go:177] * Done! kubectl is now configured to use "addons-001438" cluster and "default" namespace by default
	E0916 10:25:06.702711   12265 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
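	The "exec format error" on the last line means the kernel refused to execute /usr/local/bin/kubectl, which almost always indicates a kubectl binary built for a different architecture than the test host (the same failure recurs throughout this report). A quick way to confirm the mismatch, assuming a standard Linux shell and the path taken from the log:

	    file /usr/local/bin/kubectl
	    uname -m

	If the architecture reported by the two commands disagrees, replacing kubectl with a build that matches the host should clear the error.

	As the gcp-auth messages above state, credentials are mounted into every newly created pod unless the pod carries the gcp-auth-skip-secret label. A minimal sketch of opting a pod out at creation time, reusing the cluster name from this run (the pod name, image, and the label value "true" are assumptions for illustration; the log only names the key):

	    kubectl --context addons-001438 run demo --image=nginx --labels=gcp-auth-skip-secret=true

	Pods that existed before the addon was enabled are not mutated retroactively; per the log, they must either be recreated or the addon re-enabled with --refresh (e.g. minikube addons enable gcp-auth --refresh).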
	
	
	==> CRI-O <==
	Sep 16 10:31:16 addons-001438 crio[662]: time="2024-09-16 10:31:16.122707692Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482676122678465,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:458879,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d0e4f07c-66c1-4f4b-9c30-2d76324d0864 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:31:16 addons-001438 crio[662]: time="2024-09-16 10:31:16.123244131Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ed0999e7-bb59-4102-b1e3-0024d246f29a name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:31:16 addons-001438 crio[662]: time="2024-09-16 10:31:16.124551411Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ed0999e7-bb59-4102-b1e3-0024d246f29a name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:31:16 addons-001438 crio[662]: time="2024-09-16 10:31:16.129036508Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c0c62d19fc341b10ebf89fe58a47a68881bae991a531961d93c1579a9a1948e7,PodSandboxId:81638f0641649ec0787ef703694bd258aa844caa3af58541d00d9715f3de0d35,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726482306176330528,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-jg5wz,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: dbfd65d1-83cc-4b88-b475-c822c7d77c41,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"con
tainerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d9f00ee52087036fff363066f2b9b7bd54f75155cc73bd306533e31b8e233b0,PodSandboxId:f0a70a6b5b4fa0e23e9c8885ae050483438fd567740a20c3b0d6ee2a4c749755,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1726482304086493301,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-jhd4w,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 59d22048-5f6a-4373-a1c6-05
c111450f4a,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a4ff4f2e6c350478e5755d91b20d919662c0ffa06fb39e2bd3bf8bb0f703a1d2,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1726482280953449774,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa45fa1d889cdca320c55e35a415c7211e36ba0837572343c71988deaa5cd3ca,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef0019
58d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1726482248718684378,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:112e37da6f1b0a97970a925ea58a6c98930b8e2763205c618d9b497089866b3f,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc4
16abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1726482247152202557,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcd9404de3e14cb23faa22a3b3e25edf2e86172ce56a3bd91b341e6072351d08,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256
:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1726482246122565989,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26165c7625a62b7701736de7efb40460b0b4376f8e96e3bb63d744179813ade0,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metad
ata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1726482244231428730,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35e24c1abefe708668b04ab175e1ce63dcba533760564485612dc5c532078e4c,PodSandboxId:bf02d50932f1
47d4c08135b62337a685daad58e7ee619fe769c71cf464997dc9,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1726482242446910373,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db26d555-4e0f-4738-bd80-a27dc57d7534,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5edaf3e2dd3d6ea0a0beb93da0237f66936603e62e6799eb9d40f4bfced4fdc,Pod
SandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1726482240462952497,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:b8ebd2f050729c02f7f87eeb785883cafeb3d9537560352ff7a58f50b5630b1c,PodSandboxId:f375334740e2f500ee463bae436969a7464e2ab3cd1aa52a70e3a8f54f2ea408,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1726482238605647796,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15e8a432-87ee-461f-96ce-576b2587d960,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d52d2269e100287dd000b3139d6f09e70acad7cbe081d5ba345ffadc774eb64,PodSandboxId:6fe91ac2288fee7b74e48dab9ebacc7e6f4a0864d1442788fa1bb356cd32d99c,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726482237161655062,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-rls9n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 239fb0bb-898b-4d40-a29f-b0f8f4f52620,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termi
nationGracePeriod: 30,},},&Container{Id:54c4347a1fc2bc58f92b11d3bd442850d2707411d3f5b974124a86f641439b9c,PodSandboxId:d66b1317412a725710e3a74d876e0614ef523665d5d35e69bed8cd20d3e4c267,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726482236992762987,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-dk6l8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 117d6d47-b187-4394-8213-f294b01f8f2d,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0bde3324c47da870cb9ea5d5e391100b947a5f5628087ca822f0b968d5d7d10,PodSandboxId:0eef20d1c681337d85c6cbf7dbb64029a954a6306f78cde4d651b31122cd7f87,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726482204191942200,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-pv2sr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85f5bbdb-96af-4f7d-aef3-644db7166242,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f786c20ceffe35012b49cdfbc150466ebf04c05bab8d9736cd3977b8a655c898,PodSandboxId:ec33782f42717a00b375787866eb14bf4c1a21698df7f4343ab08f7eaff4773a,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726482204058066688,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-8nq94,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b65ff07-8e47-4c5a-883c-f6470e930f61,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,i
o.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d997d75b48ee4660710828d3ee8c19e7e3e7c5b64f4d69cba04ce949ed38433e,PodSandboxId:173b48ab2ab7ffb543810df5a983063619ab4a5c57ad2582cb89cca04eabcc03,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1726482196486082416,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-rj67m,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 18c201dd-1709-499b-83f1-7f76075b6c19,},A
nnotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8193aad1beb5ba639149913d04736ec496c655637790c2e682d4920170661edc,PodSandboxId:f1a3772ce5f7d59b92df6e29b5a34b5ae5a2567c67f359cff00143e415177028,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1726482169760510821,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 10ccbaa1-333f-4586-a1d5-dc73421e2bd1,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20d2f3360f32056e9d5320e2e27b14e89f34121e47fe3ef6dc68434fd12cbe4e,PodSandboxId:748d363148f6690bdd4347da94f06b750dcd9dfc4c9051ba1fd2b1168589dce8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726482159684718183,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name:
storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c435c6db-b60d-4298-9687-bb885202e358,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63d270cbed8d9138d7dfb6c94fecf8a928065d08296ec4b210284e9c80e343ce,PodSandboxId:42b8586a7b29a8b8056b5615562d2ec92c1cdfbb2b00da19cfca2dc059f9128a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726482158120421871,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-j5ndn,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: 207f35d6-991e-4f00-8881-a877648e3c38,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60269ac0552c1882216cfc166f51b2cc05e0c33fccab3cf44f6f6ae889b5377c,PodSandboxId:2bf9dc368debd2d2b877480619135ca9093c981fce759082ec24f6592fb1de92,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063
eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726482154187294521,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-66flj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56e16daa-1626-4b83-a183-7b9ad90ea2d6,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aabe5cb48f97ae6d5205b827466f0ed18fcf88945331a69a0f69c36fdc69b84,PodSandboxId:f7aeaa11c7f4c7baee6c9befcf5929af3394ee8b7430c47ca172d58476ae75a5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_R
UNNING,CreatedAt:1726482142846546597,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-001438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be5914cb158890ae06c5bfa7a9a1647e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d34a4e3596c2106b54d8a84abfb811c307cb2971374422f5c532a60e0cde3a3,PodSandboxId:8a68216be6dee09413e6f31b3a7e5d5ca7545ffd899368560ace1f4086222cc9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:17264821428
45031916,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-001438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6b0151eec9d570509290399a84fe5e0,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a4816dc33e761b9fdbc5948f09898492ce9a2dc128c281cf5085d61d7f1b237,PodSandboxId:ec134844260ab148c1de608f261069c5f8c5a6b0d909d7bbb7bff552dad3e112,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726482142
832730800,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-001438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72c370391c8ab866a62dac47d3033ef8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfff5b2d379857237c75eff7ae9c54b1eaa410285bebed27cc82511c761eff77,PodSandboxId:81f095a38dae111be7d7787f448562b27561a6d73c96775e60b52e2cb77e1d9f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726482142844185577,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-001438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2bdc5cbd50bfb60b2aad0cb9a55828e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ed0999e7-bb59-4102-b1e3-0024d246f29a name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:31:16 addons-001438 crio[662]: time="2024-09-16 10:31:16.164628096Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c58e8258-eb59-498f-8dde-ea87579d3f4c name=/runtime.v1.RuntimeService/Version
	Sep 16 10:31:16 addons-001438 crio[662]: time="2024-09-16 10:31:16.164759677Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c58e8258-eb59-498f-8dde-ea87579d3f4c name=/runtime.v1.RuntimeService/Version
	Sep 16 10:31:16 addons-001438 crio[662]: time="2024-09-16 10:31:16.167035022Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=57720f93-1707-4d1e-8abd-874cf517a05c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:31:16 addons-001438 crio[662]: time="2024-09-16 10:31:16.168232863Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482676168204263,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:458879,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=57720f93-1707-4d1e-8abd-874cf517a05c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:31:16 addons-001438 crio[662]: time="2024-09-16 10:31:16.168790360Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a9284b25-423b-4c06-ac4d-e2d2cdcfa445 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:31:16 addons-001438 crio[662]: time="2024-09-16 10:31:16.168850073Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a9284b25-423b-4c06-ac4d-e2d2cdcfa445 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:31:16 addons-001438 crio[662]: time="2024-09-16 10:31:16.169313529Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c0c62d19fc341b10ebf89fe58a47a68881bae991a531961d93c1579a9a1948e7,PodSandboxId:81638f0641649ec0787ef703694bd258aa844caa3af58541d00d9715f3de0d35,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726482306176330528,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-jg5wz,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: dbfd65d1-83cc-4b88-b475-c822c7d77c41,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"con
tainerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d9f00ee52087036fff363066f2b9b7bd54f75155cc73bd306533e31b8e233b0,PodSandboxId:f0a70a6b5b4fa0e23e9c8885ae050483438fd567740a20c3b0d6ee2a4c749755,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1726482304086493301,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-jhd4w,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 59d22048-5f6a-4373-a1c6-05
c111450f4a,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a4ff4f2e6c350478e5755d91b20d919662c0ffa06fb39e2bd3bf8bb0f703a1d2,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1726482280953449774,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa45fa1d889cdca320c55e35a415c7211e36ba0837572343c71988deaa5cd3ca,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef0019
58d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1726482248718684378,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:112e37da6f1b0a97970a925ea58a6c98930b8e2763205c618d9b497089866b3f,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc4
16abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1726482247152202557,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcd9404de3e14cb23faa22a3b3e25edf2e86172ce56a3bd91b341e6072351d08,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256
:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1726482246122565989,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26165c7625a62b7701736de7efb40460b0b4376f8e96e3bb63d744179813ade0,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metad
ata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1726482244231428730,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35e24c1abefe708668b04ab175e1ce63dcba533760564485612dc5c532078e4c,PodSandboxId:bf02d50932f1
47d4c08135b62337a685daad58e7ee619fe769c71cf464997dc9,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1726482242446910373,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db26d555-4e0f-4738-bd80-a27dc57d7534,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5edaf3e2dd3d6ea0a0beb93da0237f66936603e62e6799eb9d40f4bfced4fdc,Pod
SandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1726482240462952497,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:b8ebd2f050729c02f7f87eeb785883cafeb3d9537560352ff7a58f50b5630b1c,PodSandboxId:f375334740e2f500ee463bae436969a7464e2ab3cd1aa52a70e3a8f54f2ea408,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1726482238605647796,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15e8a432-87ee-461f-96ce-576b2587d960,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d52d2269e100287dd000b3139d6f09e70acad7cbe081d5ba345ffadc774eb64,PodSandboxId:6fe91ac2288fee7b74e48dab9ebacc7e6f4a0864d1442788fa1bb356cd32d99c,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726482237161655062,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-rls9n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 239fb0bb-898b-4d40-a29f-b0f8f4f52620,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termi
nationGracePeriod: 30,},},&Container{Id:54c4347a1fc2bc58f92b11d3bd442850d2707411d3f5b974124a86f641439b9c,PodSandboxId:d66b1317412a725710e3a74d876e0614ef523665d5d35e69bed8cd20d3e4c267,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726482236992762987,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-dk6l8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 117d6d47-b187-4394-8213-f294b01f8f2d,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0bde3324c47da870cb9ea5d5e391100b947a5f5628087ca822f0b968d5d7d10,PodSandboxId:0eef20d1c681337d85c6cbf7dbb64029a954a6306f78cde4d651b31122cd7f87,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726482204191942200,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-pv2sr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85f5bbdb-96af-4f7d-aef3-644db7166242,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f786c20ceffe35012b49cdfbc150466ebf04c05bab8d9736cd3977b8a655c898,PodSandboxId:ec33782f42717a00b375787866eb14bf4c1a21698df7f4343ab08f7eaff4773a,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726482204058066688,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-8nq94,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b65ff07-8e47-4c5a-883c-f6470e930f61,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,i
o.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d997d75b48ee4660710828d3ee8c19e7e3e7c5b64f4d69cba04ce949ed38433e,PodSandboxId:173b48ab2ab7ffb543810df5a983063619ab4a5c57ad2582cb89cca04eabcc03,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1726482196486082416,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-rj67m,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 18c201dd-1709-499b-83f1-7f76075b6c19,},A
nnotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8193aad1beb5ba639149913d04736ec496c655637790c2e682d4920170661edc,PodSandboxId:f1a3772ce5f7d59b92df6e29b5a34b5ae5a2567c67f359cff00143e415177028,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1726482169760510821,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 10ccbaa1-333f-4586-a1d5-dc73421e2bd1,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20d2f3360f32056e9d5320e2e27b14e89f34121e47fe3ef6dc68434fd12cbe4e,PodSandboxId:748d363148f6690bdd4347da94f06b750dcd9dfc4c9051ba1fd2b1168589dce8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726482159684718183,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name:
storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c435c6db-b60d-4298-9687-bb885202e358,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63d270cbed8d9138d7dfb6c94fecf8a928065d08296ec4b210284e9c80e343ce,PodSandboxId:42b8586a7b29a8b8056b5615562d2ec92c1cdfbb2b00da19cfca2dc059f9128a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726482158120421871,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-j5ndn,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: 207f35d6-991e-4f00-8881-a877648e3c38,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60269ac0552c1882216cfc166f51b2cc05e0c33fccab3cf44f6f6ae889b5377c,PodSandboxId:2bf9dc368debd2d2b877480619135ca9093c981fce759082ec24f6592fb1de92,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063
eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726482154187294521,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-66flj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56e16daa-1626-4b83-a183-7b9ad90ea2d6,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aabe5cb48f97ae6d5205b827466f0ed18fcf88945331a69a0f69c36fdc69b84,PodSandboxId:f7aeaa11c7f4c7baee6c9befcf5929af3394ee8b7430c47ca172d58476ae75a5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_R
UNNING,CreatedAt:1726482142846546597,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-001438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be5914cb158890ae06c5bfa7a9a1647e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d34a4e3596c2106b54d8a84abfb811c307cb2971374422f5c532a60e0cde3a3,PodSandboxId:8a68216be6dee09413e6f31b3a7e5d5ca7545ffd899368560ace1f4086222cc9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:17264821428
45031916,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-001438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6b0151eec9d570509290399a84fe5e0,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a4816dc33e761b9fdbc5948f09898492ce9a2dc128c281cf5085d61d7f1b237,PodSandboxId:ec134844260ab148c1de608f261069c5f8c5a6b0d909d7bbb7bff552dad3e112,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726482142
832730800,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-001438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72c370391c8ab866a62dac47d3033ef8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfff5b2d379857237c75eff7ae9c54b1eaa410285bebed27cc82511c761eff77,PodSandboxId:81f095a38dae111be7d7787f448562b27561a6d73c96775e60b52e2cb77e1d9f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726482142844185577,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-001438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2bdc5cbd50bfb60b2aad0cb9a55828e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a9284b25-423b-4c06-ac4d-e2d2cdcfa445 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:31:16 addons-001438 crio[662]: time="2024-09-16 10:31:16.213097346Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e6d83e64-c220-47f4-9358-ecd2e68ca196 name=/runtime.v1.RuntimeService/Version
	Sep 16 10:31:16 addons-001438 crio[662]: time="2024-09-16 10:31:16.213170243Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e6d83e64-c220-47f4-9358-ecd2e68ca196 name=/runtime.v1.RuntimeService/Version
	Sep 16 10:31:16 addons-001438 crio[662]: time="2024-09-16 10:31:16.215299949Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1f032ab9-20e7-4a8a-b3cb-d6c1423bca81 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:31:16 addons-001438 crio[662]: time="2024-09-16 10:31:16.216335336Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482676216309855,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:458879,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1f032ab9-20e7-4a8a-b3cb-d6c1423bca81 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:31:16 addons-001438 crio[662]: time="2024-09-16 10:31:16.217074210Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c0f863a8-0d45-4525-90b6-b3c48f9055da name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:31:16 addons-001438 crio[662]: time="2024-09-16 10:31:16.217275892Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c0f863a8-0d45-4525-90b6-b3c48f9055da name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:31:16 addons-001438 crio[662]: time="2024-09-16 10:31:16.218423731Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c0c62d19fc341b10ebf89fe58a47a68881bae991a531961d93c1579a9a1948e7,PodSandboxId:81638f0641649ec0787ef703694bd258aa844caa3af58541d00d9715f3de0d35,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726482306176330528,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-jg5wz,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: dbfd65d1-83cc-4b88-b475-c822c7d77c41,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"con
tainerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d9f00ee52087036fff363066f2b9b7bd54f75155cc73bd306533e31b8e233b0,PodSandboxId:f0a70a6b5b4fa0e23e9c8885ae050483438fd567740a20c3b0d6ee2a4c749755,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1726482304086493301,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-jhd4w,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 59d22048-5f6a-4373-a1c6-05
c111450f4a,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a4ff4f2e6c350478e5755d91b20d919662c0ffa06fb39e2bd3bf8bb0f703a1d2,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1726482280953449774,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa45fa1d889cdca320c55e35a415c7211e36ba0837572343c71988deaa5cd3ca,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef0019
58d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1726482248718684378,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:112e37da6f1b0a97970a925ea58a6c98930b8e2763205c618d9b497089866b3f,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc4
16abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1726482247152202557,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcd9404de3e14cb23faa22a3b3e25edf2e86172ce56a3bd91b341e6072351d08,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256
:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1726482246122565989,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26165c7625a62b7701736de7efb40460b0b4376f8e96e3bb63d744179813ade0,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metad
ata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1726482244231428730,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35e24c1abefe708668b04ab175e1ce63dcba533760564485612dc5c532078e4c,PodSandboxId:bf02d50932f1
47d4c08135b62337a685daad58e7ee619fe769c71cf464997dc9,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1726482242446910373,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db26d555-4e0f-4738-bd80-a27dc57d7534,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5edaf3e2dd3d6ea0a0beb93da0237f66936603e62e6799eb9d40f4bfced4fdc,Pod
SandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1726482240462952497,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:b8ebd2f050729c02f7f87eeb785883cafeb3d9537560352ff7a58f50b5630b1c,PodSandboxId:f375334740e2f500ee463bae436969a7464e2ab3cd1aa52a70e3a8f54f2ea408,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1726482238605647796,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15e8a432-87ee-461f-96ce-576b2587d960,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d52d2269e100287dd000b3139d6f09e70acad7cbe081d5ba345ffadc774eb64,PodSandboxId:6fe91ac2288fee7b74e48dab9ebacc7e6f4a0864d1442788fa1bb356cd32d99c,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726482237161655062,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-rls9n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 239fb0bb-898b-4d40-a29f-b0f8f4f52620,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termi
nationGracePeriod: 30,},},&Container{Id:54c4347a1fc2bc58f92b11d3bd442850d2707411d3f5b974124a86f641439b9c,PodSandboxId:d66b1317412a725710e3a74d876e0614ef523665d5d35e69bed8cd20d3e4c267,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726482236992762987,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-dk6l8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 117d6d47-b187-4394-8213-f294b01f8f2d,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0bde3324c47da870cb9ea5d5e391100b947a5f5628087ca822f0b968d5d7d10,PodSandboxId:0eef20d1c681337d85c6cbf7dbb64029a954a6306f78cde4d651b31122cd7f87,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726482204191942200,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-pv2sr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85f5bbdb-96af-4f7d-aef3-644db7166242,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f786c20ceffe35012b49cdfbc150466ebf04c05bab8d9736cd3977b8a655c898,PodSandboxId:ec33782f42717a00b375787866eb14bf4c1a21698df7f4343ab08f7eaff4773a,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726482204058066688,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-8nq94,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b65ff07-8e47-4c5a-883c-f6470e930f61,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,i
o.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d997d75b48ee4660710828d3ee8c19e7e3e7c5b64f4d69cba04ce949ed38433e,PodSandboxId:173b48ab2ab7ffb543810df5a983063619ab4a5c57ad2582cb89cca04eabcc03,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1726482196486082416,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-rj67m,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 18c201dd-1709-499b-83f1-7f76075b6c19,},A
nnotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8193aad1beb5ba639149913d04736ec496c655637790c2e682d4920170661edc,PodSandboxId:f1a3772ce5f7d59b92df6e29b5a34b5ae5a2567c67f359cff00143e415177028,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1726482169760510821,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 10ccbaa1-333f-4586-a1d5-dc73421e2bd1,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20d2f3360f32056e9d5320e2e27b14e89f34121e47fe3ef6dc68434fd12cbe4e,PodSandboxId:748d363148f6690bdd4347da94f06b750dcd9dfc4c9051ba1fd2b1168589dce8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726482159684718183,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name:
storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c435c6db-b60d-4298-9687-bb885202e358,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63d270cbed8d9138d7dfb6c94fecf8a928065d08296ec4b210284e9c80e343ce,PodSandboxId:42b8586a7b29a8b8056b5615562d2ec92c1cdfbb2b00da19cfca2dc059f9128a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726482158120421871,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-j5ndn,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: 207f35d6-991e-4f00-8881-a877648e3c38,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60269ac0552c1882216cfc166f51b2cc05e0c33fccab3cf44f6f6ae889b5377c,PodSandboxId:2bf9dc368debd2d2b877480619135ca9093c981fce759082ec24f6592fb1de92,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063
eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726482154187294521,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-66flj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56e16daa-1626-4b83-a183-7b9ad90ea2d6,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aabe5cb48f97ae6d5205b827466f0ed18fcf88945331a69a0f69c36fdc69b84,PodSandboxId:f7aeaa11c7f4c7baee6c9befcf5929af3394ee8b7430c47ca172d58476ae75a5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_R
UNNING,CreatedAt:1726482142846546597,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-001438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be5914cb158890ae06c5bfa7a9a1647e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d34a4e3596c2106b54d8a84abfb811c307cb2971374422f5c532a60e0cde3a3,PodSandboxId:8a68216be6dee09413e6f31b3a7e5d5ca7545ffd899368560ace1f4086222cc9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:17264821428
45031916,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-001438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6b0151eec9d570509290399a84fe5e0,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a4816dc33e761b9fdbc5948f09898492ce9a2dc128c281cf5085d61d7f1b237,PodSandboxId:ec134844260ab148c1de608f261069c5f8c5a6b0d909d7bbb7bff552dad3e112,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726482142
832730800,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-001438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72c370391c8ab866a62dac47d3033ef8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfff5b2d379857237c75eff7ae9c54b1eaa410285bebed27cc82511c761eff77,PodSandboxId:81f095a38dae111be7d7787f448562b27561a6d73c96775e60b52e2cb77e1d9f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726482142844185577,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-001438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2bdc5cbd50bfb60b2aad0cb9a55828e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c0f863a8-0d45-4525-90b6-b3c48f9055da name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:31:16 addons-001438 crio[662]: time="2024-09-16 10:31:16.258087381Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ecca5e18-fee7-43b7-9214-2beb739d6c7c name=/runtime.v1.RuntimeService/Version
	Sep 16 10:31:16 addons-001438 crio[662]: time="2024-09-16 10:31:16.258164254Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ecca5e18-fee7-43b7-9214-2beb739d6c7c name=/runtime.v1.RuntimeService/Version
	Sep 16 10:31:16 addons-001438 crio[662]: time="2024-09-16 10:31:16.259141173Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=79efdedb-f37c-499e-90fc-f9be41b2f9ca name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:31:16 addons-001438 crio[662]: time="2024-09-16 10:31:16.260202031Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482676260172646,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:458879,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=79efdedb-f37c-499e-90fc-f9be41b2f9ca name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:31:16 addons-001438 crio[662]: time="2024-09-16 10:31:16.260970528Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=67813497-779a-4494-aa49-0042764c7691 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:31:16 addons-001438 crio[662]: time="2024-09-16 10:31:16.261040038Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=67813497-779a-4494-aa49-0042764c7691 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:31:16 addons-001438 crio[662]: time="2024-09-16 10:31:16.261674907Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c0c62d19fc341b10ebf89fe58a47a68881bae991a531961d93c1579a9a1948e7,PodSandboxId:81638f0641649ec0787ef703694bd258aa844caa3af58541d00d9715f3de0d35,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726482306176330528,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-jg5wz,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: dbfd65d1-83cc-4b88-b475-c822c7d77c41,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"con
tainerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d9f00ee52087036fff363066f2b9b7bd54f75155cc73bd306533e31b8e233b0,PodSandboxId:f0a70a6b5b4fa0e23e9c8885ae050483438fd567740a20c3b0d6ee2a4c749755,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1726482304086493301,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-jhd4w,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 59d22048-5f6a-4373-a1c6-05
c111450f4a,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a4ff4f2e6c350478e5755d91b20d919662c0ffa06fb39e2bd3bf8bb0f703a1d2,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1726482280953449774,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa45fa1d889cdca320c55e35a415c7211e36ba0837572343c71988deaa5cd3ca,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef0019
58d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1726482248718684378,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:112e37da6f1b0a97970a925ea58a6c98930b8e2763205c618d9b497089866b3f,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc4
16abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1726482247152202557,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcd9404de3e14cb23faa22a3b3e25edf2e86172ce56a3bd91b341e6072351d08,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256
:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1726482246122565989,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26165c7625a62b7701736de7efb40460b0b4376f8e96e3bb63d744179813ade0,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metad
ata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1726482244231428730,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35e24c1abefe708668b04ab175e1ce63dcba533760564485612dc5c532078e4c,PodSandboxId:bf02d50932f1
47d4c08135b62337a685daad58e7ee619fe769c71cf464997dc9,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1726482242446910373,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db26d555-4e0f-4738-bd80-a27dc57d7534,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5edaf3e2dd3d6ea0a0beb93da0237f66936603e62e6799eb9d40f4bfced4fdc,Pod
SandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1726482240462952497,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:b8ebd2f050729c02f7f87eeb785883cafeb3d9537560352ff7a58f50b5630b1c,PodSandboxId:f375334740e2f500ee463bae436969a7464e2ab3cd1aa52a70e3a8f54f2ea408,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1726482238605647796,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15e8a432-87ee-461f-96ce-576b2587d960,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d52d2269e100287dd000b3139d6f09e70acad7cbe081d5ba345ffadc774eb64,PodSandboxId:6fe91ac2288fee7b74e48dab9ebacc7e6f4a0864d1442788fa1bb356cd32d99c,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726482237161655062,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-rls9n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 239fb0bb-898b-4d40-a29f-b0f8f4f52620,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termi
nationGracePeriod: 30,},},&Container{Id:54c4347a1fc2bc58f92b11d3bd442850d2707411d3f5b974124a86f641439b9c,PodSandboxId:d66b1317412a725710e3a74d876e0614ef523665d5d35e69bed8cd20d3e4c267,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726482236992762987,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-dk6l8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 117d6d47-b187-4394-8213-f294b01f8f2d,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0bde3324c47da870cb9ea5d5e391100b947a5f5628087ca822f0b968d5d7d10,PodSandboxId:0eef20d1c681337d85c6cbf7dbb64029a954a6306f78cde4d651b31122cd7f87,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726482204191942200,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-pv2sr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85f5bbdb-96af-4f7d-aef3-644db7166242,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f786c20ceffe35012b49cdfbc150466ebf04c05bab8d9736cd3977b8a655c898,PodSandboxId:ec33782f42717a00b375787866eb14bf4c1a21698df7f4343ab08f7eaff4773a,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726482204058066688,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-8nq94,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b65ff07-8e47-4c5a-883c-f6470e930f61,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,i
o.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d997d75b48ee4660710828d3ee8c19e7e3e7c5b64f4d69cba04ce949ed38433e,PodSandboxId:173b48ab2ab7ffb543810df5a983063619ab4a5c57ad2582cb89cca04eabcc03,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1726482196486082416,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-rj67m,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 18c201dd-1709-499b-83f1-7f76075b6c19,},A
nnotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8193aad1beb5ba639149913d04736ec496c655637790c2e682d4920170661edc,PodSandboxId:f1a3772ce5f7d59b92df6e29b5a34b5ae5a2567c67f359cff00143e415177028,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1726482169760510821,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 10ccbaa1-333f-4586-a1d5-dc73421e2bd1,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20d2f3360f32056e9d5320e2e27b14e89f34121e47fe3ef6dc68434fd12cbe4e,PodSandboxId:748d363148f6690bdd4347da94f06b750dcd9dfc4c9051ba1fd2b1168589dce8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726482159684718183,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name:
storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c435c6db-b60d-4298-9687-bb885202e358,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63d270cbed8d9138d7dfb6c94fecf8a928065d08296ec4b210284e9c80e343ce,PodSandboxId:42b8586a7b29a8b8056b5615562d2ec92c1cdfbb2b00da19cfca2dc059f9128a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726482158120421871,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-j5ndn,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: 207f35d6-991e-4f00-8881-a877648e3c38,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60269ac0552c1882216cfc166f51b2cc05e0c33fccab3cf44f6f6ae889b5377c,PodSandboxId:2bf9dc368debd2d2b877480619135ca9093c981fce759082ec24f6592fb1de92,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063
eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726482154187294521,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-66flj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56e16daa-1626-4b83-a183-7b9ad90ea2d6,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aabe5cb48f97ae6d5205b827466f0ed18fcf88945331a69a0f69c36fdc69b84,PodSandboxId:f7aeaa11c7f4c7baee6c9befcf5929af3394ee8b7430c47ca172d58476ae75a5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_R
UNNING,CreatedAt:1726482142846546597,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-001438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be5914cb158890ae06c5bfa7a9a1647e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d34a4e3596c2106b54d8a84abfb811c307cb2971374422f5c532a60e0cde3a3,PodSandboxId:8a68216be6dee09413e6f31b3a7e5d5ca7545ffd899368560ace1f4086222cc9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:17264821428
45031916,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-001438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6b0151eec9d570509290399a84fe5e0,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a4816dc33e761b9fdbc5948f09898492ce9a2dc128c281cf5085d61d7f1b237,PodSandboxId:ec134844260ab148c1de608f261069c5f8c5a6b0d909d7bbb7bff552dad3e112,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726482142
832730800,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-001438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72c370391c8ab866a62dac47d3033ef8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfff5b2d379857237c75eff7ae9c54b1eaa410285bebed27cc82511c761eff77,PodSandboxId:81f095a38dae111be7d7787f448562b27561a6d73c96775e60b52e2cb77e1d9f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726482142844185577,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-001438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2bdc5cbd50bfb60b2aad0cb9a55828e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=67813497-779a-4494-aa49-0042764c7691 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	c0c62d19fc341       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                                 6 minutes ago       Running             gcp-auth                                 0                   81638f0641649       gcp-auth-89d5ffd79-jg5wz
	4d9f00ee52087       registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6                             6 minutes ago       Running             controller                               0                   f0a70a6b5b4fa       ingress-nginx-controller-bc57996ff-jhd4w
	a4ff4f2e6c350       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          6 minutes ago       Running             csi-snapshotter                          0                   2a34d4424d09e       csi-hostpathplugin-xgk62
	fa45fa1d889cd       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          7 minutes ago       Running             csi-provisioner                          0                   2a34d4424d09e       csi-hostpathplugin-xgk62
	112e37da6f1b0       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            7 minutes ago       Running             liveness-probe                           0                   2a34d4424d09e       csi-hostpathplugin-xgk62
	bcd9404de3e14       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           7 minutes ago       Running             hostpath                                 0                   2a34d4424d09e       csi-hostpathplugin-xgk62
	26165c7625a62       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                7 minutes ago       Running             node-driver-registrar                    0                   2a34d4424d09e       csi-hostpathplugin-xgk62
	35e24c1abefe7       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              7 minutes ago       Running             csi-resizer                              0                   bf02d50932f14       csi-hostpath-resizer-0
	a5edaf3e2dd3d       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   7 minutes ago       Running             csi-external-health-monitor-controller   0                   2a34d4424d09e       csi-hostpathplugin-xgk62
	b8ebd2f050729       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             7 minutes ago       Running             csi-attacher                             0                   f375334740e2f       csi-hostpath-attacher-0
	0d52d2269e100       ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242                                                                             7 minutes ago       Exited              patch                                    1                   6fe91ac2288fe       ingress-nginx-admission-patch-rls9n
	54c4347a1fc2b       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012                   7 minutes ago       Exited              create                                   0                   d66b1317412a7       ingress-nginx-admission-create-dk6l8
	f0bde3324c47d       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      7 minutes ago       Running             volume-snapshot-controller               0                   0eef20d1c6813       snapshot-controller-56fcc65765-pv2sr
	f786c20ceffe3       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      7 minutes ago       Running             volume-snapshot-controller               0                   ec33782f42717       snapshot-controller-56fcc65765-8nq94
	d997d75b48ee4       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             7 minutes ago       Running             local-path-provisioner                   0                   173b48ab2ab7f       local-path-provisioner-86d989889c-rj67m
	8193aad1beb5b       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab                             8 minutes ago       Running             minikube-ingress-dns                     0                   f1a3772ce5f7d       kube-ingress-dns-minikube
	20d2f3360f320       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             8 minutes ago       Running             storage-provisioner                      0                   748d363148f66       storage-provisioner
	63d270cbed8d9       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                                             8 minutes ago       Running             coredns                                  0                   42b8586a7b29a       coredns-7c65d6cfc9-j5ndn
	60269ac0552c1       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                                             8 minutes ago       Running             kube-proxy                               0                   2bf9dc368debd       kube-proxy-66flj
	1aabe5cb48f97       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                                             8 minutes ago       Running             etcd                                     0                   f7aeaa11c7f4c       etcd-addons-001438
	2d34a4e3596c2       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                                             8 minutes ago       Running             kube-controller-manager                  0                   8a68216be6dee       kube-controller-manager-addons-001438
	bfff5b2d37985       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                                             8 minutes ago       Running             kube-apiserver                           0                   81f095a38dae1       kube-apiserver-addons-001438
	5a4816dc33e76       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                                             8 minutes ago       Running             kube-scheduler                           0                   ec134844260ab       kube-scheduler-addons-001438
	
	
	==> coredns [63d270cbed8d9138d7dfb6c94fecf8a928065d08296ec4b210284e9c80e343ce] <==
	[INFO] 127.0.0.1:32820 - 49588 "HINFO IN 5683833228926934535.5808779734602365342. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.027869673s
	[INFO] 10.244.0.7:47242 - 15842 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000350783s
	[INFO] 10.244.0.7:47242 - 29412 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000155576s
	[INFO] 10.244.0.7:51495 - 23321 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000115255s
	[INFO] 10.244.0.7:51495 - 47135 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000085371s
	[INFO] 10.244.0.7:40689 - 10301 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000114089s
	[INFO] 10.244.0.7:40689 - 30779 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00011843s
	[INFO] 10.244.0.7:53526 - 19539 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000127604s
	[INFO] 10.244.0.7:53526 - 34381 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000109337s
	[INFO] 10.244.0.7:39182 - 43658 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000075802s
	[INFO] 10.244.0.7:39182 - 55433 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000031766s
	[INFO] 10.244.0.7:52628 - 35000 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000037386s
	[INFO] 10.244.0.7:52628 - 44218 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000027585s
	[INFO] 10.244.0.7:47656 - 61837 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000028204s
	[INFO] 10.244.0.7:47656 - 39571 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000027731s
	[INFO] 10.244.0.7:53964 - 36235 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000098663s
	[INFO] 10.244.0.7:53964 - 55690 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000045022s
	[INFO] 10.244.0.22:49146 - 11336 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000543634s
	[INFO] 10.244.0.22:44900 - 51750 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000125434s
	[INFO] 10.244.0.22:47266 - 27362 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000158517s
	[INFO] 10.244.0.22:53077 - 63050 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000068888s
	[INFO] 10.244.0.22:52796 - 34381 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000101059s
	[INFO] 10.244.0.22:52167 - 15594 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000126468s
	[INFO] 10.244.0.22:42107 - 54869 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.004149176s
	[INFO] 10.244.0.22:60865 - 20616 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.006078154s
	
	
	==> describe nodes <==
	Name:               addons-001438
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-001438
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=addons-001438
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_22_28_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-001438
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-001438"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:22:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-001438
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:31:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:31:09 +0000   Mon, 16 Sep 2024 10:22:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:31:09 +0000   Mon, 16 Sep 2024 10:22:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:31:09 +0000   Mon, 16 Sep 2024 10:22:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:31:09 +0000   Mon, 16 Sep 2024 10:22:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.72
	  Hostname:    addons-001438
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 8b69a913a20a4259950d0bf801229c28
	  System UUID:                8b69a913-a20a-4259-950d-0bf801229c28
	  Boot ID:                    7d21de27-dd4e-4002-9fc0-df14a0ff761f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (17 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  gcp-auth                    gcp-auth-89d5ffd79-jg5wz                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m31s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-jhd4w    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         8m36s
	  kube-system                 coredns-7c65d6cfc9-j5ndn                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     8m43s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m34s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m33s
	  kube-system                 csi-hostpathplugin-xgk62                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m34s
	  kube-system                 etcd-addons-001438                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m49s
	  kube-system                 kube-apiserver-addons-001438                250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m49s
	  kube-system                 kube-controller-manager-addons-001438       200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m49s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m40s
	  kube-system                 kube-proxy-66flj                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m44s
	  kube-system                 kube-scheduler-addons-001438                100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m49s
	  kube-system                 snapshot-controller-56fcc65765-8nq94        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m37s
	  kube-system                 snapshot-controller-56fcc65765-pv2sr        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m37s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m39s
	  local-path-storage          local-path-provisioner-86d989889c-rj67m     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m37s
	  yakd-dashboard              yakd-dashboard-67d98fc6b-jnpkm              0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     8m37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             388Mi (10%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 8m40s  kube-proxy       
	  Normal  Starting                 8m49s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m49s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m48s  kubelet          Node addons-001438 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m48s  kubelet          Node addons-001438 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m48s  kubelet          Node addons-001438 status is now: NodeHasSufficientPID
	  Normal  NodeReady                8m47s  kubelet          Node addons-001438 status is now: NodeReady
	  Normal  RegisteredNode           8m44s  node-controller  Node addons-001438 event: Registered Node addons-001438 in Controller
	
	
	==> dmesg <==
	[  +4.002627] systemd-fstab-generator[744]: Ignoring "noauto" option for root device
	[  +4.196359] systemd-fstab-generator[862]: Ignoring "noauto" option for root device
	[  +0.061696] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.999876] systemd-fstab-generator[1193]: Ignoring "noauto" option for root device
	[  +0.091472] kauditd_printk_skb: 69 callbacks suppressed
	[  +4.774952] systemd-fstab-generator[1325]: Ignoring "noauto" option for root device
	[  +1.497885] kauditd_printk_skb: 43 callbacks suppressed
	[  +5.466780] kauditd_printk_skb: 125 callbacks suppressed
	[  +5.018877] kauditd_printk_skb: 136 callbacks suppressed
	[  +5.254117] kauditd_printk_skb: 38 callbacks suppressed
	[Sep16 10:23] kauditd_printk_skb: 9 callbacks suppressed
	[ +17.876932] kauditd_printk_skb: 7 callbacks suppressed
	[ +33.888489] kauditd_printk_skb: 37 callbacks suppressed
	[Sep16 10:24] kauditd_printk_skb: 34 callbacks suppressed
	[  +6.263650] kauditd_printk_skb: 76 callbacks suppressed
	[ +48.109785] kauditd_printk_skb: 33 callbacks suppressed
	[Sep16 10:25] kauditd_printk_skb: 3 callbacks suppressed
	[  +7.297596] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.818881] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.121137] kauditd_printk_skb: 19 callbacks suppressed
	[ +29.616490] kauditd_printk_skb: 37 callbacks suppressed
	[Sep16 10:26] kauditd_printk_skb: 2 callbacks suppressed
	[  +8.276540] kauditd_printk_skb: 28 callbacks suppressed
	[Sep16 10:27] kauditd_printk_skb: 2 callbacks suppressed
	[Sep16 10:31] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [1aabe5cb48f97ae6d5205b827466f0ed18fcf88945331a69a0f69c36fdc69b84] <==
	{"level":"info","ts":"2024-09-16T10:25:01.423722Z","caller":"traceutil/trace.go:171","msg":"trace[1526018823] transaction","detail":"{read_only:false; response_revision:1249; number_of_response:1; }","duration":"284.258855ms","start":"2024-09-16T10:25:01.139452Z","end":"2024-09-16T10:25:01.423711Z","steps":["trace[1526018823] 'process raft request'  (duration: 284.165558ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:25:01.424593Z","caller":"traceutil/trace.go:171","msg":"trace[1620023283] linearizableReadLoop","detail":"{readStateIndex:1296; appliedIndex:1296; }","duration":"253.838283ms","start":"2024-09-16T10:25:01.170745Z","end":"2024-09-16T10:25:01.424583Z","steps":["trace[1620023283] 'read index received'  (duration: 253.835456ms)","trace[1620023283] 'applied index is now lower than readState.Index'  (duration: 2.263µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-16T10:25:01.424681Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"253.948565ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T10:25:01.424719Z","caller":"traceutil/trace.go:171","msg":"trace[1658095100] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1249; }","duration":"253.992891ms","start":"2024-09-16T10:25:01.170719Z","end":"2024-09-16T10:25:01.424712Z","steps":["trace[1658095100] 'agreement among raft nodes before linearized reading'  (duration: 253.933158ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:25:01.430878Z","caller":"traceutil/trace.go:171","msg":"trace[196824448] transaction","detail":"{read_only:false; response_revision:1250; number_of_response:1; }","duration":"219.615242ms","start":"2024-09-16T10:25:01.211190Z","end":"2024-09-16T10:25:01.430805Z","steps":["trace[196824448] 'process raft request'  (duration: 217.799649ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:25:01.432286Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"248.218738ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T10:25:01.432549Z","caller":"traceutil/trace.go:171","msg":"trace[1250515915] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1250; }","duration":"248.433899ms","start":"2024-09-16T10:25:01.183901Z","end":"2024-09-16T10:25:01.432335Z","steps":["trace[1250515915] 'agreement among raft nodes before linearized reading'  (duration: 246.789324ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:25:03.917472Z","caller":"traceutil/trace.go:171","msg":"trace[1132617141] linearizableReadLoop","detail":"{readStateIndex:1302; appliedIndex:1301; }","duration":"256.411132ms","start":"2024-09-16T10:25:03.661047Z","end":"2024-09-16T10:25:03.917458Z","steps":["trace[1132617141] 'read index received'  (duration: 256.216658ms)","trace[1132617141] 'applied index is now lower than readState.Index'  (duration: 193.873µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-16T10:25:03.917646Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"256.564415ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshots/\" range_end:\"/registry/snapshot.storage.k8s.io/volumesnapshots0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T10:25:03.917689Z","caller":"traceutil/trace.go:171","msg":"trace[1681803764] range","detail":"{range_begin:/registry/snapshot.storage.k8s.io/volumesnapshots/; range_end:/registry/snapshot.storage.k8s.io/volumesnapshots0; response_count:0; response_revision:1254; }","duration":"256.635309ms","start":"2024-09-16T10:25:03.661043Z","end":"2024-09-16T10:25:03.917678Z","steps":["trace[1681803764] 'agreement among raft nodes before linearized reading'  (duration: 256.524591ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:25:03.917698Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"246.498369ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T10:25:03.917721Z","caller":"traceutil/trace.go:171","msg":"trace[320039730] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1254; }","duration":"246.52737ms","start":"2024-09-16T10:25:03.671187Z","end":"2024-09-16T10:25:03.917715Z","steps":["trace[320039730] 'agreement among raft nodes before linearized reading'  (duration: 246.484981ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:25:03.917824Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"234.395252ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T10:25:03.917834Z","caller":"traceutil/trace.go:171","msg":"trace[699037525] transaction","detail":"{read_only:false; response_revision:1254; number_of_response:1; }","duration":"461.96825ms","start":"2024-09-16T10:25:03.455860Z","end":"2024-09-16T10:25:03.917828Z","steps":["trace[699037525] 'process raft request'  (duration: 461.454179ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:25:03.917838Z","caller":"traceutil/trace.go:171","msg":"trace[618256897] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1254; }","duration":"234.40851ms","start":"2024-09-16T10:25:03.683425Z","end":"2024-09-16T10:25:03.917833Z","steps":["trace[618256897] 'agreement among raft nodes before linearized reading'  (duration: 234.386479ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:25:03.917919Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-16T10:25:03.455845Z","time spent":"462.003063ms","remote":"127.0.0.1:51374","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1251 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-09-16T10:25:42.523876Z","caller":"traceutil/trace.go:171","msg":"trace[565706559] transaction","detail":"{read_only:false; response_revision:1399; number_of_response:1; }","duration":"393.956218ms","start":"2024-09-16T10:25:42.129887Z","end":"2024-09-16T10:25:42.523844Z","steps":["trace[565706559] 'process raft request'  (duration: 393.821788ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:25:42.524080Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-16T10:25:42.129864Z","time spent":"394.119545ms","remote":"127.0.0.1:51374","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1398 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-09-16T10:25:42.533976Z","caller":"traceutil/trace.go:171","msg":"trace[668376333] linearizableReadLoop","detail":"{readStateIndex:1459; appliedIndex:1458; }","duration":"302.69985ms","start":"2024-09-16T10:25:42.231262Z","end":"2024-09-16T10:25:42.533962Z","steps":["trace[668376333] 'read index received'  (duration: 293.491454ms)","trace[668376333] 'applied index is now lower than readState.Index'  (duration: 9.207628ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-16T10:25:42.535969Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"205.605451ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" ","response":"range_response_count:1 size:553"}
	{"level":"info","ts":"2024-09-16T10:25:42.536065Z","caller":"traceutil/trace.go:171","msg":"trace[19888550] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1400; }","duration":"205.726154ms","start":"2024-09-16T10:25:42.330329Z","end":"2024-09-16T10:25:42.536056Z","steps":["trace[19888550] 'agreement among raft nodes before linearized reading'  (duration: 205.527055ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:25:42.536191Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"304.924785ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T10:25:42.536244Z","caller":"traceutil/trace.go:171","msg":"trace[1740705082] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1400; }","duration":"304.971706ms","start":"2024-09-16T10:25:42.231257Z","end":"2024-09-16T10:25:42.536228Z","steps":["trace[1740705082] 'agreement among raft nodes before linearized reading'  (duration: 304.915956ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:25:42.537030Z","caller":"traceutil/trace.go:171","msg":"trace[778126279] transaction","detail":"{read_only:false; response_revision:1400; number_of_response:1; }","duration":"337.225123ms","start":"2024-09-16T10:25:42.199749Z","end":"2024-09-16T10:25:42.536974Z","steps":["trace[778126279] 'process raft request'  (duration: 333.931313ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:25:42.537228Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-16T10:25:42.199733Z","time spent":"337.391985ms","remote":"127.0.0.1:51498","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":540,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/addons-001438\" mod_revision:1384 > success:<request_put:<key:\"/registry/leases/kube-node-lease/addons-001438\" value_size:486 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/addons-001438\" > >"}
	
	
	==> gcp-auth [c0c62d19fc341b10ebf89fe58a47a68881bae991a531961d93c1579a9a1948e7] <==
	2024/09/16 10:25:06 GCP Auth Webhook started!
	2024/09/16 10:25:09 Ready to marshal response ...
	2024/09/16 10:25:09 Ready to write response ...
	2024/09/16 10:25:09 Ready to marshal response ...
	2024/09/16 10:25:09 Ready to write response ...
	2024/09/16 10:25:09 Ready to marshal response ...
	2024/09/16 10:25:09 Ready to write response ...
	
	
	==> kernel <==
	 10:31:16 up 9 min,  0 users,  load average: 0.05, 0.48, 0.40
	Linux addons-001438 5.10.207 #1 SMP Sun Sep 15 20:39:46 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [bfff5b2d379857237c75eff7ae9c54b1eaa410285bebed27cc82511c761eff77] <==
	I0916 10:22:40.932409       1 controller.go:615] quota admission added evaluator for: jobs.batch
	I0916 10:22:42.426039       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-attacher" clusterIPs={"IPv4":"10.106.146.100"}
	I0916 10:22:42.456409       1 controller.go:615] quota admission added evaluator for: statefulsets.apps
	I0916 10:22:42.660969       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.110.102.193"}
	I0916 10:22:44.945009       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.106.134.141"}
	W0916 10:23:38.948410       1 handler_proxy.go:99] no RequestInfo found in the context
	E0916 10:23:38.948711       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0916 10:23:38.949896       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0916 10:23:38.958493       1 handler_proxy.go:99] no RequestInfo found in the context
	E0916 10:23:38.958543       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0916 10:23:38.959752       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0916 10:24:18.395108       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.30.150:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.99.30.150:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.99.30.150:443: connect: connection refused" logger="UnhandledError"
	W0916 10:24:18.395300       1 handler_proxy.go:99] no RequestInfo found in the context
	E0916 10:24:18.395442       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0916 10:24:18.398244       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.30.150:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.99.30.150:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.99.30.150:443: connect: connection refused" logger="UnhandledError"
	I0916 10:24:18.453414       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0916 10:25:09.633337       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.110.80.80"}
	I0916 10:27:07.962789       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0916 10:27:08.990230       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	
	
	==> kube-controller-manager [2d34a4e3596c2106b54d8a84abfb811c307cb2971374422f5c532a60e0cde3a3] <==
	I0916 10:27:16.859651       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-67d98fc6b" duration="77.465µs"
	W0916 10:27:17.976531       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:27:17.976597       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0916 10:27:18.171334       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
	I0916 10:27:19.596965       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/cloud-spanner-emulator-769b77f747" duration="4.818µs"
	W0916 10:27:29.140580       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:27:29.140708       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0916 10:27:32.400681       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0916 10:27:32.400818       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:27:32.833300       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0916 10:27:32.833453       1 shared_informer.go:320] Caches are synced for garbage collector
	W0916 10:27:52.111053       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:27:52.111207       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 10:28:17.834164       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:28:17.834292       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0916 10:29:03.861818       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-67d98fc6b" duration="211.968µs"
	W0916 10:29:17.755994       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:29:17.756149       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0916 10:29:18.856763       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-67d98fc6b" duration="136.61µs"
	W0916 10:30:11.061208       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:30:11.061443       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 10:30:57.147741       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:30:57.147896       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0916 10:31:09.101904       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="6.613µs"
	I0916 10:31:09.180185       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-001438"
	
	
	==> kube-proxy [60269ac0552c1882216cfc166f51b2cc05e0c33fccab3cf44f6f6ae889b5377c] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0916 10:22:35.282699       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0916 10:22:35.409784       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.72"]
	E0916 10:22:35.409847       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:22:36.135283       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0916 10:22:36.135476       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0916 10:22:36.135545       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:22:36.146626       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:22:36.146849       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:22:36.146861       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:22:36.156579       1 config.go:199] "Starting service config controller"
	I0916 10:22:36.156604       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:22:36.166809       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:22:36.166838       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:22:36.168180       1 config.go:328] "Starting node config controller"
	I0916 10:22:36.168189       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:22:36.258515       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:22:36.268518       1 shared_informer.go:320] Caches are synced for node config
	I0916 10:22:36.268639       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [5a4816dc33e761b9fdbc5948f09898492ce9a2dc128c281cf5085d61d7f1b237] <==
	W0916 10:22:25.363221       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 10:22:25.363254       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:22:25.363389       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0916 10:22:25.363420       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 10:22:25.363573       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0916 10:22:25.363425       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:22:25.363533       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 10:22:25.363941       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:22:26.174422       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 10:22:26.174473       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:22:26.225213       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 10:22:26.225308       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:22:26.333904       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 10:22:26.333957       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:22:26.350221       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 10:22:26.350326       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:22:26.406843       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 10:22:26.406982       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:22:26.446248       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 10:22:26.446395       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:22:26.547116       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 10:22:26.547206       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:22:26.704254       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 10:22:26.704303       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0916 10:22:28.953769       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 10:30:28 addons-001438 kubelet[1200]: E0916 10:30:28.226096    1200 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482628225334910,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:458879,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:30:28 addons-001438 kubelet[1200]: E0916 10:30:28.226123    1200 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482628225334910,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:458879,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:30:34 addons-001438 kubelet[1200]: E0916 10:30:34.841692    1200 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"yakd\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624\\\"\"" pod="yakd-dashboard/yakd-dashboard-67d98fc6b-jnpkm" podUID="7d5fb34e-a0b6-4b26-9fd6-2ecc1ecc3981"
	Sep 16 10:30:38 addons-001438 kubelet[1200]: E0916 10:30:38.228542    1200 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482638228076062,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:458879,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:30:38 addons-001438 kubelet[1200]: E0916 10:30:38.228926    1200 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482638228076062,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:458879,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:30:40 addons-001438 kubelet[1200]: I0916 10:30:40.839662    1200 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/coredns-7c65d6cfc9-j5ndn" secret="" err="secret \"gcp-auth\" not found"
	Sep 16 10:30:48 addons-001438 kubelet[1200]: E0916 10:30:48.232295    1200 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482648231815580,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:458879,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:30:48 addons-001438 kubelet[1200]: E0916 10:30:48.232991    1200 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482648231815580,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:458879,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:30:49 addons-001438 kubelet[1200]: E0916 10:30:49.840427    1200 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"yakd\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624\\\"\"" pod="yakd-dashboard/yakd-dashboard-67d98fc6b-jnpkm" podUID="7d5fb34e-a0b6-4b26-9fd6-2ecc1ecc3981"
	Sep 16 10:30:58 addons-001438 kubelet[1200]: E0916 10:30:58.235433    1200 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482658234973287,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:458879,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:30:58 addons-001438 kubelet[1200]: E0916 10:30:58.235474    1200 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482658234973287,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:458879,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:31:01 addons-001438 kubelet[1200]: E0916 10:31:01.843288    1200 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"yakd\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624\\\"\"" pod="yakd-dashboard/yakd-dashboard-67d98fc6b-jnpkm" podUID="7d5fb34e-a0b6-4b26-9fd6-2ecc1ecc3981"
	Sep 16 10:31:08 addons-001438 kubelet[1200]: E0916 10:31:08.239282    1200 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482668238871323,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:458879,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:31:08 addons-001438 kubelet[1200]: E0916 10:31:08.239653    1200 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482668238871323,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:458879,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:31:10 addons-001438 kubelet[1200]: I0916 10:31:10.552849    1200 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/76382ab7-9b7a-4bd6-b19c-7a77ba051f1d-tmp-dir\") pod \"76382ab7-9b7a-4bd6-b19c-7a77ba051f1d\" (UID: \"76382ab7-9b7a-4bd6-b19c-7a77ba051f1d\") "
	Sep 16 10:31:10 addons-001438 kubelet[1200]: I0916 10:31:10.552910    1200 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfr2l\" (UniqueName: \"kubernetes.io/projected/76382ab7-9b7a-4bd6-b19c-7a77ba051f1d-kube-api-access-nfr2l\") pod \"76382ab7-9b7a-4bd6-b19c-7a77ba051f1d\" (UID: \"76382ab7-9b7a-4bd6-b19c-7a77ba051f1d\") "
	Sep 16 10:31:10 addons-001438 kubelet[1200]: I0916 10:31:10.553725    1200 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76382ab7-9b7a-4bd6-b19c-7a77ba051f1d-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "76382ab7-9b7a-4bd6-b19c-7a77ba051f1d" (UID: "76382ab7-9b7a-4bd6-b19c-7a77ba051f1d"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Sep 16 10:31:10 addons-001438 kubelet[1200]: I0916 10:31:10.557317    1200 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76382ab7-9b7a-4bd6-b19c-7a77ba051f1d-kube-api-access-nfr2l" (OuterVolumeSpecName: "kube-api-access-nfr2l") pod "76382ab7-9b7a-4bd6-b19c-7a77ba051f1d" (UID: "76382ab7-9b7a-4bd6-b19c-7a77ba051f1d"). InnerVolumeSpecName "kube-api-access-nfr2l". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 16 10:31:10 addons-001438 kubelet[1200]: I0916 10:31:10.653408    1200 reconciler_common.go:288] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/76382ab7-9b7a-4bd6-b19c-7a77ba051f1d-tmp-dir\") on node \"addons-001438\" DevicePath \"\""
	Sep 16 10:31:10 addons-001438 kubelet[1200]: I0916 10:31:10.653485    1200 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-nfr2l\" (UniqueName: \"kubernetes.io/projected/76382ab7-9b7a-4bd6-b19c-7a77ba051f1d-kube-api-access-nfr2l\") on node \"addons-001438\" DevicePath \"\""
	Sep 16 10:31:10 addons-001438 kubelet[1200]: I0916 10:31:10.764247    1200 scope.go:117] "RemoveContainer" containerID="0024bbca27aaca342e5d8f82c4da128027c7976c1e62ed95ae9cf694c9f92bba"
	Sep 16 10:31:10 addons-001438 kubelet[1200]: I0916 10:31:10.797878    1200 scope.go:117] "RemoveContainer" containerID="0024bbca27aaca342e5d8f82c4da128027c7976c1e62ed95ae9cf694c9f92bba"
	Sep 16 10:31:10 addons-001438 kubelet[1200]: E0916 10:31:10.801088    1200 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0024bbca27aaca342e5d8f82c4da128027c7976c1e62ed95ae9cf694c9f92bba\": container with ID starting with 0024bbca27aaca342e5d8f82c4da128027c7976c1e62ed95ae9cf694c9f92bba not found: ID does not exist" containerID="0024bbca27aaca342e5d8f82c4da128027c7976c1e62ed95ae9cf694c9f92bba"
	Sep 16 10:31:10 addons-001438 kubelet[1200]: I0916 10:31:10.801139    1200 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0024bbca27aaca342e5d8f82c4da128027c7976c1e62ed95ae9cf694c9f92bba"} err="failed to get container status \"0024bbca27aaca342e5d8f82c4da128027c7976c1e62ed95ae9cf694c9f92bba\": rpc error: code = NotFound desc = could not find container \"0024bbca27aaca342e5d8f82c4da128027c7976c1e62ed95ae9cf694c9f92bba\": container with ID starting with 0024bbca27aaca342e5d8f82c4da128027c7976c1e62ed95ae9cf694c9f92bba not found: ID does not exist"
	Sep 16 10:31:11 addons-001438 kubelet[1200]: I0916 10:31:11.843498    1200 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76382ab7-9b7a-4bd6-b19c-7a77ba051f1d" path="/var/lib/kubelet/pods/76382ab7-9b7a-4bd6-b19c-7a77ba051f1d/volumes"
	
	
	==> storage-provisioner [20d2f3360f32056e9d5320e2e27b14e89f34121e47fe3ef6dc68434fd12cbe4e] <==
	I0916 10:22:41.307950       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 10:22:41.369058       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 10:22:41.369154       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 10:22:41.391597       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 10:22:41.391782       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-001438_7c863089-ab00-4bae-802b-9e04d7461e0b!
	I0916 10:22:41.394290       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"97b3cde4-08a8-47d7-a9cc-7251679ab4d1", APIVersion:"v1", ResourceVersion:"712", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-001438_7c863089-ab00-4bae-802b-9e04d7461e0b became leader
	I0916 10:22:41.492688       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-001438_7c863089-ab00-4bae-802b-9e04d7461e0b!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-001438 -n addons-001438
helpers_test.go:261: (dbg) Run:  kubectl --context addons-001438 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context addons-001438 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (474.315µs)
helpers_test.go:263: kubectl --context addons-001438 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestAddons/parallel/CSI (362.04s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (0s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-001438 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:982: (dbg) Non-zero exit: kubectl --context addons-001438 apply -f testdata/storage-provisioner-rancher/pvc.yaml: fork/exec /usr/local/bin/kubectl: exec format error (354.856µs)
addons_test.go:984: kubectl apply pvc.yaml failed: args "kubectl --context addons-001438 apply -f testdata/storage-provisioner-rancher/pvc.yaml": fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestAddons/parallel/LocalPath (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (122.28s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-jnpkm" [7d5fb34e-a0b6-4b26-9fd6-2ecc1ecc3981] Pending / Ready:ContainersNotReady (containers with unready status: [yakd]) / ContainersReady:ContainersNotReady (containers with unready status: [yakd])
helpers_test.go:329: TestAddons/parallel/Yakd: WARNING: pod list for "yakd-dashboard" "app.kubernetes.io/name=yakd-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:1072: ***** TestAddons/parallel/Yakd: pod "app.kubernetes.io/name=yakd-dashboard" failed to start within 2m0s: context deadline exceeded ****
addons_test.go:1072: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-001438 -n addons-001438
addons_test.go:1072: TestAddons/parallel/Yakd: showing logs for failed pods as of 2024-09-16 10:27:09.072503143 +0000 UTC m=+343.295621593
addons_test.go:1072: (dbg) Run:  kubectl --context addons-001438 describe po yakd-dashboard-67d98fc6b-jnpkm -n yakd-dashboard
addons_test.go:1072: (dbg) Non-zero exit: kubectl --context addons-001438 describe po yakd-dashboard-67d98fc6b-jnpkm -n yakd-dashboard: fork/exec /usr/local/bin/kubectl: exec format error (374.409µs)
addons_test.go:1072: kubectl --context addons-001438 describe po yakd-dashboard-67d98fc6b-jnpkm -n yakd-dashboard: fork/exec /usr/local/bin/kubectl: exec format error
addons_test.go:1072: (dbg) Run:  kubectl --context addons-001438 logs yakd-dashboard-67d98fc6b-jnpkm -n yakd-dashboard
addons_test.go:1072: (dbg) Non-zero exit: kubectl --context addons-001438 logs yakd-dashboard-67d98fc6b-jnpkm -n yakd-dashboard: fork/exec /usr/local/bin/kubectl: exec format error (358.859µs)
addons_test.go:1072: kubectl --context addons-001438 logs yakd-dashboard-67d98fc6b-jnpkm -n yakd-dashboard: fork/exec /usr/local/bin/kubectl: exec format error
addons_test.go:1073: failed waiting for YAKD - Kubernetes Dashboard pod: app.kubernetes.io/name=yakd-dashboard within 2m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-001438 -n addons-001438
helpers_test.go:244: <<< TestAddons/parallel/Yakd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Yakd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-001438 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-001438 logs -n 25: (1.357203705s)
helpers_test.go:252: TestAddons/parallel/Yakd logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-931581 | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC |                     |
	|         | -p download-only-931581              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC | 16 Sep 24 10:21 UTC |
	| delete  | -p download-only-931581              | download-only-931581 | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC | 16 Sep 24 10:21 UTC |
	| start   | -o=json --download-only              | download-only-573915 | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC |                     |
	|         | -p download-only-573915              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC | 16 Sep 24 10:21 UTC |
	| delete  | -p download-only-573915              | download-only-573915 | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC | 16 Sep 24 10:21 UTC |
	| delete  | -p download-only-931581              | download-only-931581 | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC | 16 Sep 24 10:21 UTC |
	| delete  | -p download-only-573915              | download-only-573915 | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC | 16 Sep 24 10:21 UTC |
	| start   | --download-only -p                   | binary-mirror-928489 | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC |                     |
	|         | binary-mirror-928489                 |                      |         |         |                     |                     |
	|         | --alsologtostderr                    |                      |         |         |                     |                     |
	|         | --binary-mirror                      |                      |         |         |                     |                     |
	|         | http://127.0.0.1:42715               |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-928489              | binary-mirror-928489 | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC | 16 Sep 24 10:21 UTC |
	| addons  | enable dashboard -p                  | addons-001438        | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC |                     |
	|         | addons-001438                        |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-001438        | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC |                     |
	|         | addons-001438                        |                      |         |         |                     |                     |
	| start   | -p addons-001438 --wait=true         | addons-001438        | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC | 16 Sep 24 10:25 UTC |
	|         | --memory=4000 --alsologtostderr      |                      |         |         |                     |                     |
	|         | --addons=registry                    |                      |         |         |                     |                     |
	|         | --addons=metrics-server              |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	|         | --addons=ingress                     |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                 |                      |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-001438        | jenkins | v1.34.0 | 16 Sep 24 10:25 UTC | 16 Sep 24 10:25 UTC |
	|         | -p addons-001438                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin         | addons-001438        | jenkins | v1.34.0 | 16 Sep 24 10:25 UTC | 16 Sep 24 10:25 UTC |
	|         | -p addons-001438                     |                      |         |         |                     |                     |
	| ip      | addons-001438 ip                     | addons-001438        | jenkins | v1.34.0 | 16 Sep 24 10:25 UTC | 16 Sep 24 10:25 UTC |
	| addons  | addons-001438 addons disable         | addons-001438        | jenkins | v1.34.0 | 16 Sep 24 10:25 UTC | 16 Sep 24 10:25 UTC |
	|         | registry --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | addons-001438 addons disable         | addons-001438        | jenkins | v1.34.0 | 16 Sep 24 10:25 UTC | 16 Sep 24 10:25 UTC |
	|         | headlamp --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | addons-001438 addons disable         | addons-001438        | jenkins | v1.34.0 | 16 Sep 24 10:26 UTC | 16 Sep 24 10:27 UTC |
	|         | helm-tiller --alsologtostderr        |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-001438        | jenkins | v1.34.0 | 16 Sep 24 10:27 UTC |                     |
	|         | addons-001438                        |                      |         |         |                     |                     |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:21:42
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:21:42.990297   12265 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:21:42.990427   12265 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:21:42.990438   12265 out.go:358] Setting ErrFile to fd 2...
	I0916 10:21:42.990444   12265 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:21:42.990619   12265 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3851/.minikube/bin
	I0916 10:21:42.991237   12265 out.go:352] Setting JSON to false
	I0916 10:21:42.992075   12265 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":253,"bootTime":1726481850,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:21:42.992165   12265 start.go:139] virtualization: kvm guest
	I0916 10:21:42.994057   12265 out.go:177] * [addons-001438] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:21:42.995363   12265 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:21:42.995366   12265 notify.go:220] Checking for updates...
	I0916 10:21:42.996620   12265 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:21:42.997884   12265 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3851/kubeconfig
	I0916 10:21:42.999244   12265 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 10:21:43.000448   12265 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:21:43.001744   12265 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:21:43.003140   12265 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:21:43.035292   12265 out.go:177] * Using the kvm2 driver based on user configuration
	I0916 10:21:43.036591   12265 start.go:297] selected driver: kvm2
	I0916 10:21:43.036604   12265 start.go:901] validating driver "kvm2" against <nil>
	I0916 10:21:43.036617   12265 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:21:43.037618   12265 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:21:43.037687   12265 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19651-3851/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0916 10:21:43.052612   12265 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0916 10:21:43.052654   12265 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 10:21:43.052880   12265 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:21:43.052910   12265 cni.go:84] Creating CNI manager for ""
	I0916 10:21:43.052948   12265 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0916 10:21:43.052956   12265 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0916 10:21:43.053000   12265 start.go:340] cluster config:
	{Name:addons-001438 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-001438 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:21:43.053089   12265 iso.go:125] acquiring lock: {Name:mk8165b793e44378487d96b0de120a258f46e187 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:21:43.054779   12265 out.go:177] * Starting "addons-001438" primary control-plane node in "addons-001438" cluster
	I0916 10:21:43.056048   12265 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:21:43.056073   12265 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0916 10:21:43.056099   12265 cache.go:56] Caching tarball of preloaded images
	I0916 10:21:43.056171   12265 preload.go:172] Found /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 10:21:43.056181   12265 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 10:21:43.056464   12265 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/config.json ...
	I0916 10:21:43.056479   12265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/config.json: {Name:mke7feffe145119f1110e818375562c2195d4fa4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:21:43.056601   12265 start.go:360] acquireMachinesLock for addons-001438: {Name:mk413037138532b69f77e412c3960796c09316f1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:21:43.056638   12265 start.go:364] duration metric: took 25.099µs to acquireMachinesLock for "addons-001438"
	I0916 10:21:43.056653   12265 start.go:93] Provisioning new machine with config: &{Name:addons-001438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-001438 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 10:21:43.056703   12265 start.go:125] createHost starting for "" (driver="kvm2")
	I0916 10:21:43.058226   12265 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0916 10:21:43.058340   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:21:43.058376   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:21:43.072993   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45045
	I0916 10:21:43.073475   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:21:43.073995   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:21:43.074020   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:21:43.074422   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:21:43.074620   12265 main.go:141] libmachine: (addons-001438) Calling .GetMachineName
	I0916 10:21:43.074787   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:21:43.074946   12265 start.go:159] libmachine.API.Create for "addons-001438" (driver="kvm2")
	I0916 10:21:43.074989   12265 client.go:168] LocalClient.Create starting
	I0916 10:21:43.075021   12265 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem
	I0916 10:21:43.311518   12265 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem
	I0916 10:21:43.475888   12265 main.go:141] libmachine: Running pre-create checks...
	I0916 10:21:43.475917   12265 main.go:141] libmachine: (addons-001438) Calling .PreCreateCheck
	I0916 10:21:43.476396   12265 main.go:141] libmachine: (addons-001438) Calling .GetConfigRaw
	I0916 10:21:43.476796   12265 main.go:141] libmachine: Creating machine...
	I0916 10:21:43.476809   12265 main.go:141] libmachine: (addons-001438) Calling .Create
	I0916 10:21:43.476954   12265 main.go:141] libmachine: (addons-001438) Creating KVM machine...
	I0916 10:21:43.478137   12265 main.go:141] libmachine: (addons-001438) DBG | found existing default KVM network
	I0916 10:21:43.478893   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:43.478751   12287 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001151f0}
	I0916 10:21:43.478937   12265 main.go:141] libmachine: (addons-001438) DBG | created network xml: 
	I0916 10:21:43.478958   12265 main.go:141] libmachine: (addons-001438) DBG | <network>
	I0916 10:21:43.478967   12265 main.go:141] libmachine: (addons-001438) DBG |   <name>mk-addons-001438</name>
	I0916 10:21:43.478974   12265 main.go:141] libmachine: (addons-001438) DBG |   <dns enable='no'/>
	I0916 10:21:43.478986   12265 main.go:141] libmachine: (addons-001438) DBG |   
	I0916 10:21:43.478998   12265 main.go:141] libmachine: (addons-001438) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0916 10:21:43.479006   12265 main.go:141] libmachine: (addons-001438) DBG |     <dhcp>
	I0916 10:21:43.479018   12265 main.go:141] libmachine: (addons-001438) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0916 10:21:43.479026   12265 main.go:141] libmachine: (addons-001438) DBG |     </dhcp>
	I0916 10:21:43.479036   12265 main.go:141] libmachine: (addons-001438) DBG |   </ip>
	I0916 10:21:43.479087   12265 main.go:141] libmachine: (addons-001438) DBG |   
	I0916 10:21:43.479109   12265 main.go:141] libmachine: (addons-001438) DBG | </network>
	I0916 10:21:43.479150   12265 main.go:141] libmachine: (addons-001438) DBG | 
	I0916 10:21:43.484546   12265 main.go:141] libmachine: (addons-001438) DBG | trying to create private KVM network mk-addons-001438 192.168.39.0/24...
	I0916 10:21:43.547822   12265 main.go:141] libmachine: (addons-001438) DBG | private KVM network mk-addons-001438 192.168.39.0/24 created
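The private libvirt network is created straight from the <network> XML printed above. A minimal sketch of doing the same definition and inspection by hand with the standard virsh CLI (the file name mk-addons-001438.xml is hypothetical; the network name and CIDR come from the log):

    # define and start a network from the XML shown in the log
    virsh net-define mk-addons-001438.xml
    virsh net-start mk-addons-001438

    # confirm it exists and inspect what libvirt generated
    virsh net-list --all
    virsh net-dumpxml mk-addons-001438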
	I0916 10:21:43.547845   12265 main.go:141] libmachine: (addons-001438) Setting up store path in /home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438 ...
	I0916 10:21:43.547862   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:43.547813   12287 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 10:21:43.547875   12265 main.go:141] libmachine: (addons-001438) Building disk image from file:///home/jenkins/minikube-integration/19651-3851/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso
	I0916 10:21:43.547936   12265 main.go:141] libmachine: (addons-001438) Downloading /home/jenkins/minikube-integration/19651-3851/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19651-3851/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso...
	I0916 10:21:43.797047   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:43.796916   12287 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa...
	I0916 10:21:43.906021   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:43.905909   12287 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/addons-001438.rawdisk...
	I0916 10:21:43.906051   12265 main.go:141] libmachine: (addons-001438) DBG | Writing magic tar header
	I0916 10:21:43.906060   12265 main.go:141] libmachine: (addons-001438) DBG | Writing SSH key tar header
	I0916 10:21:43.906067   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:43.906027   12287 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438 ...
	I0916 10:21:43.906123   12265 main.go:141] libmachine: (addons-001438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438
	I0916 10:21:43.906172   12265 main.go:141] libmachine: (addons-001438) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438 (perms=drwx------)
	I0916 10:21:43.906194   12265 main.go:141] libmachine: (addons-001438) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851/.minikube/machines (perms=drwxr-xr-x)
	I0916 10:21:43.906204   12265 main.go:141] libmachine: (addons-001438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851/.minikube/machines
	I0916 10:21:43.906222   12265 main.go:141] libmachine: (addons-001438) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851/.minikube (perms=drwxr-xr-x)
	I0916 10:21:43.906230   12265 main.go:141] libmachine: (addons-001438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 10:21:43.906236   12265 main.go:141] libmachine: (addons-001438) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851 (perms=drwxrwxr-x)
	I0916 10:21:43.906243   12265 main.go:141] libmachine: (addons-001438) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0916 10:21:43.906248   12265 main.go:141] libmachine: (addons-001438) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0916 10:21:43.906258   12265 main.go:141] libmachine: (addons-001438) Creating domain...
	I0916 10:21:43.906264   12265 main.go:141] libmachine: (addons-001438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851
	I0916 10:21:43.906275   12265 main.go:141] libmachine: (addons-001438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0916 10:21:43.906309   12265 main.go:141] libmachine: (addons-001438) DBG | Checking permissions on dir: /home/jenkins
	I0916 10:21:43.906325   12265 main.go:141] libmachine: (addons-001438) DBG | Checking permissions on dir: /home
	I0916 10:21:43.906338   12265 main.go:141] libmachine: (addons-001438) DBG | Skipping /home - not owner
	I0916 10:21:43.907204   12265 main.go:141] libmachine: (addons-001438) define libvirt domain using xml: 
	I0916 10:21:43.907223   12265 main.go:141] libmachine: (addons-001438) <domain type='kvm'>
	I0916 10:21:43.907235   12265 main.go:141] libmachine: (addons-001438)   <name>addons-001438</name>
	I0916 10:21:43.907246   12265 main.go:141] libmachine: (addons-001438)   <memory unit='MiB'>4000</memory>
	I0916 10:21:43.907255   12265 main.go:141] libmachine: (addons-001438)   <vcpu>2</vcpu>
	I0916 10:21:43.907265   12265 main.go:141] libmachine: (addons-001438)   <features>
	I0916 10:21:43.907274   12265 main.go:141] libmachine: (addons-001438)     <acpi/>
	I0916 10:21:43.907282   12265 main.go:141] libmachine: (addons-001438)     <apic/>
	I0916 10:21:43.907294   12265 main.go:141] libmachine: (addons-001438)     <pae/>
	I0916 10:21:43.907307   12265 main.go:141] libmachine: (addons-001438)     
	I0916 10:21:43.907318   12265 main.go:141] libmachine: (addons-001438)   </features>
	I0916 10:21:43.907327   12265 main.go:141] libmachine: (addons-001438)   <cpu mode='host-passthrough'>
	I0916 10:21:43.907337   12265 main.go:141] libmachine: (addons-001438)   
	I0916 10:21:43.907349   12265 main.go:141] libmachine: (addons-001438)   </cpu>
	I0916 10:21:43.907364   12265 main.go:141] libmachine: (addons-001438)   <os>
	I0916 10:21:43.907373   12265 main.go:141] libmachine: (addons-001438)     <type>hvm</type>
	I0916 10:21:43.907383   12265 main.go:141] libmachine: (addons-001438)     <boot dev='cdrom'/>
	I0916 10:21:43.907392   12265 main.go:141] libmachine: (addons-001438)     <boot dev='hd'/>
	I0916 10:21:43.907402   12265 main.go:141] libmachine: (addons-001438)     <bootmenu enable='no'/>
	I0916 10:21:43.907415   12265 main.go:141] libmachine: (addons-001438)   </os>
	I0916 10:21:43.907427   12265 main.go:141] libmachine: (addons-001438)   <devices>
	I0916 10:21:43.907435   12265 main.go:141] libmachine: (addons-001438)     <disk type='file' device='cdrom'>
	I0916 10:21:43.907452   12265 main.go:141] libmachine: (addons-001438)       <source file='/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/boot2docker.iso'/>
	I0916 10:21:43.907463   12265 main.go:141] libmachine: (addons-001438)       <target dev='hdc' bus='scsi'/>
	I0916 10:21:43.907489   12265 main.go:141] libmachine: (addons-001438)       <readonly/>
	I0916 10:21:43.907508   12265 main.go:141] libmachine: (addons-001438)     </disk>
	I0916 10:21:43.907518   12265 main.go:141] libmachine: (addons-001438)     <disk type='file' device='disk'>
	I0916 10:21:43.907531   12265 main.go:141] libmachine: (addons-001438)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0916 10:21:43.907547   12265 main.go:141] libmachine: (addons-001438)       <source file='/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/addons-001438.rawdisk'/>
	I0916 10:21:43.907558   12265 main.go:141] libmachine: (addons-001438)       <target dev='hda' bus='virtio'/>
	I0916 10:21:43.907568   12265 main.go:141] libmachine: (addons-001438)     </disk>
	I0916 10:21:43.907583   12265 main.go:141] libmachine: (addons-001438)     <interface type='network'>
	I0916 10:21:43.907595   12265 main.go:141] libmachine: (addons-001438)       <source network='mk-addons-001438'/>
	I0916 10:21:43.907606   12265 main.go:141] libmachine: (addons-001438)       <model type='virtio'/>
	I0916 10:21:43.907616   12265 main.go:141] libmachine: (addons-001438)     </interface>
	I0916 10:21:43.907624   12265 main.go:141] libmachine: (addons-001438)     <interface type='network'>
	I0916 10:21:43.907634   12265 main.go:141] libmachine: (addons-001438)       <source network='default'/>
	I0916 10:21:43.907645   12265 main.go:141] libmachine: (addons-001438)       <model type='virtio'/>
	I0916 10:21:43.907667   12265 main.go:141] libmachine: (addons-001438)     </interface>
	I0916 10:21:43.907687   12265 main.go:141] libmachine: (addons-001438)     <serial type='pty'>
	I0916 10:21:43.907697   12265 main.go:141] libmachine: (addons-001438)       <target port='0'/>
	I0916 10:21:43.907706   12265 main.go:141] libmachine: (addons-001438)     </serial>
	I0916 10:21:43.907717   12265 main.go:141] libmachine: (addons-001438)     <console type='pty'>
	I0916 10:21:43.907735   12265 main.go:141] libmachine: (addons-001438)       <target type='serial' port='0'/>
	I0916 10:21:43.907745   12265 main.go:141] libmachine: (addons-001438)     </console>
	I0916 10:21:43.907758   12265 main.go:141] libmachine: (addons-001438)     <rng model='virtio'>
	I0916 10:21:43.907772   12265 main.go:141] libmachine: (addons-001438)       <backend model='random'>/dev/random</backend>
	I0916 10:21:43.907777   12265 main.go:141] libmachine: (addons-001438)     </rng>
	I0916 10:21:43.907785   12265 main.go:141] libmachine: (addons-001438)     
	I0916 10:21:43.907794   12265 main.go:141] libmachine: (addons-001438)     
	I0916 10:21:43.907804   12265 main.go:141] libmachine: (addons-001438)   </devices>
	I0916 10:21:43.907814   12265 main.go:141] libmachine: (addons-001438) </domain>
	I0916 10:21:43.907826   12265 main.go:141] libmachine: (addons-001438) 
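The <domain> XML above is what the kvm2 driver hands to libvirt: 2 vCPUs, 4000 MiB of RAM, the boot2docker ISO as a CD-ROM, the raw disk, and two virtio NICs (one on mk-addons-001438, one on the default network). A rough manual equivalent with virsh, assuming the XML were saved to a hypothetical addons-001438.xml:

    virsh define addons-001438.xml     # register the domain
    virsh start addons-001438          # boot it
    virsh dumpxml addons-001438        # shows the MAC addresses libvirt assigned (52:54:00:... below)
    virsh list --all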
	I0916 10:21:43.913322   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:98:e7:17 in network default
	I0916 10:21:43.913924   12265 main.go:141] libmachine: (addons-001438) Ensuring networks are active...
	I0916 10:21:43.913942   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:43.914588   12265 main.go:141] libmachine: (addons-001438) Ensuring network default is active
	I0916 10:21:43.914879   12265 main.go:141] libmachine: (addons-001438) Ensuring network mk-addons-001438 is active
	I0916 10:21:43.915337   12265 main.go:141] libmachine: (addons-001438) Getting domain xml...
	I0916 10:21:43.915987   12265 main.go:141] libmachine: (addons-001438) Creating domain...
	I0916 10:21:45.289678   12265 main.go:141] libmachine: (addons-001438) Waiting to get IP...
	I0916 10:21:45.290387   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:45.290811   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:45.290836   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:45.290776   12287 retry.go:31] will retry after 253.823507ms: waiting for machine to come up
	I0916 10:21:45.546308   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:45.546737   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:45.546757   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:45.546713   12287 retry.go:31] will retry after 316.98215ms: waiting for machine to come up
	I0916 10:21:45.865275   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:45.865712   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:45.865742   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:45.865673   12287 retry.go:31] will retry after 438.875906ms: waiting for machine to come up
	I0916 10:21:46.306361   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:46.306829   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:46.306854   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:46.306787   12287 retry.go:31] will retry after 378.922529ms: waiting for machine to come up
	I0916 10:21:46.687272   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:46.687683   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:46.687718   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:46.687648   12287 retry.go:31] will retry after 695.664658ms: waiting for machine to come up
	I0916 10:21:47.384623   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:47.385017   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:47.385044   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:47.384985   12287 retry.go:31] will retry after 669.1436ms: waiting for machine to come up
	I0916 10:21:48.056603   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:48.057159   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:48.057183   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:48.057099   12287 retry.go:31] will retry after 739.217064ms: waiting for machine to come up
	I0916 10:21:48.798348   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:48.798788   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:48.798824   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:48.798748   12287 retry.go:31] will retry after 963.828739ms: waiting for machine to come up
	I0916 10:21:49.763677   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:49.764095   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:49.764120   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:49.764043   12287 retry.go:31] will retry after 1.625531991s: waiting for machine to come up
	I0916 10:21:51.391980   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:51.392322   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:51.392343   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:51.392285   12287 retry.go:31] will retry after 1.960554167s: waiting for machine to come up
	I0916 10:21:53.354469   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:53.354989   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:53.355016   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:53.354937   12287 retry.go:31] will retry after 2.035806393s: waiting for machine to come up
	I0916 10:21:55.393065   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:55.393432   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:55.393451   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:55.393400   12287 retry.go:31] will retry after 3.028756428s: waiting for machine to come up
	I0916 10:21:58.424174   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:21:58.424544   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:21:58.424577   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:21:58.424517   12287 retry.go:31] will retry after 3.769682763s: waiting for machine to come up
	I0916 10:22:02.198084   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:02.198470   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find current IP address of domain addons-001438 in network mk-addons-001438
	I0916 10:22:02.198492   12265 main.go:141] libmachine: (addons-001438) DBG | I0916 10:22:02.198430   12287 retry.go:31] will retry after 5.547519077s: waiting for machine to come up
	I0916 10:22:07.750830   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:07.751191   12265 main.go:141] libmachine: (addons-001438) Found IP for machine: 192.168.39.72
	I0916 10:22:07.751209   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has current primary IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:07.751215   12265 main.go:141] libmachine: (addons-001438) Reserving static IP address...
	I0916 10:22:07.751548   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find host DHCP lease matching {name: "addons-001438", mac: "52:54:00:9c:55:19", ip: "192.168.39.72"} in network mk-addons-001438
	I0916 10:22:07.821469   12265 main.go:141] libmachine: (addons-001438) DBG | Getting to WaitForSSH function...
	I0916 10:22:07.821506   12265 main.go:141] libmachine: (addons-001438) Reserved static IP address: 192.168.39.72
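Once the DHCP lease shows up, the driver reserves that address for the VM's MAC so it stays stable across restarts. A hedged sketch of the equivalent manual steps with virsh (the net-update syntax follows the libvirt documentation, not the log):

    # see the lease the guest was handed
    virsh net-dhcp-leases mk-addons-001438

    # pin 192.168.39.72 to the VM's MAC in the same network
    virsh net-update mk-addons-001438 add ip-dhcp-host \
      "<host mac='52:54:00:9c:55:19' name='addons-001438' ip='192.168.39.72'/>" \
      --live --config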
	I0916 10:22:07.821523   12265 main.go:141] libmachine: (addons-001438) Waiting for SSH to be available...
	I0916 10:22:07.823797   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:07.824029   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438
	I0916 10:22:07.824057   12265 main.go:141] libmachine: (addons-001438) DBG | unable to find defined IP address of network mk-addons-001438 interface with MAC address 52:54:00:9c:55:19
	I0916 10:22:07.824199   12265 main.go:141] libmachine: (addons-001438) DBG | Using SSH client type: external
	I0916 10:22:07.824226   12265 main.go:141] libmachine: (addons-001438) DBG | Using SSH private key: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa (-rw-------)
	I0916 10:22:07.824261   12265 main.go:141] libmachine: (addons-001438) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0916 10:22:07.824273   12265 main.go:141] libmachine: (addons-001438) DBG | About to run SSH command:
	I0916 10:22:07.824297   12265 main.go:141] libmachine: (addons-001438) DBG | exit 0
	I0916 10:22:07.835394   12265 main.go:141] libmachine: (addons-001438) DBG | SSH cmd err, output: exit status 255: 
	I0916 10:22:07.835415   12265 main.go:141] libmachine: (addons-001438) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0916 10:22:07.835421   12265 main.go:141] libmachine: (addons-001438) DBG | command : exit 0
	I0916 10:22:07.835428   12265 main.go:141] libmachine: (addons-001438) DBG | err     : exit status 255
	I0916 10:22:07.835435   12265 main.go:141] libmachine: (addons-001438) DBG | output  : 
	I0916 10:22:10.838181   12265 main.go:141] libmachine: (addons-001438) DBG | Getting to WaitForSSH function...
	I0916 10:22:10.840410   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:10.840805   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:10.840830   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:10.840953   12265 main.go:141] libmachine: (addons-001438) DBG | Using SSH client type: external
	I0916 10:22:10.840980   12265 main.go:141] libmachine: (addons-001438) DBG | Using SSH private key: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa (-rw-------)
	I0916 10:22:10.841012   12265 main.go:141] libmachine: (addons-001438) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.72 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0916 10:22:10.841026   12265 main.go:141] libmachine: (addons-001438) DBG | About to run SSH command:
	I0916 10:22:10.841039   12265 main.go:141] libmachine: (addons-001438) DBG | exit 0
	I0916 10:22:10.969218   12265 main.go:141] libmachine: (addons-001438) DBG | SSH cmd err, output: <nil>: 
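The WaitForSSH loop shells out to the system ssh client with the option set shown in the DBG lines above; the first probe fails with exit status 255 while the guest is still booting, then succeeds once sshd is up. Reassembled as a standalone command (a sketch; the key path and address are taken from the log):

    ssh -F /dev/null \
      -o ConnectionAttempts=3 -o ConnectTimeout=10 \
      -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet \
      -o PasswordAuthentication=no -o ServerAliveInterval=60 \
      -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
      -o IdentitiesOnly=yes \
      -i /home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa \
      -p 22 docker@192.168.39.72 'exit 0'
    echo $?   # 0 once the guest is reachable; 255 while it is still coming up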
	I0916 10:22:10.969498   12265 main.go:141] libmachine: (addons-001438) KVM machine creation complete!
	I0916 10:22:10.969791   12265 main.go:141] libmachine: (addons-001438) Calling .GetConfigRaw
	I0916 10:22:10.970351   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:10.970568   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:10.970704   12265 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0916 10:22:10.970716   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:10.971844   12265 main.go:141] libmachine: Detecting operating system of created instance...
	I0916 10:22:10.971857   12265 main.go:141] libmachine: Waiting for SSH to be available...
	I0916 10:22:10.971863   12265 main.go:141] libmachine: Getting to WaitForSSH function...
	I0916 10:22:10.971871   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:10.973963   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:10.974287   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:10.974322   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:10.974443   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:10.974600   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:10.974766   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:10.974897   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:10.975056   12265 main.go:141] libmachine: Using SSH client type: native
	I0916 10:22:10.975258   12265 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.72 22 <nil> <nil>}
	I0916 10:22:10.975270   12265 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0916 10:22:11.084303   12265 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 10:22:11.084322   12265 main.go:141] libmachine: Detecting the provisioner...
	I0916 10:22:11.084329   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:11.086985   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.087399   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:11.087449   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.087637   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:11.087805   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:11.087957   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:11.088052   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:11.088212   12265 main.go:141] libmachine: Using SSH client type: native
	I0916 10:22:11.088404   12265 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.72 22 <nil> <nil>}
	I0916 10:22:11.088420   12265 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0916 10:22:11.197622   12265 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0916 10:22:11.197666   12265 main.go:141] libmachine: found compatible host: buildroot
	I0916 10:22:11.197674   12265 main.go:141] libmachine: Provisioning with buildroot...
	I0916 10:22:11.197683   12265 main.go:141] libmachine: (addons-001438) Calling .GetMachineName
	I0916 10:22:11.197922   12265 buildroot.go:166] provisioning hostname "addons-001438"
	I0916 10:22:11.197936   12265 main.go:141] libmachine: (addons-001438) Calling .GetMachineName
	I0916 10:22:11.198131   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:11.200614   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.200955   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:11.200988   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.201100   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:11.201269   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:11.201396   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:11.201536   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:11.201681   12265 main.go:141] libmachine: Using SSH client type: native
	I0916 10:22:11.201878   12265 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.72 22 <nil> <nil>}
	I0916 10:22:11.201891   12265 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-001438 && echo "addons-001438" | sudo tee /etc/hostname
	I0916 10:22:11.329393   12265 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-001438
	
	I0916 10:22:11.329423   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:11.332085   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.332370   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:11.332397   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.332557   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:11.332746   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:11.332868   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:11.332999   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:11.333118   12265 main.go:141] libmachine: Using SSH client type: native
	I0916 10:22:11.333336   12265 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.72 22 <nil> <nil>}
	I0916 10:22:11.333353   12265 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-001438' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-001438/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-001438' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:22:11.454462   12265 main.go:141] libmachine: SSH cmd err, output: <nil>: 
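The shell fragment above only ensures the new hostname resolves locally: if no /etc/hosts entry already ends in addons-001438, it rewrites an existing 127.0.1.1 line or appends one. A quick way to verify on the guest (illustrative, not part of the test run):

    grep -n 'addons-001438' /etc/hosts
    hostname    # should print addons-001438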
	I0916 10:22:11.454486   12265 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3851/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3851/.minikube}
	I0916 10:22:11.454539   12265 buildroot.go:174] setting up certificates
	I0916 10:22:11.454553   12265 provision.go:84] configureAuth start
	I0916 10:22:11.454562   12265 main.go:141] libmachine: (addons-001438) Calling .GetMachineName
	I0916 10:22:11.454823   12265 main.go:141] libmachine: (addons-001438) Calling .GetIP
	I0916 10:22:11.457458   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.457872   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:11.457902   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.458065   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:11.460166   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.460456   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:11.460484   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.460579   12265 provision.go:143] copyHostCerts
	I0916 10:22:11.460674   12265 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem (1123 bytes)
	I0916 10:22:11.460835   12265 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem (1679 bytes)
	I0916 10:22:11.460925   12265 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem (1082 bytes)
	I0916 10:22:11.460997   12265 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem org=jenkins.addons-001438 san=[127.0.0.1 192.168.39.72 addons-001438 localhost minikube]
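The server certificate is minted with the SAN list shown above (loopback, the VM IP, the machine name, localhost, minikube). A hedged openssl sketch for confirming the SANs against the path from the log:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'
    # expected entries: IP:127.0.0.1, IP:192.168.39.72, DNS:addons-001438, DNS:localhost, DNS:minikube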
	I0916 10:22:11.639072   12265 provision.go:177] copyRemoteCerts
	I0916 10:22:11.639141   12265 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:22:11.639169   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:11.641767   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.642050   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:11.642076   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.642240   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:11.642415   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:11.642519   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:11.642635   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:11.727509   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 10:22:11.752436   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 10:22:11.776436   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 10:22:11.799597   12265 provision.go:87] duration metric: took 345.032702ms to configureAuth
	I0916 10:22:11.799626   12265 buildroot.go:189] setting minikube options for container-runtime
	I0916 10:22:11.799813   12265 config.go:182] Loaded profile config "addons-001438": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:22:11.799904   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:11.802386   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.802675   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:11.802700   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:11.802854   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:11.803047   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:11.803187   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:11.803323   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:11.803504   12265 main.go:141] libmachine: Using SSH client type: native
	I0916 10:22:11.803689   12265 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.72 22 <nil> <nil>}
	I0916 10:22:11.803704   12265 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 10:22:12.030350   12265 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
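The command drops a one-line environment file that the CRI-O unit on the minikube ISO picks up, marking the service CIDR 10.96.0.0/12 as an insecure registry, and then restarts crio. To check the result on the guest (a sketch; the EnvironmentFile wiring is assumed from the ISO's systemd setup):

    cat /etc/sysconfig/crio.minikube
    systemctl cat crio         # should reference the sysconfig file (assumption)
    systemctl is-active crio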
	
	I0916 10:22:12.030374   12265 main.go:141] libmachine: Checking connection to Docker...
	I0916 10:22:12.030382   12265 main.go:141] libmachine: (addons-001438) Calling .GetURL
	I0916 10:22:12.031607   12265 main.go:141] libmachine: (addons-001438) DBG | Using libvirt version 6000000
	I0916 10:22:12.034008   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.034296   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:12.034325   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.034451   12265 main.go:141] libmachine: Docker is up and running!
	I0916 10:22:12.034463   12265 main.go:141] libmachine: Reticulating splines...
	I0916 10:22:12.034470   12265 client.go:171] duration metric: took 28.959474569s to LocalClient.Create
	I0916 10:22:12.034491   12265 start.go:167] duration metric: took 28.959547297s to libmachine.API.Create "addons-001438"
	I0916 10:22:12.034500   12265 start.go:293] postStartSetup for "addons-001438" (driver="kvm2")
	I0916 10:22:12.034509   12265 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:22:12.034535   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:12.034731   12265 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:22:12.034762   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:12.036747   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.037041   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:12.037068   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.037200   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:12.037344   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:12.037486   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:12.037623   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:12.123403   12265 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:22:12.127815   12265 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 10:22:12.127838   12265 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3851/.minikube/addons for local assets ...
	I0916 10:22:12.127904   12265 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3851/.minikube/files for local assets ...
	I0916 10:22:12.127926   12265 start.go:296] duration metric: took 93.420957ms for postStartSetup
	I0916 10:22:12.127955   12265 main.go:141] libmachine: (addons-001438) Calling .GetConfigRaw
	I0916 10:22:12.128519   12265 main.go:141] libmachine: (addons-001438) Calling .GetIP
	I0916 10:22:12.131232   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.131510   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:12.131547   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.131776   12265 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/config.json ...
	I0916 10:22:12.131949   12265 start.go:128] duration metric: took 29.075237515s to createHost
	I0916 10:22:12.131975   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:12.133967   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.134281   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:12.134305   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.134418   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:12.134606   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:12.134753   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:12.134877   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:12.135036   12265 main.go:141] libmachine: Using SSH client type: native
	I0916 10:22:12.135185   12265 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.72 22 <nil> <nil>}
	I0916 10:22:12.135202   12265 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 10:22:12.245734   12265 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726482132.226578519
	
	I0916 10:22:12.245757   12265 fix.go:216] guest clock: 1726482132.226578519
	I0916 10:22:12.245764   12265 fix.go:229] Guest: 2024-09-16 10:22:12.226578519 +0000 UTC Remote: 2024-09-16 10:22:12.131960304 +0000 UTC m=+29.174301435 (delta=94.618215ms)
	I0916 10:22:12.245784   12265 fix.go:200] guest clock delta is within tolerance: 94.618215ms
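The skew check runs date +%s.%N on the guest and compares it with the host-side timestamp; since both fall in the same second here, the reported delta is just the difference of the fractional parts. Reproducing the arithmetic:

    echo '1726482132.226578519 - 1726482132.131960304' | bc
    # .094618215  -> the 94.618215ms delta reported above, within tolerance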
	I0916 10:22:12.245790   12265 start.go:83] releasing machines lock for "addons-001438", held for 29.189143417s
	I0916 10:22:12.245809   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:12.246014   12265 main.go:141] libmachine: (addons-001438) Calling .GetIP
	I0916 10:22:12.248419   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.248678   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:12.248704   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.248832   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:12.249314   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:12.249485   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:12.249586   12265 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:22:12.249653   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:12.249707   12265 ssh_runner.go:195] Run: cat /version.json
	I0916 10:22:12.249728   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:12.252249   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.252497   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.252634   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:12.252657   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.252757   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:12.252904   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:12.252922   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:12.252925   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:12.253038   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:12.253093   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:12.253241   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:12.253258   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:12.253386   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:12.253515   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:12.362639   12265 ssh_runner.go:195] Run: systemctl --version
	I0916 10:22:12.368512   12265 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 10:22:12.527002   12265 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0916 10:22:12.532733   12265 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 10:22:12.532791   12265 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:22:12.548743   12265 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0916 10:22:12.548773   12265 start.go:495] detecting cgroup driver to use...
	I0916 10:22:12.548843   12265 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 10:22:12.564219   12265 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 10:22:12.578224   12265 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:22:12.578276   12265 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:22:12.591434   12265 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:22:12.604674   12265 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:22:12.712713   12265 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:22:12.868881   12265 docker.go:233] disabling docker service ...
	I0916 10:22:12.868945   12265 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:22:12.883262   12265 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:22:12.896034   12265 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:22:13.009183   12265 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:22:13.123591   12265 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:22:13.137411   12265 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:22:13.155768   12265 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 10:22:13.155832   12265 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:22:13.166378   12265 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 10:22:13.166436   12265 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:22:13.177199   12265 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:22:13.187753   12265 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:22:13.198460   12265 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:22:13.209356   12265 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:22:13.220222   12265 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:22:13.237721   12265 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:22:13.247992   12265 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:22:13.257214   12265 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0916 10:22:13.257274   12265 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0916 10:22:13.269843   12265 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:22:13.279361   12265 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:22:13.392424   12265 ssh_runner.go:195] Run: sudo systemctl restart crio
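The sequence above points CRI-O at the registry.k8s.io/pause:3.10 pause image and the cgroupfs cgroup manager by sed-editing /etc/crio/crio.conf.d/02-crio.conf, then restarts the service. A rough stand-alone Go equivalent of the two main sed substitutions (a sketch, not minikube's code; it assumes the drop-in file already exists and that the process may write it):

// crio_conf.go: apply the same pause_image / cgroup_manager rewrites as the sed commands above.
package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Mirror: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	// Mirror: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, data, 0o644); err != nil {
		panic(err)
	}
}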
	I0916 10:22:13.489919   12265 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 10:22:13.490002   12265 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 10:22:13.495269   12265 start.go:563] Will wait 60s for crictl version
	I0916 10:22:13.495342   12265 ssh_runner.go:195] Run: which crictl
	I0916 10:22:13.499375   12265 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:22:13.543037   12265 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0916 10:22:13.543161   12265 ssh_runner.go:195] Run: crio --version
	I0916 10:22:13.571422   12265 ssh_runner.go:195] Run: crio --version
	I0916 10:22:13.600892   12265 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0916 10:22:13.602164   12265 main.go:141] libmachine: (addons-001438) Calling .GetIP
	I0916 10:22:13.604725   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:13.605053   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:13.605090   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:13.605239   12265 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0916 10:22:13.609153   12265 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:22:13.621451   12265 kubeadm.go:883] updating cluster {Name:addons-001438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-001438 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.72 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 10:22:13.621560   12265 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:22:13.621616   12265 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:22:13.653616   12265 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0916 10:22:13.653695   12265 ssh_runner.go:195] Run: which lz4
	I0916 10:22:13.657722   12265 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0916 10:22:13.661843   12265 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0916 10:22:13.661873   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0916 10:22:14.968986   12265 crio.go:462] duration metric: took 1.311298771s to copy over tarball
	I0916 10:22:14.969053   12265 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0916 10:22:17.073836   12265 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.104757919s)
	I0916 10:22:17.073872   12265 crio.go:469] duration metric: took 2.104858266s to extract the tarball
	I0916 10:22:17.073881   12265 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0916 10:22:17.110316   12265 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:22:17.150207   12265 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 10:22:17.150233   12265 cache_images.go:84] Images are preloaded, skipping loading
	I0916 10:22:17.150241   12265 kubeadm.go:934] updating node { 192.168.39.72 8443 v1.31.1 crio true true} ...
	I0916 10:22:17.150343   12265 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-001438 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.72
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-001438 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 10:22:17.150424   12265 ssh_runner.go:195] Run: crio config
	I0916 10:22:17.195725   12265 cni.go:84] Creating CNI manager for ""
	I0916 10:22:17.195746   12265 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0916 10:22:17.195756   12265 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 10:22:17.195774   12265 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.72 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-001438 NodeName:addons-001438 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.72"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.72 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 10:22:17.195915   12265 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.72
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-001438"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.72
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.72"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
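The generated kubeadm config above uses podSubnet 10.244.0.0/16 and serviceSubnet 10.96.0.0/12. A small stand-alone check (not part of the test) that those two ranges do not overlap, using only the standard library:

// cidr_check.go: verify the pod and service subnets from the kubeadm config above are disjoint.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	pod := netip.MustParsePrefix("10.244.0.0/16") // podSubnet from the config above
	svc := netip.MustParsePrefix("10.96.0.0/12")  // serviceSubnet from the config above
	fmt.Println("pod and service CIDRs overlap:", pod.Overlaps(svc)) // prints false
}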
	
	I0916 10:22:17.195969   12265 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:22:17.206079   12265 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:22:17.206139   12265 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 10:22:17.215719   12265 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0916 10:22:17.232125   12265 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:22:17.248126   12265 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0916 10:22:17.264165   12265 ssh_runner.go:195] Run: grep 192.168.39.72	control-plane.minikube.internal$ /etc/hosts
	I0916 10:22:17.267727   12265 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.72	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:22:17.279787   12265 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:22:17.393283   12265 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:22:17.410756   12265 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438 for IP: 192.168.39.72
	I0916 10:22:17.410774   12265 certs.go:194] generating shared ca certs ...
	I0916 10:22:17.410794   12265 certs.go:226] acquiring lock for ca certs: {Name:mkc5eb18b90c7d501da61b378999fd65b785fd5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:17.410949   12265 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key
	I0916 10:22:17.480758   12265 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt ...
	I0916 10:22:17.480787   12265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt: {Name:mkc291c3a986acc7f4de9183c4ef6d249d8de5a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:17.480965   12265 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key ...
	I0916 10:22:17.480980   12265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key: {Name:mk56bc8b146d891ba5f741ad0bd339fffdb85989 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:17.481075   12265 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key
	I0916 10:22:17.673219   12265 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.crt ...
	I0916 10:22:17.673250   12265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.crt: {Name:mk8d6878492eab0d99f630fc495324e3b843781a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:17.673403   12265 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key ...
	I0916 10:22:17.673414   12265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key: {Name:mk082b50320d253da8f01ad2454b69492e000fb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:17.673482   12265 certs.go:256] generating profile certs ...
	I0916 10:22:17.673531   12265 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/client.key
	I0916 10:22:17.673544   12265 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/client.crt with IP's: []
	I0916 10:22:17.921779   12265 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/client.crt ...
	I0916 10:22:17.921811   12265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/client.crt: {Name:mk9172b9e8f20da0dd399e583d4f0391784c25bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:17.921970   12265 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/client.key ...
	I0916 10:22:17.921981   12265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/client.key: {Name:mk65d84f1710f9ab616402324cb2a91f749aa3d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:17.922048   12265 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/apiserver.key.b670da03
	I0916 10:22:17.922066   12265 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/apiserver.crt.b670da03 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.72]
	I0916 10:22:17.984449   12265 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/apiserver.crt.b670da03 ...
	I0916 10:22:17.984473   12265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/apiserver.crt.b670da03: {Name:mk697c0092db030ad4df50333f6d1db035d298e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:17.984627   12265 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/apiserver.key.b670da03 ...
	I0916 10:22:17.984638   12265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/apiserver.key.b670da03: {Name:mkf74035add612ea1883fde9b662a919a8d7c5c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:17.984705   12265 certs.go:381] copying /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/apiserver.crt.b670da03 -> /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/apiserver.crt
	I0916 10:22:17.984774   12265 certs.go:385] copying /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/apiserver.key.b670da03 -> /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/apiserver.key
	I0916 10:22:17.984818   12265 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/proxy-client.key
	I0916 10:22:17.984834   12265 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/proxy-client.crt with IP's: []
	I0916 10:22:18.105094   12265 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/proxy-client.crt ...
	I0916 10:22:18.105122   12265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/proxy-client.crt: {Name:mk12379583893d02aa599284bf7c2e673e4a585f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:18.105290   12265 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/proxy-client.key ...
	I0916 10:22:18.105300   12265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/proxy-client.key: {Name:mkddc10c89aa36609a41c940a83606fa36ac69df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
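The certs.go/crypto.go steps above generate the shared CAs and the signed profile certificates for the cluster. As a rough illustration of the underlying mechanism (a sketch with an assumed CommonName and lifetime, not minikube's actual crypto.go), a self-signed CA pair can be produced with the standard library like this:

// gen_ca.go: minimal self-signed CA certificate/key generation sketch.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"}, // name assumed for illustration
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour), // lifetime assumed
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	// Self-signed: template and parent are the same certificate.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	if err := os.WriteFile("ca.crt", certPEM, 0o644); err != nil {
		panic(err)
	}
	if err := os.WriteFile("ca.key", keyPEM, 0o600); err != nil {
		panic(err)
	}
}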
	I0916 10:22:18.105453   12265 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:22:18.105484   12265 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem (1082 bytes)
	I0916 10:22:18.105509   12265 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:22:18.105531   12265 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem (1679 bytes)
	I0916 10:22:18.106125   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:22:18.132592   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:22:18.173674   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:22:18.200455   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:22:18.223366   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0916 10:22:18.246242   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 10:22:18.269411   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:22:18.292157   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 10:22:18.314508   12265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:22:18.337365   12265 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 10:22:18.353286   12265 ssh_runner.go:195] Run: openssl version
	I0916 10:22:18.358942   12265 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:22:18.369103   12265 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:22:18.373299   12265 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:22:18.373346   12265 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:22:18.378948   12265 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
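The `openssl x509 -hash` step above yields the subject hash (b5213941) that names the /etc/ssl/certs symlink pointing at minikubeCA.pem. A small sketch (not part of the test) that parses the installed PEM and prints the fields being trusted:

// ca_inspect.go: parse the CA copied to /usr/share/ca-certificates above and print its identity.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// /etc/ssl/certs/b5213941.0 is OpenSSL's subject-hash symlink to this certificate.
	fmt.Printf("subject=%s notBefore=%s notAfter=%s isCA=%v\n",
		cert.Subject, cert.NotBefore, cert.NotAfter, cert.IsCA)
}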
	I0916 10:22:18.389436   12265 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:22:18.393342   12265 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 10:22:18.393387   12265 kubeadm.go:392] StartCluster: {Name:addons-001438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-001438 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.72 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:22:18.393452   12265 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 10:22:18.393509   12265 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 10:22:18.429056   12265 cri.go:89] found id: ""
	I0916 10:22:18.429118   12265 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 10:22:18.439123   12265 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 10:22:18.448797   12265 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 10:22:18.458281   12265 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 10:22:18.458303   12265 kubeadm.go:157] found existing configuration files:
	
	I0916 10:22:18.458357   12265 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 10:22:18.467304   12265 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 10:22:18.467373   12265 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 10:22:18.476476   12265 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 10:22:18.485402   12265 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 10:22:18.485467   12265 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 10:22:18.494643   12265 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 10:22:18.503578   12265 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 10:22:18.503657   12265 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 10:22:18.512633   12265 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 10:22:18.521391   12265 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 10:22:18.521454   12265 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 10:22:18.530381   12265 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0916 10:22:18.584992   12265 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 10:22:18.585058   12265 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 10:22:18.700906   12265 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 10:22:18.701050   12265 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 10:22:18.701195   12265 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 10:22:18.712665   12265 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 10:22:18.808124   12265 out.go:235]   - Generating certificates and keys ...
	I0916 10:22:18.808238   12265 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 10:22:18.808308   12265 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 10:22:18.808390   12265 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 10:22:18.884612   12265 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 10:22:19.103481   12265 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 10:22:19.230175   12265 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 10:22:19.422850   12265 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 10:22:19.423077   12265 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-001438 localhost] and IPs [192.168.39.72 127.0.0.1 ::1]
	I0916 10:22:19.499430   12265 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 10:22:19.499746   12265 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-001438 localhost] and IPs [192.168.39.72 127.0.0.1 ::1]
	I0916 10:22:19.689533   12265 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 10:22:19.770560   12265 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 10:22:20.159783   12265 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 10:22:20.160053   12265 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 10:22:20.575897   12265 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 10:22:20.728566   12265 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 10:22:21.092038   12265 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 10:22:21.382957   12265 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 10:22:21.446452   12265 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 10:22:21.447068   12265 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 10:22:21.451577   12265 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 10:22:21.454426   12265 out.go:235]   - Booting up control plane ...
	I0916 10:22:21.454540   12265 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 10:22:21.454614   12265 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 10:22:21.454722   12265 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 10:22:21.468531   12265 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 10:22:21.475700   12265 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 10:22:21.475767   12265 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 10:22:21.606009   12265 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 10:22:21.606143   12265 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 10:22:22.124369   12265 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 517.881759ms
	I0916 10:22:22.124492   12265 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 10:22:27.123389   12265 kubeadm.go:310] [api-check] The API server is healthy after 5.002163965s
	I0916 10:22:27.138636   12265 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 10:22:27.154171   12265 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 10:22:27.185604   12265 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 10:22:27.185839   12265 kubeadm.go:310] [mark-control-plane] Marking the node addons-001438 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 10:22:27.198602   12265 kubeadm.go:310] [bootstrap-token] Using token: os1o8m.q16efzg2rjnkpln8
	I0916 10:22:27.199966   12265 out.go:235]   - Configuring RBAC rules ...
	I0916 10:22:27.200085   12265 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 10:22:27.209733   12265 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 10:22:27.218630   12265 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 10:22:27.222473   12265 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 10:22:27.226151   12265 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 10:22:27.230516   12265 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 10:22:27.529586   12265 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 10:22:27.967178   12265 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 10:22:28.529936   12265 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 10:22:28.529960   12265 kubeadm.go:310] 
	I0916 10:22:28.530028   12265 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 10:22:28.530044   12265 kubeadm.go:310] 
	I0916 10:22:28.530137   12265 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 10:22:28.530173   12265 kubeadm.go:310] 
	I0916 10:22:28.530227   12265 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 10:22:28.530307   12265 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 10:22:28.530390   12265 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 10:22:28.530397   12265 kubeadm.go:310] 
	I0916 10:22:28.530463   12265 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 10:22:28.530472   12265 kubeadm.go:310] 
	I0916 10:22:28.530525   12265 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 10:22:28.530537   12265 kubeadm.go:310] 
	I0916 10:22:28.530609   12265 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 10:22:28.530728   12265 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 10:22:28.530832   12265 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 10:22:28.530868   12265 kubeadm.go:310] 
	I0916 10:22:28.530981   12265 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 10:22:28.531080   12265 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 10:22:28.531091   12265 kubeadm.go:310] 
	I0916 10:22:28.531215   12265 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token os1o8m.q16efzg2rjnkpln8 \
	I0916 10:22:28.531358   12265 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:18e2e9ba1c06caae6264fad234e64aee68efc10986a3ab0ed9e448768962daf7 \
	I0916 10:22:28.531389   12265 kubeadm.go:310] 	--control-plane 
	I0916 10:22:28.531397   12265 kubeadm.go:310] 
	I0916 10:22:28.531518   12265 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 10:22:28.531528   12265 kubeadm.go:310] 
	I0916 10:22:28.531639   12265 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token os1o8m.q16efzg2rjnkpln8 \
	I0916 10:22:28.531783   12265 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:18e2e9ba1c06caae6264fad234e64aee68efc10986a3ab0ed9e448768962daf7 
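The join commands printed by kubeadm above carry a --discovery-token-ca-cert-hash, which is the SHA-256 of the CA certificate's DER-encoded SubjectPublicKeyInfo. A short sketch (not part of the test) that recomputes that value from the CA file minikube copied onto the node:

// ca_hash.go: recompute the discovery-token CA cert hash shown in the join command above.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // path used by minikube above
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA certificate.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}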
	I0916 10:22:28.532220   12265 kubeadm.go:310] W0916 10:22:18.568727     816 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:22:28.532498   12265 kubeadm.go:310] W0916 10:22:18.569597     816 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:22:28.532623   12265 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 10:22:28.532635   12265 cni.go:84] Creating CNI manager for ""
	I0916 10:22:28.532642   12265 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0916 10:22:28.534239   12265 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0916 10:22:28.535682   12265 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0916 10:22:28.547306   12265 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
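The 496-byte /etc/cni/net.d/1-k8s.conflist written above is the bridge CNI configuration referenced by the log. A small sketch (not part of the test) that parses a conflist of that shape and lists its plugins; the struct fields follow the CNI conflist format, and the exact plugin set is an assumption:

// cni_check.go: read the conflist installed above and report which CNI plugins it chains.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type conflist struct {
	CNIVersion string `json:"cniVersion"`
	Name       string `json:"name"`
	Plugins    []struct {
		Type string `json:"type"`
	} `json:"plugins"`
}

func main() {
	data, err := os.ReadFile("/etc/cni/net.d/1-k8s.conflist")
	if err != nil {
		panic(err)
	}
	var c conflist
	if err := json.Unmarshal(data, &c); err != nil {
		panic(err)
	}
	fmt.Printf("network %q (cniVersion %s):\n", c.Name, c.CNIVersion)
	for _, p := range c.Plugins {
		fmt.Println("  plugin:", p.Type) // a bridge CNI typically chains "bridge" and "portmap"
	}
}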
	I0916 10:22:28.567029   12265 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 10:22:28.567083   12265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:22:28.567116   12265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-001438 minikube.k8s.io/updated_at=2024_09_16T10_22_28_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=addons-001438 minikube.k8s.io/primary=true
	I0916 10:22:28.599898   12265 ops.go:34] apiserver oom_adj: -16
	I0916 10:22:28.718193   12265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:22:29.219097   12265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:22:29.718331   12265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:22:30.219213   12265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:22:30.718728   12265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:22:31.218997   12265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:22:31.719218   12265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:22:32.218543   12265 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:22:32.335651   12265 kubeadm.go:1113] duration metric: took 3.768632423s to wait for elevateKubeSystemPrivileges
	I0916 10:22:32.335685   12265 kubeadm.go:394] duration metric: took 13.942299744s to StartCluster
	I0916 10:22:32.335709   12265 settings.go:142] acquiring lock: {Name:mka3093e7066e81538e0a2685a28fdafde8d951b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:32.335851   12265 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3851/kubeconfig
	I0916 10:22:32.336274   12265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/kubeconfig: {Name:mkd6460249badead51f07eabf604da45528f6a24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:32.336491   12265 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 10:22:32.336522   12265 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.72 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 10:22:32.336653   12265 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0916 10:22:32.336724   12265 config.go:182] Loaded profile config "addons-001438": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:22:32.336769   12265 addons.go:69] Setting default-storageclass=true in profile "addons-001438"
	I0916 10:22:32.336779   12265 addons.go:69] Setting ingress-dns=true in profile "addons-001438"
	I0916 10:22:32.336787   12265 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-001438"
	I0916 10:22:32.336780   12265 addons.go:69] Setting ingress=true in profile "addons-001438"
	I0916 10:22:32.336793   12265 addons.go:69] Setting cloud-spanner=true in profile "addons-001438"
	I0916 10:22:32.336813   12265 addons.go:69] Setting inspektor-gadget=true in profile "addons-001438"
	I0916 10:22:32.336820   12265 addons.go:69] Setting gcp-auth=true in profile "addons-001438"
	I0916 10:22:32.336832   12265 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-001438"
	I0916 10:22:32.336835   12265 addons.go:234] Setting addon cloud-spanner=true in "addons-001438"
	I0916 10:22:32.336828   12265 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-001438"
	I0916 10:22:32.336844   12265 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-001438"
	I0916 10:22:32.336825   12265 addons.go:234] Setting addon inspektor-gadget=true in "addons-001438"
	I0916 10:22:32.336853   12265 addons.go:69] Setting registry=true in profile "addons-001438"
	I0916 10:22:32.336867   12265 addons.go:234] Setting addon registry=true in "addons-001438"
	I0916 10:22:32.336883   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.336888   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.336896   12265 addons.go:69] Setting helm-tiller=true in profile "addons-001438"
	I0916 10:22:32.336908   12265 addons.go:234] Setting addon helm-tiller=true in "addons-001438"
	I0916 10:22:32.336937   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.336940   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.336844   12265 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-001438"
	I0916 10:22:32.337250   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.337262   12265 addons.go:69] Setting volcano=true in profile "addons-001438"
	I0916 10:22:32.337273   12265 addons.go:234] Setting addon volcano=true in "addons-001438"
	I0916 10:22:32.337281   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.337295   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.337315   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.337328   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.336808   12265 addons.go:234] Setting addon ingress=true in "addons-001438"
	I0916 10:22:32.337347   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.337348   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.337365   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.337367   12265 addons.go:69] Setting volumesnapshots=true in profile "addons-001438"
	I0916 10:22:32.337379   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.337381   12265 addons.go:234] Setting addon volumesnapshots=true in "addons-001438"
	I0916 10:22:32.337412   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.336888   12265 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-001438"
	I0916 10:22:32.337442   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.336769   12265 addons.go:69] Setting yakd=true in profile "addons-001438"
	I0916 10:22:32.337489   12265 addons.go:234] Setting addon yakd=true in "addons-001438"
	I0916 10:22:32.337633   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.337660   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.336835   12265 addons.go:69] Setting metrics-server=true in profile "addons-001438"
	I0916 10:22:32.337353   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.337714   12265 addons.go:234] Setting addon metrics-server=true in "addons-001438"
	I0916 10:22:32.337741   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.337700   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.337795   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.336844   12265 mustload.go:65] Loading cluster: addons-001438
	I0916 10:22:32.336824   12265 addons.go:69] Setting storage-provisioner=true in profile "addons-001438"
	I0916 10:22:32.337840   12265 addons.go:234] Setting addon storage-provisioner=true in "addons-001438"
	I0916 10:22:32.337328   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.337881   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.336804   12265 addons.go:234] Setting addon ingress-dns=true in "addons-001438"
	I0916 10:22:32.337251   12265 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-001438"
	I0916 10:22:32.337944   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.338072   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.338099   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.338127   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.338301   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.338331   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.338413   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.338421   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.338448   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.338455   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.338446   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.338765   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.338792   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.338818   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.338845   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.338995   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.339304   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.339363   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.342405   12265 out.go:177] * Verifying Kubernetes components...
	I0916 10:22:32.343665   12265 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:22:32.357106   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34833
	I0916 10:22:32.357444   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35947
	I0916 10:22:32.357655   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37677
	I0916 10:22:32.357797   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.357897   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.358211   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.358403   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.358419   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.358562   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.358574   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.358633   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37893
	I0916 10:22:32.358790   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.358949   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.358960   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.359007   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44299
	I0916 10:22:32.369699   12265 config.go:182] Loaded profile config "addons-001438": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:22:32.369748   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.369818   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.370020   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.370060   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.370069   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.370101   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.370194   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.370269   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.370379   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.370390   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.370789   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.370827   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.370908   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.370969   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.371094   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.371111   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.371475   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.371508   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.371573   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.371638   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.371663   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.371731   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.386697   12265 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-001438"
	I0916 10:22:32.386747   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.386763   12265 addons.go:234] Setting addon default-storageclass=true in "addons-001438"
	I0916 10:22:32.386810   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.387114   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.387173   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.387252   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.387291   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.408433   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37283
	I0916 10:22:32.409200   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.409836   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.409856   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.410249   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.410830   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.410872   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.411145   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42803
	I0916 10:22:32.411578   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.413298   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.413319   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.414168   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34147
	I0916 10:22:32.414190   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45747
	I0916 10:22:32.414292   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36809
	I0916 10:22:32.414570   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.414671   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.415178   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.415195   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.415681   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.416214   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.416252   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.416442   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42029
	I0916 10:22:32.416592   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.417197   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.417231   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.417415   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40175
	I0916 10:22:32.417454   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.417595   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.417608   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.417843   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.417917   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.418037   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.418050   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.418410   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.418443   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.418409   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.418501   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.419031   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.419065   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.419266   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.419281   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.419404   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.419414   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.419702   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.419847   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.420545   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.421091   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.421133   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.421574   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.421979   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40845
	I0916 10:22:32.422963   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.423382   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.423399   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.423697   12265 out.go:177]   - Using image docker.io/registry:2.8.3
	I0916 10:22:32.423813   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.424320   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.424354   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.425846   12265 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0916 10:22:32.425941   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45967
	I0916 10:22:32.426062   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42039
	I0916 10:22:32.426213   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40085
	I0916 10:22:32.426381   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.426757   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.426931   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.426942   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.426976   12265 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0916 10:22:32.426992   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0916 10:22:32.427011   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.427391   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.427470   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.427489   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.427946   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.428354   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.428385   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.428598   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.428889   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.428924   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.429188   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.429202   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.429517   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.431934   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33361
	I0916 10:22:32.431987   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.432541   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.432563   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.432751   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.432883   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.432998   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.433120   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:32.433712   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.435531   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.435730   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:32.435742   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:32.435888   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:32.435899   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:32.435907   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:32.435913   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:32.436070   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:32.436085   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	W0916 10:22:32.436166   12265 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0916 10:22:32.440699   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35285
	I0916 10:22:32.441072   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.441617   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.441644   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.441979   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.442497   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.442531   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.450769   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36009
	I0916 10:22:32.451259   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.451700   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.451718   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.452549   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.453092   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.453146   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.454430   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42107
	I0916 10:22:32.454743   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.455061   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.455149   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45965
	I0916 10:22:32.455842   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.455847   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.455860   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.455871   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.455922   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.456243   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.456542   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.456622   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.456639   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.456747   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.457901   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34395
	I0916 10:22:32.458037   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.458209   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.458254   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.458704   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.458721   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.459089   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.459296   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.459533   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.460121   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:32.460511   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.460545   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.460978   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41771
	I0916 10:22:32.461180   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.461244   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.461735   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.461753   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.461805   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.462195   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46479
	I0916 10:22:32.462331   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.462809   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.464034   12265 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0916 10:22:32.464150   12265 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0916 10:22:32.464278   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.464668   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.464696   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.465237   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.466010   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.465566   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42201
	I0916 10:22:32.466246   12265 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0916 10:22:32.466259   12265 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0916 10:22:32.466276   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.467014   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.467145   12265 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 10:22:32.467235   12265 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0916 10:22:32.467359   12265 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0916 10:22:32.467370   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0916 10:22:32.467385   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.467696   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.467711   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.468100   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.468152   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.468326   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.468710   12265 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0916 10:22:32.468725   12265 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0916 10:22:32.468742   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.468966   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39539
	I0916 10:22:32.469146   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.469463   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.469917   12265 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 10:22:32.469918   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.470000   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.470971   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43697
	I0916 10:22:32.471473   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.471695   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.472001   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.472015   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.472269   12265 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0916 10:22:32.472471   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.472523   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43687
	I0916 10:22:32.472664   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.472783   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.472993   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.473106   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.473134   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.473329   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:32.473377   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:32.473597   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.473743   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.473790   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.473851   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.474147   12265 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0916 10:22:32.474163   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0916 10:22:32.474178   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.474793   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.474941   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.474955   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.475234   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:32.475510   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.475619   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.475650   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.475665   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.475824   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.476100   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.476264   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.476604   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.476644   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.476828   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:32.476940   12265 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0916 10:22:32.477612   12265 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 10:22:32.478260   12265 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 10:22:32.478276   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0916 10:22:32.478291   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.478585   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.478604   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.478880   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.479035   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.479168   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.479299   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:32.479921   12265 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:22:32.479937   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 10:22:32.479951   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.480259   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.480742   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.481958   12265 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0916 10:22:32.482834   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43787
	I0916 10:22:32.482998   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.483118   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.483310   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.483473   12265 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0916 10:22:32.483494   12265 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0916 10:22:32.483512   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.483802   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.483828   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.483888   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.483903   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.483899   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.483930   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.484092   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.484159   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.484194   12265 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0916 10:22:32.484411   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.484581   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.484636   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.484681   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:32.484892   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.484958   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.485096   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.485218   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.485248   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.485262   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.485372   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:32.485494   12265 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0916 10:22:32.485505   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0916 10:22:32.485519   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.485781   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.486028   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.486181   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.486318   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:32.487186   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.487422   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.487675   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.487695   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.487742   12265 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 10:22:32.487752   12265 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 10:22:32.487764   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.487810   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.487995   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.488225   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.488378   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:32.489702   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.490168   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.490188   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.490394   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.490571   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.490713   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.490823   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:32.492068   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.492458   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.492479   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.492686   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.492906   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.492915   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39577
	I0916 10:22:32.493044   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.493239   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:32.493450   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.493933   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.493950   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.494562   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.494891   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.496932   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.498147   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46021
	I0916 10:22:32.498828   12265 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0916 10:22:32.499232   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.499608   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.499634   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.499936   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.500124   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.500215   12265 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0916 10:22:32.500241   12265 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0916 10:22:32.500262   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.501763   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.503323   12265 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0916 10:22:32.503738   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.504260   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.504287   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.504422   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.504578   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.504721   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.504800   12265 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0916 10:22:32.504813   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0916 10:22:32.504828   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.504844   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:32.507073   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35053
	I0916 10:22:32.507489   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.507971   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.507994   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.508014   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37447
	I0916 10:22:32.508383   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.508455   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44827
	I0916 10:22:32.508996   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.509012   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.509054   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.509082   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:32.509517   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.509552   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.509551   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.509573   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.509882   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.510086   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:32.510151   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:32.510169   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:32.510570   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.510576   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:32.510696   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.510739   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.510822   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.510947   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	W0916 10:22:32.511685   12265 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:43352->192.168.39.72:22: read: connection reset by peer
	I0916 10:22:32.511711   12265 retry.go:31] will retry after 323.390168ms: ssh: handshake failed: read tcp 192.168.39.1:43352->192.168.39.72:22: read: connection reset by peer
	I0916 10:22:32.513110   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.513548   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:32.515216   12265 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0916 10:22:32.516467   12265 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0916 10:22:32.517228   12265 out.go:177]   - Using image docker.io/busybox:stable
	I0916 10:22:32.518463   12265 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0916 10:22:32.519691   12265 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0916 10:22:32.521193   12265 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0916 10:22:32.521287   12265 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 10:22:32.521309   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0916 10:22:32.521330   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.523957   12265 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0916 10:22:32.524563   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.524915   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.524939   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.525078   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.525271   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.525408   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.525548   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	W0916 10:22:32.526174   12265 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:43362->192.168.39.72:22: read: connection reset by peer
	I0916 10:22:32.526199   12265 retry.go:31] will retry after 208.869548ms: ssh: handshake failed: read tcp 192.168.39.1:43362->192.168.39.72:22: read: connection reset by peer
	I0916 10:22:32.526327   12265 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0916 10:22:32.527568   12265 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0916 10:22:32.528811   12265 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0916 10:22:32.530140   12265 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0916 10:22:32.530154   12265 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0916 10:22:32.530169   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:32.533281   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.533666   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:32.533688   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:32.533886   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:32.534072   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:32.534227   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:32.534367   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:32.700911   12265 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:22:32.700984   12265 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 10:22:32.785482   12265 node_ready.go:35] waiting up to 6m0s for node "addons-001438" to be "Ready" ...
	I0916 10:22:32.822842   12265 node_ready.go:49] node "addons-001438" has status "Ready":"True"
	I0916 10:22:32.822881   12265 node_ready.go:38] duration metric: took 37.361645ms for node "addons-001438" to be "Ready" ...
	I0916 10:22:32.822895   12265 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:22:32.861506   12265 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0916 10:22:32.861543   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0916 10:22:32.862634   12265 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-001438" in "kube-system" namespace to be "Ready" ...
	I0916 10:22:32.929832   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 10:22:32.943014   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:22:32.952437   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 10:22:32.991347   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0916 10:22:32.995067   12265 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0916 10:22:32.995096   12265 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0916 10:22:33.036627   12265 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0916 10:22:33.036657   12265 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0916 10:22:33.036890   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0916 10:22:33.060821   12265 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0916 10:22:33.060843   12265 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0916 10:22:33.069120   12265 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0916 10:22:33.069156   12265 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0916 10:22:33.070018   12265 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0916 10:22:33.070038   12265 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0916 10:22:33.073512   12265 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0916 10:22:33.073535   12265 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0916 10:22:33.137058   12265 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0916 10:22:33.137088   12265 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0916 10:22:33.226855   12265 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0916 10:22:33.226884   12265 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0916 10:22:33.270492   12265 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0916 10:22:33.270513   12265 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0916 10:22:33.316169   12265 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0916 10:22:33.316195   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0916 10:22:33.316355   12265 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0916 10:22:33.316373   12265 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0916 10:22:33.316509   12265 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0916 10:22:33.316522   12265 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0916 10:22:33.327110   12265 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0916 10:22:33.327126   12265 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0916 10:22:33.354597   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0916 10:22:33.420390   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 10:22:33.435680   12265 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0916 10:22:33.435717   12265 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0916 10:22:33.439954   12265 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0916 10:22:33.439978   12265 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0916 10:22:33.444981   12265 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 10:22:33.445002   12265 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0916 10:22:33.522524   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0916 10:22:33.536060   12265 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0916 10:22:33.536089   12265 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0916 10:22:33.569830   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 10:22:33.590335   12265 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0916 10:22:33.590366   12265 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0916 10:22:33.601121   12265 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0916 10:22:33.601154   12265 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0916 10:22:33.623197   12265 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:22:33.623219   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0916 10:22:33.629904   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0916 10:22:33.693404   12265 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0916 10:22:33.693424   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0916 10:22:33.747802   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:22:33.761431   12265 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0916 10:22:33.761461   12265 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0916 10:22:33.774811   12265 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0916 10:22:33.774845   12265 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0916 10:22:33.825893   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0916 10:22:33.895859   12265 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0916 10:22:33.895893   12265 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0916 10:22:34.018321   12265 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0916 10:22:34.018345   12265 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0916 10:22:34.260751   12265 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0916 10:22:34.260776   12265 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0916 10:22:34.288705   12265 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0916 10:22:34.288733   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0916 10:22:34.575904   12265 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0916 10:22:34.575932   12265 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0916 10:22:34.578707   12265 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0916 10:22:34.578728   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0916 10:22:34.872174   12265 pod_ready.go:103] pod "etcd-addons-001438" in "kube-system" namespace has status "Ready":"False"
	I0916 10:22:35.002110   12265 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0916 10:22:35.002133   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0916 10:22:35.053333   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0916 10:22:35.173148   12265 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.47211504s)
	I0916 10:22:35.173178   12265 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0916 10:22:35.173148   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.243289168s)
	I0916 10:22:35.173338   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:35.173355   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:35.173706   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:35.173723   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:35.173737   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:35.173751   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:35.173762   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:35.174037   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:35.174053   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:35.219712   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:35.219745   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:35.220033   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:35.220084   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:35.326225   12265 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0916 10:22:35.326245   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0916 10:22:35.667079   12265 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 10:22:35.667102   12265 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0916 10:22:35.677467   12265 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-001438" context rescaled to 1 replicas
	I0916 10:22:36.005922   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 10:22:36.880549   12265 pod_ready.go:103] pod "etcd-addons-001438" in "kube-system" namespace has status "Ready":"False"
	I0916 10:22:37.248962   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.296492058s)
	I0916 10:22:37.249022   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:37.249036   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:37.249050   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.306004364s)
	I0916 10:22:37.249050   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.257675255s)
	I0916 10:22:37.249138   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:37.249160   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:37.249084   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:37.249221   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:37.249330   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:37.249355   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:37.249374   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:37.249434   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:37.249460   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:37.249476   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:37.249440   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:37.249499   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:37.249529   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:37.249541   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:37.249485   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:37.249593   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:37.249655   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:37.249676   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:37.251028   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:37.251216   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:37.251214   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:37.251232   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:37.251278   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:37.251288   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:38.978538   12265 pod_ready.go:93] pod "etcd-addons-001438" in "kube-system" namespace has status "Ready":"True"
	I0916 10:22:38.978561   12265 pod_ready.go:82] duration metric: took 6.115904528s for pod "etcd-addons-001438" in "kube-system" namespace to be "Ready" ...
	I0916 10:22:38.978572   12265 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-001438" in "kube-system" namespace to be "Ready" ...
	I0916 10:22:39.179661   12265 pod_ready.go:93] pod "kube-apiserver-addons-001438" in "kube-system" namespace has status "Ready":"True"
	I0916 10:22:39.179691   12265 pod_ready.go:82] duration metric: took 201.112317ms for pod "kube-apiserver-addons-001438" in "kube-system" namespace to be "Ready" ...
	I0916 10:22:39.179705   12265 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-001438" in "kube-system" namespace to be "Ready" ...
	I0916 10:22:39.377607   12265 pod_ready.go:93] pod "kube-controller-manager-addons-001438" in "kube-system" namespace has status "Ready":"True"
	I0916 10:22:39.377640   12265 pod_ready.go:82] duration metric: took 197.926831ms for pod "kube-controller-manager-addons-001438" in "kube-system" namespace to be "Ready" ...
	I0916 10:22:39.377656   12265 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-66flj" in "kube-system" namespace to be "Ready" ...
	I0916 10:22:39.509747   12265 pod_ready.go:93] pod "kube-proxy-66flj" in "kube-system" namespace has status "Ready":"True"
	I0916 10:22:39.509775   12265 pod_ready.go:82] duration metric: took 132.110984ms for pod "kube-proxy-66flj" in "kube-system" namespace to be "Ready" ...
	I0916 10:22:39.509789   12265 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-001438" in "kube-system" namespace to be "Ready" ...
	I0916 10:22:39.633441   12265 pod_ready.go:93] pod "kube-scheduler-addons-001438" in "kube-system" namespace has status "Ready":"True"
	I0916 10:22:39.633475   12265 pod_ready.go:82] duration metric: took 123.676997ms for pod "kube-scheduler-addons-001438" in "kube-system" namespace to be "Ready" ...
	I0916 10:22:39.633487   12265 pod_ready.go:39] duration metric: took 6.810577473s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:22:39.633508   12265 api_server.go:52] waiting for apiserver process to appear ...
	I0916 10:22:39.633572   12265 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:22:39.633966   12265 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0916 10:22:39.634003   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:39.637511   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:39.638022   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:39.638050   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:39.638265   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:39.638449   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:39.638594   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:39.638741   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:40.248183   12265 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0916 10:22:40.342621   12265 addons.go:234] Setting addon gcp-auth=true in "addons-001438"
	I0916 10:22:40.342682   12265 host.go:66] Checking if "addons-001438" exists ...
	I0916 10:22:40.343054   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:40.343105   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:40.358807   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35001
	I0916 10:22:40.359276   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:40.359793   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:40.359818   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:40.360152   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:40.360750   12265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:22:40.360794   12265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:22:40.375531   12265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39519
	I0916 10:22:40.375999   12265 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:22:40.376410   12265 main.go:141] libmachine: Using API Version  1
	I0916 10:22:40.376431   12265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:22:40.376712   12265 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:22:40.376880   12265 main.go:141] libmachine: (addons-001438) Calling .GetState
	I0916 10:22:40.378466   12265 main.go:141] libmachine: (addons-001438) Calling .DriverName
	I0916 10:22:40.378706   12265 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0916 10:22:40.378736   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHHostname
	I0916 10:22:40.381488   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:40.381978   12265 main.go:141] libmachine: (addons-001438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:55:19", ip: ""} in network mk-addons-001438: {Iface:virbr1 ExpiryTime:2024-09-16 11:21:58 +0000 UTC Type:0 Mac:52:54:00:9c:55:19 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-001438 Clientid:01:52:54:00:9c:55:19}
	I0916 10:22:40.381997   12265 main.go:141] libmachine: (addons-001438) DBG | domain addons-001438 has defined IP address 192.168.39.72 and MAC address 52:54:00:9c:55:19 in network mk-addons-001438
	I0916 10:22:40.382162   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHPort
	I0916 10:22:40.382374   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHKeyPath
	I0916 10:22:40.382527   12265 main.go:141] libmachine: (addons-001438) Calling .GetSSHUsername
	I0916 10:22:40.382728   12265 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/addons-001438/id_rsa Username:docker}
	I0916 10:22:41.185716   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.148787276s)
	I0916 10:22:41.185775   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.185787   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.185792   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.831162948s)
	I0916 10:22:41.185821   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.185842   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.185899   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.76548291s)
	I0916 10:22:41.185927   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.185929   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.663383888s)
	I0916 10:22:41.185940   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.185948   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.185957   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.186031   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.616165984s)
	I0916 10:22:41.186072   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.186084   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.186162   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.55623363s)
	I0916 10:22:41.186179   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.186188   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.186223   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:41.186233   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.186246   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.186249   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.186259   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.186272   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.186279   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.186259   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.186321   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.438489786s)
	W0916 10:22:41.186349   12265 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0916 10:22:41.186370   12265 retry.go:31] will retry after 282.502814ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0916 10:22:41.186323   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.186452   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.360528333s)
	I0916 10:22:41.186474   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.186483   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.186530   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:41.186552   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:41.186580   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:41.186592   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.133220852s)
	I0916 10:22:41.186602   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.186608   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.186609   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.186627   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.186684   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.186691   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.186698   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.186704   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.186797   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:41.186819   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.186826   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.186833   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.186851   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:41.186871   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.186884   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.186893   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.186901   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.186907   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.186936   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.186943   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.186990   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.186999   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.187006   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.187013   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.187860   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:41.187892   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.187899   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.187906   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.187912   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.188173   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:41.188191   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.188200   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.188204   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.188209   12265 addons.go:475] Verifying addon metrics-server=true in "addons-001438"
	I0916 10:22:41.188211   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.188241   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.188250   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.188259   12265 addons.go:475] Verifying addon ingress=true in "addons-001438"
	I0916 10:22:41.190004   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:41.190036   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.190042   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.190099   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:41.190137   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:41.190141   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.190152   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.190155   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.190159   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.190162   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.190167   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.190170   12265 addons.go:475] Verifying addon registry=true in "addons-001438"
	I0916 10:22:41.190534   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:41.190570   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.190579   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.191944   12265 out.go:177] * Verifying registry addon...
	I0916 10:22:41.191953   12265 out.go:177] * Verifying ingress addon...
	I0916 10:22:41.192858   12265 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-001438 service yakd-dashboard -n yakd-dashboard
	
	I0916 10:22:41.193752   12265 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0916 10:22:41.193752   12265 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0916 10:22:41.245022   12265 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0916 10:22:41.245042   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:41.245048   12265 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0916 10:22:41.245062   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:41.270906   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:41.270924   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:41.271190   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:41.271210   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:41.469044   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:22:41.699366   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:41.699576   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:42.200823   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:42.201220   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:42.707853   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:42.708238   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:43.062276   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.056308906s)
	I0916 10:22:43.062328   12265 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.428733709s)
	I0916 10:22:43.062359   12265 api_server.go:72] duration metric: took 10.72580389s to wait for apiserver process to appear ...
	I0916 10:22:43.062372   12265 api_server.go:88] waiting for apiserver healthz status ...
	I0916 10:22:43.062397   12265 api_server.go:253] Checking apiserver healthz at https://192.168.39.72:8443/healthz ...
	I0916 10:22:43.062411   12265 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.683683571s)
	I0916 10:22:43.062334   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:43.062455   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:43.062799   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:43.062819   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:43.062830   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:43.062838   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:43.062846   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:43.063060   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:43.063085   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:43.063094   12265 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-001438"
	I0916 10:22:43.064955   12265 out.go:177] * Verifying csi-hostpath-driver addon...
	I0916 10:22:43.065015   12265 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 10:22:43.066605   12265 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0916 10:22:43.067509   12265 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0916 10:22:43.067847   12265 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0916 10:22:43.067859   12265 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0916 10:22:43.093271   12265 api_server.go:279] https://192.168.39.72:8443/healthz returned 200:
	ok
	I0916 10:22:43.093820   12265 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0916 10:22:43.093839   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:43.095011   12265 api_server.go:141] control plane version: v1.31.1
	I0916 10:22:43.095033   12265 api_server.go:131] duration metric: took 32.653755ms to wait for apiserver health ...
	I0916 10:22:43.095043   12265 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 10:22:43.123828   12265 system_pods.go:59] 19 kube-system pods found
	I0916 10:22:43.123858   12265 system_pods.go:61] "coredns-7c65d6cfc9-j5ndn" [207f35d6-991e-4f00-8881-a877648e3c38] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0916 10:22:43.123864   12265 system_pods.go:61] "coredns-7c65d6cfc9-pzm59" [f910982f-9f91-4da6-ba1d-d7eb1a992baa] Running
	I0916 10:22:43.123871   12265 system_pods.go:61] "csi-hostpath-attacher-0" [15e8a432-87ee-461f-96ce-576b2587d960] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0916 10:22:43.123876   12265 system_pods.go:61] "csi-hostpath-resizer-0" [db26d555-4e0f-4738-bd80-a27dc57d7534] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0916 10:22:43.123883   12265 system_pods.go:61] "csi-hostpathplugin-xgk62" [dd216434-c2ed-4884-92ea-f80bec8e2fcc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0916 10:22:43.123886   12265 system_pods.go:61] "etcd-addons-001438" [5c7e7021-4329-43f8-90cc-196afcb3b9f5] Running
	I0916 10:22:43.123903   12265 system_pods.go:61] "kube-apiserver-addons-001438" [b8c3f368-41ad-4840-aa92-014d25030925] Running
	I0916 10:22:43.123906   12265 system_pods.go:61] "kube-controller-manager-addons-001438" [9606f8aa-be05-4d1e-b5c9-9e625663d5de] Running
	I0916 10:22:43.123913   12265 system_pods.go:61] "kube-ingress-dns-minikube" [10ccbaa1-333f-4586-a1d5-dc73421e2bd1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0916 10:22:43.123917   12265 system_pods.go:61] "kube-proxy-66flj" [56e16daa-1626-4b83-a183-7b9ad90ea2d6] Running
	I0916 10:22:43.123923   12265 system_pods.go:61] "kube-scheduler-addons-001438" [a9909fcc-06cd-4e4e-b6be-d0a54a31df94] Running
	I0916 10:22:43.123928   12265 system_pods.go:61] "metrics-server-84c5f94fbc-9hj9f" [76382ab7-9b7a-4bd6-b19c-7a77ba051f1d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 10:22:43.123935   12265 system_pods.go:61] "nvidia-device-plugin-daemonset-j6n9b" [83260537-f74d-40a8-bcbc-db785a97aac8] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0916 10:22:43.123943   12265 system_pods.go:61] "registry-66c9cd494c-jq22w" [04e85c00-e6fb-4eee-96aa-273a4f6f273f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 10:22:43.123948   12265 system_pods.go:61] "registry-proxy-kk7lc" [2f0e1170-c654-4939-91ca-cd5b2bf6ae2a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 10:22:43.123955   12265 system_pods.go:61] "snapshot-controller-56fcc65765-8nq94" [7b65ff07-8e47-4c5a-883c-f6470e930f61] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 10:22:43.123960   12265 system_pods.go:61] "snapshot-controller-56fcc65765-pv2sr" [85f5bbdb-96af-4f7d-aef3-644db7166242] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 10:22:43.123967   12265 system_pods.go:61] "storage-provisioner" [c435c6db-b60d-4298-9687-bb885202e358] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0916 10:22:43.123972   12265 system_pods.go:61] "tiller-deploy-b48cc5f79-b76fb" [a96b112c-4171-4416-9e14-ac1f69fd033e] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0916 10:22:43.123980   12265 system_pods.go:74] duration metric: took 28.931422ms to wait for pod list to return data ...
	I0916 10:22:43.123988   12265 default_sa.go:34] waiting for default service account to be created ...
	I0916 10:22:43.137057   12265 default_sa.go:45] found service account: "default"
	I0916 10:22:43.137084   12265 default_sa.go:55] duration metric: took 13.088793ms for default service account to be created ...
	I0916 10:22:43.137095   12265 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 10:22:43.166020   12265 system_pods.go:86] 19 kube-system pods found
	I0916 10:22:43.166054   12265 system_pods.go:89] "coredns-7c65d6cfc9-j5ndn" [207f35d6-991e-4f00-8881-a877648e3c38] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0916 10:22:43.166063   12265 system_pods.go:89] "coredns-7c65d6cfc9-pzm59" [f910982f-9f91-4da6-ba1d-d7eb1a992baa] Running
	I0916 10:22:43.166075   12265 system_pods.go:89] "csi-hostpath-attacher-0" [15e8a432-87ee-461f-96ce-576b2587d960] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0916 10:22:43.166088   12265 system_pods.go:89] "csi-hostpath-resizer-0" [db26d555-4e0f-4738-bd80-a27dc57d7534] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0916 10:22:43.166100   12265 system_pods.go:89] "csi-hostpathplugin-xgk62" [dd216434-c2ed-4884-92ea-f80bec8e2fcc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0916 10:22:43.166108   12265 system_pods.go:89] "etcd-addons-001438" [5c7e7021-4329-43f8-90cc-196afcb3b9f5] Running
	I0916 10:22:43.166118   12265 system_pods.go:89] "kube-apiserver-addons-001438" [b8c3f368-41ad-4840-aa92-014d25030925] Running
	I0916 10:22:43.166126   12265 system_pods.go:89] "kube-controller-manager-addons-001438" [9606f8aa-be05-4d1e-b5c9-9e625663d5de] Running
	I0916 10:22:43.166136   12265 system_pods.go:89] "kube-ingress-dns-minikube" [10ccbaa1-333f-4586-a1d5-dc73421e2bd1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0916 10:22:43.166145   12265 system_pods.go:89] "kube-proxy-66flj" [56e16daa-1626-4b83-a183-7b9ad90ea2d6] Running
	I0916 10:22:43.166154   12265 system_pods.go:89] "kube-scheduler-addons-001438" [a9909fcc-06cd-4e4e-b6be-d0a54a31df94] Running
	I0916 10:22:43.166164   12265 system_pods.go:89] "metrics-server-84c5f94fbc-9hj9f" [76382ab7-9b7a-4bd6-b19c-7a77ba051f1d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 10:22:43.166171   12265 system_pods.go:89] "nvidia-device-plugin-daemonset-j6n9b" [83260537-f74d-40a8-bcbc-db785a97aac8] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0916 10:22:43.166178   12265 system_pods.go:89] "registry-66c9cd494c-jq22w" [04e85c00-e6fb-4eee-96aa-273a4f6f273f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 10:22:43.166183   12265 system_pods.go:89] "registry-proxy-kk7lc" [2f0e1170-c654-4939-91ca-cd5b2bf6ae2a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 10:22:43.166199   12265 system_pods.go:89] "snapshot-controller-56fcc65765-8nq94" [7b65ff07-8e47-4c5a-883c-f6470e930f61] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 10:22:43.166207   12265 system_pods.go:89] "snapshot-controller-56fcc65765-pv2sr" [85f5bbdb-96af-4f7d-aef3-644db7166242] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 10:22:43.166217   12265 system_pods.go:89] "storage-provisioner" [c435c6db-b60d-4298-9687-bb885202e358] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0916 10:22:43.166224   12265 system_pods.go:89] "tiller-deploy-b48cc5f79-b76fb" [a96b112c-4171-4416-9e14-ac1f69fd033e] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0916 10:22:43.166231   12265 system_pods.go:126] duration metric: took 29.130167ms to wait for k8s-apps to be running ...
	I0916 10:22:43.166241   12265 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 10:22:43.166284   12265 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:22:43.202957   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:43.204822   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:43.205240   12265 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0916 10:22:43.205259   12265 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0916 10:22:43.339484   12265 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 10:22:43.339511   12265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0916 10:22:43.533725   12265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 10:22:43.574829   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:43.701096   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:43.702516   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:44.074326   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:44.199962   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:44.201086   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:44.420432   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.951340242s)
	I0916 10:22:44.420484   12265 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.25416987s)
	I0916 10:22:44.420496   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:44.420512   12265 system_svc.go:56] duration metric: took 1.254267923s WaitForService to wait for kubelet
	I0916 10:22:44.420530   12265 kubeadm.go:582] duration metric: took 12.083973387s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:22:44.420555   12265 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:22:44.420516   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:44.420960   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:44.420998   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:44.421011   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:44.421019   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:44.421041   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:44.421242   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:44.421289   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:44.421306   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:44.432407   12265 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0916 10:22:44.432433   12265 node_conditions.go:123] node cpu capacity is 2
	I0916 10:22:44.432443   12265 node_conditions.go:105] duration metric: took 11.883273ms to run NodePressure ...
	I0916 10:22:44.432454   12265 start.go:241] waiting for startup goroutines ...
	I0916 10:22:44.573423   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:44.701968   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:44.702167   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:45.087788   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:45.175284   12265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.64151941s)
	I0916 10:22:45.175340   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:45.175356   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:45.175638   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:45.175658   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:45.175667   12265 main.go:141] libmachine: Making call to close driver server
	I0916 10:22:45.175675   12265 main.go:141] libmachine: (addons-001438) Calling .Close
	I0916 10:22:45.175907   12265 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:22:45.175959   12265 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:22:45.175966   12265 main.go:141] libmachine: (addons-001438) DBG | Closing plugin on server side
	I0916 10:22:45.176874   12265 addons.go:475] Verifying addon gcp-auth=true in "addons-001438"
	I0916 10:22:45.179151   12265 out.go:177] * Verifying gcp-auth addon...
	I0916 10:22:45.181042   12265 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0916 10:22:45.204765   12265 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0916 10:22:45.204788   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:45.240576   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:45.244884   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:45.572763   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:45.684678   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:45.699294   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:45.700332   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:46.071926   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:46.184345   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:46.198555   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:46.198584   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:46.572691   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:46.686213   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:46.698404   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:46.699290   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:47.073014   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:47.184892   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:47.199176   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:47.199412   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:47.573319   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:47.685117   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:47.698854   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:47.699042   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:48.080702   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:48.186042   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:48.198652   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:48.198985   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:48.572136   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:48.684922   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:48.698643   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:48.698805   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:49.072263   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:49.186126   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:49.198845   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:49.201291   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:49.571909   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:49.686134   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:49.699485   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:49.699837   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:50.072013   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:50.185475   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:50.198803   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:50.198988   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:50.572410   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:50.684716   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:50.698643   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:50.698842   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:51.072735   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:51.185327   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:51.198402   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:51.198563   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:51.574099   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:51.684301   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:51.698582   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:51.699135   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:52.073280   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:52.184410   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:52.197628   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:52.197951   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:52.573111   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:52.685463   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:52.698350   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:52.698445   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:53.073318   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:53.185032   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:53.198371   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:53.198982   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:53.572652   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:53.684593   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:53.698434   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:53.699099   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:54.071466   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:54.184978   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:54.199125   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:54.199475   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:54.571905   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:54.684904   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:54.699578   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:54.700868   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:55.072026   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:55.186696   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:55.199421   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:55.200454   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:55.811368   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:55.811883   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:55.811882   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:55.812044   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:56.073000   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:56.184284   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:56.197552   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:56.199279   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:56.571945   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:56.684725   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:56.698164   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:56.698871   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:57.078099   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:57.187093   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:57.198266   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:57.198788   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:57.572608   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:57.685182   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:57.698112   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:57.698451   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:58.072438   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:58.184226   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:58.197871   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:58.199176   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:58.573655   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:58.688012   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:58.698890   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:58.699498   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:59.072908   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:59.184255   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:59.197825   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:59.198094   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:22:59.572578   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:22:59.685886   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:22:59.699165   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:22:59.699539   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:00.072677   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:00.185334   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:00.198436   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:00.199279   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:00.572620   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:00.684676   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:00.698184   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:00.698937   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:01.368315   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:01.368647   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:01.368662   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:01.369057   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:01.577610   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:01.685792   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:01.699073   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:01.700679   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:02.073297   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:02.184780   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:02.198423   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:02.198632   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:02.573860   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:02.688317   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:02.699137   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:02.699189   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:03.073268   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:03.185286   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:03.197706   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:03.199446   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:03.575016   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:03.688681   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:03.697852   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:03.699284   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:04.072561   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:04.184550   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:04.198183   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:04.198692   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:04.573058   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:04.684410   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:04.698448   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:04.699101   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:05.073082   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:05.184610   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:05.198422   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:05.199510   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:05.572901   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:05.685013   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:05.698419   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:05.699052   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:06.072680   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:06.184899   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:06.199400   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:06.199960   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:06.573550   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:06.685328   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:06.698176   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:06.698429   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:07.386744   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:07.389015   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:07.389529   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:07.391740   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:07.572440   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:07.685517   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:07.699276   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:07.699495   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:08.073598   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:08.185305   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:08.198307   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:08.198701   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:08.572936   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:08.685042   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:08.697898   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:08.699045   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:09.073524   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:09.185170   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:09.197444   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:09.198282   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:09.571947   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:09.685269   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:09.700263   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:09.700289   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:10.072367   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:10.184140   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:10.198279   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:10.198501   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:10.571995   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:10.684443   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:10.698621   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:10.699212   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:11.072647   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:11.184997   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:11.198336   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:11.199743   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:11.572138   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:11.684642   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:11.697735   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:11.698012   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:12.072087   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:12.184730   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:12.198825   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:12.199117   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:12.574471   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:12.685221   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:12.697610   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:12.697875   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:13.074276   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:13.184610   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:13.200283   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:13.200511   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:13.572643   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:13.687229   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:13.700375   12265 kapi.go:107] duration metric: took 32.506622173s to wait for kubernetes.io/minikube-addons=registry ...
	I0916 10:23:13.700476   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:14.073345   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:14.185359   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:14.197920   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:14.572573   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:14.714386   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:14.714848   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:15.072480   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:15.184006   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:15.198907   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:15.571536   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:15.686990   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:15.698314   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:16.072850   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:16.397705   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:16.398059   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:16.571699   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:16.687893   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:16.701822   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:17.073078   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:17.185433   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:17.202339   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:17.572915   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:17.684909   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:17.698215   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:18.071875   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:18.185548   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:18.198104   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:18.572180   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:18.684990   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:18.698912   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:19.072105   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:19.184341   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:19.197977   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:19.571740   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:19.685205   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:19.698214   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:20.071811   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:20.184927   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:20.198225   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:20.572184   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:20.684471   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:20.697550   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:21.072526   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:21.185439   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:21.198086   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:21.573843   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:21.684530   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:21.699027   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:22.071583   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:22.185751   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:22.201330   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:22.574078   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:22.688728   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:22.700516   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:23.072848   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:23.184719   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:23.197893   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:23.571975   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:23.684741   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:23.697845   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:24.071885   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:24.199755   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:24.209742   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:24.572903   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:24.684095   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:24.697255   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:25.072405   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:25.185096   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:25.197451   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:25.572250   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:25.685603   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:25.699421   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:26.072277   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:26.184610   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:26.197948   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:26.572954   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:26.684305   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:26.698018   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:27.072121   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:27.186632   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:27.198260   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:27.571710   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:27.685260   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:27.697569   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:28.072712   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:28.185404   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:28.197839   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:28.572506   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:28.685719   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:28.698390   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:29.073440   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:29.185211   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:29.198135   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:29.572871   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:29.684795   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:29.698442   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:30.074307   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:30.184391   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:30.198195   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:30.571684   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:30.686595   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:30.697829   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:31.072882   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:31.184355   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:31.197913   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:31.572796   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:31.685340   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:31.697838   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:32.072358   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:32.185072   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:32.198841   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:32.572260   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:32.685619   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:32.697923   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:33.072329   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:33.184923   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:33.198461   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:33.572531   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:33.684886   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:33.698221   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:34.071922   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:34.184896   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:34.198347   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:34.572508   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:34.685674   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:34.698172   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:35.072040   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:35.184401   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:35.198192   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:35.571685   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:35.684934   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:35.699442   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:36.072917   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:36.184575   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:36.197989   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:36.572782   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:36.685224   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:36.697515   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:37.073347   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:37.184633   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:37.198109   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:37.572239   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:37.684842   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:37.698412   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:38.072639   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:38.184377   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:38.197723   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:38.572964   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:38.684944   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:38.698216   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:39.071865   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:39.184322   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:39.197583   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:39.572728   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:39.685221   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:39.697663   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:40.073346   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:40.184763   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:40.198338   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:40.572748   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:40.688546   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:40.698337   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:41.072528   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:41.184742   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:41.197991   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:41.572832   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:41.685275   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:41.697957   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:42.072948   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:42.185237   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:42.198222   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:42.572150   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:42.685770   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:42.698107   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:43.072508   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:43.184255   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:43.198122   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:43.571791   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:43.685476   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:43.698021   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:44.072455   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:44.184970   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:44.198450   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:44.572653   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:44.685519   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:44.698088   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:45.073394   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:45.184852   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:45.198932   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:45.572905   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:45.685024   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:45.699000   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:46.072804   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:46.185568   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:46.198040   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:46.571961   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:46.684879   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:46.698104   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:47.071779   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:47.184794   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:47.198431   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:47.572786   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:47.685048   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:47.701841   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:48.072550   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:48.184915   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:48.198725   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:48.572850   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:48.684405   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:48.697953   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:49.075719   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:49.185584   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:49.198034   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:49.572642   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:49.685074   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:49.697421   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:50.072216   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:50.184736   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:50.198614   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:50.572675   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:50.685508   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:50.697632   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:51.072878   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:51.185267   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:51.197508   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:51.572653   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:51.684680   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:51.698038   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:52.072225   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:52.184256   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:52.197802   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:52.572573   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:52.685760   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:52.699050   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:53.072698   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:53.185139   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:53.197417   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:53.572526   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:53.684976   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:53.698186   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:54.071987   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:54.184373   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:54.197898   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:54.573326   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:54.685154   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:54.699596   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:55.071975   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:55.184301   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:55.197532   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:55.573068   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:55.684535   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:55.698262   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:56.071830   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:56.185558   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:56.198149   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:56.571905   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:56.684135   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:56.697614   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:57.109030   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:57.216004   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:57.216775   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:57.572732   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:57.684811   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:57.697899   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:58.071691   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:58.184970   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:58.198291   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:58.572185   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:58.685478   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:58.698240   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:59.072727   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:59.185578   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:59.207485   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:23:59.572098   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:59.684402   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:23:59.698565   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:00.072447   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:00.192764   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:00.206954   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:00.573224   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:00.685091   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:00.697997   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:01.071906   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:01.184428   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:01.197550   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:01.572498   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:01.685525   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:01.702647   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:02.072504   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:02.185219   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:02.197512   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:02.573858   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:02.685938   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:02.699556   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:03.080160   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:03.188056   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:03.197615   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:03.575213   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:03.685187   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:03.697887   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:04.072585   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:04.185321   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:04.197777   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:04.577876   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:04.685259   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:04.698763   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:05.073356   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:05.184332   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:05.197676   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:05.574632   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:05.705119   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:05.705797   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:06.073702   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:06.190460   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:06.199492   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:06.573521   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:06.685468   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:06.697671   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:07.074427   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:07.211989   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:07.214167   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:07.573479   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:07.684919   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:07.698441   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:08.072769   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:08.184827   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:08.198132   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:08.573401   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:08.685277   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:08.698457   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:09.072421   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:09.184959   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:09.198365   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:09.572446   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:09.685036   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:09.697443   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:10.072489   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:10.185143   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:10.197711   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:10.572704   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:10.685206   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:10.697839   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:11.073656   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:11.185083   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:11.197443   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:11.572739   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:11.685343   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:11.697853   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:12.072697   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:12.185630   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:12.197928   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:12.572344   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:12.684814   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:12.698225   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:13.073324   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:13.185254   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:13.198404   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:13.571987   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:13.684709   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:13.698073   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:14.072174   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:14.184688   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:14.198078   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:14.571798   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:14.685576   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:14.698188   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:15.072810   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:15.184683   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:15.198053   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:15.574408   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:15.684741   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:15.698415   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:16.072047   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:16.185423   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:16.198010   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:16.572968   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:16.684217   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:16.697876   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:17.073276   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:17.185372   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:17.197865   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:17.572327   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:17.684929   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:17.698146   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:18.073068   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:18.185261   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:18.197596   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:18.572959   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:18.684379   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:18.697450   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:19.072646   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:19.184810   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:19.198157   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:19.572098   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:19.684635   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:19.698108   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:20.073055   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:20.185325   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:20.197893   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:20.572951   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:20.684268   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:20.697542   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:21.073300   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:21.184458   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:21.198058   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:21.571882   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:21.684389   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:21.698491   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:22.072769   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:22.185150   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:22.198444   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:22.572557   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:22.686730   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:22.697987   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:23.072389   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:23.184902   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:23.198815   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:23.572090   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:23.684279   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:23.698304   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:24.072655   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:24.185118   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:24.197515   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:24.573029   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:24.684503   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:24.697942   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:25.073161   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:25.185394   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:25.197824   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:25.572789   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:25.685536   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:25.698429   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:26.072248   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:26.184713   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:26.198206   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:26.572681   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:26.685404   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:26.697732   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:27.073033   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:27.186532   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:27.197932   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:27.573166   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:27.684900   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:27.698494   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:28.072840   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:28.185112   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:28.199554   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:28.573533   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:28.685513   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:28.698631   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:29.073563   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:29.184668   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:29.198960   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:29.573373   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:29.684371   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:29.698226   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:30.072380   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:30.184889   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:30.198132   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:30.572427   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:30.685015   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:30.699053   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:31.073225   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:31.185241   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:31.198172   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:31.572019   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:31.685328   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:31.697511   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:32.072382   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:32.185154   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:32.198590   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:32.572333   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:32.688804   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:32.699195   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:33.072971   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:33.184395   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:33.197840   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:33.572457   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:33.684935   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:33.698247   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:34.072201   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:34.184817   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:34.198299   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:34.572603   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:34.684807   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:34.698764   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:35.079460   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:35.184783   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:35.198219   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:35.572155   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:35.684462   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:35.698249   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:36.071889   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:36.185035   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:36.198639   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:36.572607   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:36.684993   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:36.698317   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:37.073167   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:37.187630   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:37.197861   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:37.572959   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:37.684449   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:37.698084   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:38.072598   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:38.184553   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:38.198241   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:38.572543   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:38.685061   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:38.698066   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:39.072888   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:39.184279   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:39.198475   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:39.572908   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:39.684166   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:39.699214   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:40.071396   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:40.185054   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:40.197274   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:40.571831   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:40.683617   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:40.698304   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:41.073753   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:41.184818   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:41.198303   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:41.572754   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:41.685078   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:41.697801   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:42.074144   12265 kapi.go:107] duration metric: took 1m59.00663205s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0916 10:24:42.185287   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:42.197975   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:42.685826   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:42.698484   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:43.185521   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:43.197894   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:43.684695   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:43.698444   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:44.184270   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:44.198072   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:44.686127   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:44.697760   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:45.184583   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:45.197892   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:45.685284   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:45.698273   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:46.184284   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:46.197597   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:46.684852   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:46.698234   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:47.185674   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:47.197778   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:47.684803   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:47.698286   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:48.185195   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:48.197536   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:48.684936   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:48.698202   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:49.185940   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:49.198354   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:49.685628   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:49.698017   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:50.184172   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:50.197513   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:50.684563   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:50.699121   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:51.185458   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:51.197627   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:51.684548   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:51.697728   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:52.184587   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:52.198088   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:52.687284   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:52.697762   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:53.185441   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:53.197777   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:53.684856   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:53.698392   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:54.184966   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:54.198309   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:54.685059   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:54.697818   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:55.184799   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:55.199146   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:55.685287   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:55.697823   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:56.184982   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:56.198778   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:56.684629   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:56.698010   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:57.185306   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:57.197805   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:57.686354   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:57.697789   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:58.184048   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:58.198685   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:58.685283   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:58.697967   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:59.185357   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:59.198462   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:59.685857   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:59.698582   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:00.185027   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:00.199070   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:00.685248   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:00.697584   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:01.444242   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:01.542180   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:01.684941   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:01.698345   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:02.184494   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:02.199673   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:02.686844   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:02.701197   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:03.186108   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:03.200286   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:03.935418   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:03.936940   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:04.185837   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:04.198343   12265 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:04.685229   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:04.697687   12265 kapi.go:107] duration metric: took 2m23.503933898s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0916 10:25:05.184162   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:05.686162   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:06.184784   12265 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:25:06.685596   12265 kapi.go:107] duration metric: took 2m21.504550895s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0916 10:25:06.687290   12265 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-001438 cluster.
	I0916 10:25:06.688726   12265 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0916 10:25:06.689940   12265 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0916 10:25:06.691195   12265 out.go:177] * Enabled addons: default-storageclass, nvidia-device-plugin, ingress-dns, storage-provisioner, cloud-spanner, metrics-server, inspektor-gadget, helm-tiller, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0916 10:25:06.692654   12265 addons.go:510] duration metric: took 2m34.356008246s for enable addons: enabled=[default-storageclass nvidia-device-plugin ingress-dns storage-provisioner cloud-spanner metrics-server inspektor-gadget helm-tiller yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0916 10:25:06.692692   12265 start.go:246] waiting for cluster config update ...
	I0916 10:25:06.692714   12265 start.go:255] writing updated cluster config ...
	I0916 10:25:06.692960   12265 ssh_runner.go:195] Run: rm -f paused
	I0916 10:25:06.701459   12265 out.go:177] * Done! kubectl is now configured to use "addons-001438" cluster and "default" namespace by default
	E0916 10:25:06.702711   12265 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
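	The gcp-auth messages above mention opting a pod out of credential mounting by adding the `gcp-auth-skip-secret` label. As a minimal sketch (not part of the test run), the following Python snippet assumes the `kubernetes` client package and access to the addons-001438 context; the label key comes from the log output above, while the "true" value and the pod/image names are illustrative assumptions.

	# Sketch: create a pod carrying the gcp-auth-skip-secret label so the
	# gcp-auth webhook does not mount GCP credentials into it.
	# Assumptions: kubernetes Python client installed, kubeconfig has the
	# "addons-001438" context; label value "true" and names are illustrative.
	from kubernetes import client, config

	config.load_kube_config(context="addons-001438")
	pod = client.V1Pod(
	    metadata=client.V1ObjectMeta(
	        name="skip-gcp-creds-demo",                    # hypothetical pod name
	        labels={"gcp-auth-skip-secret": "true"},       # opt out of credential mounting
	    ),
	    spec=client.V1PodSpec(
	        containers=[
	            client.V1Container(name="app", image="busybox", command=["sleep", "3600"])
	        ]
	    ),
	)
	client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)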
	
	
	==> CRI-O <==
	Sep 16 10:27:09 addons-001438 crio[662]: time="2024-09-16 10:27:09.901413763Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482429901328576,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:458879,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=32d191c6-a244-45b8-a4d9-76f2ff48ae90 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:27:09 addons-001438 crio[662]: time="2024-09-16 10:27:09.902141571Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8600d4f4-bb4b-4f07-9157-989b8f01c7d5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:27:09 addons-001438 crio[662]: time="2024-09-16 10:27:09.902216096Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8600d4f4-bb4b-4f07-9157-989b8f01c7d5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:27:09 addons-001438 crio[662]: time="2024-09-16 10:27:09.902750222Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c0c62d19fc341b10ebf89fe58a47a68881bae991a531961d93c1579a9a1948e7,PodSandboxId:81638f0641649ec0787ef703694bd258aa844caa3af58541d00d9715f3de0d35,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726482306176330528,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-jg5wz,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: dbfd65d1-83cc-4b88-b475-c822c7d77c41,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"con
tainerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d9f00ee52087036fff363066f2b9b7bd54f75155cc73bd306533e31b8e233b0,PodSandboxId:f0a70a6b5b4fa0e23e9c8885ae050483438fd567740a20c3b0d6ee2a4c749755,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1726482304086493301,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-jhd4w,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 59d22048-5f6a-4373-a1c6-05
c111450f4a,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a4ff4f2e6c350478e5755d91b20d919662c0ffa06fb39e2bd3bf8bb0f703a1d2,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1726482280953449774,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa45fa1d889cdca320c55e35a415c7211e36ba0837572343c71988deaa5cd3ca,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef0019
58d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1726482248718684378,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:112e37da6f1b0a97970a925ea58a6c98930b8e2763205c618d9b497089866b3f,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc4
16abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1726482247152202557,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcd9404de3e14cb23faa22a3b3e25edf2e86172ce56a3bd91b341e6072351d08,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256
:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1726482246122565989,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26165c7625a62b7701736de7efb40460b0b4376f8e96e3bb63d744179813ade0,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metad
ata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1726482244231428730,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35e24c1abefe708668b04ab175e1ce63dcba533760564485612dc5c532078e4c,PodSandboxId:bf02d50932f1
47d4c08135b62337a685daad58e7ee619fe769c71cf464997dc9,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1726482242446910373,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db26d555-4e0f-4738-bd80-a27dc57d7534,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5edaf3e2dd3d6ea0a0beb93da0237f66936603e62e6799eb9d40f4bfced4fdc,Pod
SandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1726482240462952497,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:b8ebd2f050729c02f7f87eeb785883cafeb3d9537560352ff7a58f50b5630b1c,PodSandboxId:f375334740e2f500ee463bae436969a7464e2ab3cd1aa52a70e3a8f54f2ea408,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1726482238605647796,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15e8a432-87ee-461f-96ce-576b2587d960,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d52d2269e100287dd000b3139d6f09e70acad7cbe081d5ba345ffadc774eb64,PodSandboxId:6fe91ac2288fee7b74e48dab9ebacc7e6f4a0864d1442788fa1bb356cd32d99c,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726482237161655062,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-rls9n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 239fb0bb-898b-4d40-a29f-b0f8f4f52620,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termi
nationGracePeriod: 30,},},&Container{Id:54c4347a1fc2bc58f92b11d3bd442850d2707411d3f5b974124a86f641439b9c,PodSandboxId:d66b1317412a725710e3a74d876e0614ef523665d5d35e69bed8cd20d3e4c267,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726482236992762987,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-dk6l8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 117d6d47-b187-4394-8213-f294b01f8f2d,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0bde3324c47da870cb9ea5d5e391100b947a5f5628087ca822f0b968d5d7d10,PodSandboxId:0eef20d1c681337d85c6cbf7dbb64029a954a6306f78cde4d651b31122cd7f87,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726482204191942200,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-pv2sr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85f5bbdb-96af-4f7d-aef3-644db7166242,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f786c20ceffe35012b49cdfbc150466ebf04c05bab8d9736cd3977b8a655c898,PodSandboxId:ec33782f42717a00b375787866eb14bf4c1a21698df7f4343ab08f7eaff4773a,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726482204058066688,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-8nq94,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b65ff07-8e47-4c5a-883c-f6470e930f61,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,i
o.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d997d75b48ee4660710828d3ee8c19e7e3e7c5b64f4d69cba04ce949ed38433e,PodSandboxId:173b48ab2ab7ffb543810df5a983063619ab4a5c57ad2582cb89cca04eabcc03,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1726482196486082416,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-rj67m,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 18c201dd-1709-499b-83f1-7f76075b6c19,},A
nnotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0024bbca27aaca342e5d8f82c4da128027c7976c1e62ed95ae9cf694c9f92bba,PodSandboxId:8bcb0a4a20a5a867b58a2f64346327b7ee4bbc21423c8ef47418372635518a90,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726482194268567519,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-9hj9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 76382ab7-9b7a-4bd6-b19c-7a77ba051f1d,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e13f898473193beaaa81c09bb22096af279dabe70c03270874a90b0b9cc83f62,PodSandboxId:c90a44c7edea8c5d35e974be23b2851515f7b830d58597d0ada22367c338e1ab,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5d78bb8f226e8d943746243233f733db4e80a8d6794f6d193b12b811bcb6cd34,State:CONTAINER_RUNNING,CreatedAt:1726482187766689704,Labels:map[string]string{io.kubernetes.contai
ner.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-769b77f747-58ll2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 505d8619-5fc1-4247-af75-f797558c3d45,},Annotations:map[string]string{io.kubernetes.container.hash: 6472789b,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8193aad1beb5ba639149913d04736ec496c655637790c2e682d4920170661edc,PodSandboxId:f1a3772ce5f7d59b92df6e29b5a34b5ae5a2567c67f359cff00143e415177028,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]str
ing{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1726482169760510821,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10ccbaa1-333f-4586-a1d5-dc73421e2bd1,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20d2f3360f32056e9d5320e2e27b14e89f34121e47fe3ef6dc68434fd12cbe4e,PodSandboxId:748d363148f6690bdd4347da94f06b750dcd9dfc4c9051ba1fd2b1168589dce8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e3
8f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726482159684718183,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c435c6db-b60d-4298-9687-bb885202e358,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63d270cbed8d9138d7dfb6c94fecf8a928065d08296ec4b210284e9c80e343ce,PodSandboxId:42b8586a7b29a8b8056b5615562d2ec92c1cdfbb2b00da19cfca2dc059f9128a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e95
6d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726482158120421871,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-j5ndn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 207f35d6-991e-4f00-8881-a877648e3c38,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60269ac0552c1882216cfc166f51b2cc05e0c33fccab3cf44f6
f6ae889b5377c,PodSandboxId:2bf9dc368debd2d2b877480619135ca9093c981fce759082ec24f6592fb1de92,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726482154187294521,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-66flj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56e16daa-1626-4b83-a183-7b9ad90ea2d6,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aabe5cb48f97ae6d5205b827466f0ed18fcf88945331a69a0f69c36fdc69b84,PodSandboxId:f7aeaa
11c7f4c7baee6c9befcf5929af3394ee8b7430c47ca172d58476ae75a5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726482142846546597,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-001438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be5914cb158890ae06c5bfa7a9a1647e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d34a4e3596c2106b54d8a84abfb811c307cb2971374422f5c532a60e0cde3a3,PodSandboxId:8a68216be6dee09413e6f31b3a7e5d5ca7545ffd899368560ace1
f4086222cc9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726482142845031916,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-001438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6b0151eec9d570509290399a84fe5e0,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a4816dc33e761b9fdbc5948f09898492ce9a2dc128c281cf5085d61d7f1b237,PodSandboxId:ec134844260ab148c1de608f261069c5f8c5a6b0d90
9d7bbb7bff552dad3e112,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726482142832730800,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-001438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72c370391c8ab866a62dac47d3033ef8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfff5b2d379857237c75eff7ae9c54b1eaa410285bebed27cc82511c761eff77,PodSandboxId:81f095a38dae111be7d7787f448562b27561a6d73c96775e60b52e2cb77e
1d9f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726482142844185577,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-001438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2bdc5cbd50bfb60b2aad0cb9a55828e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8600d4f4-bb4b-4f07-9157-989b8f01c7d5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:27:09 addons-001438 crio[662]: time="2024-09-16 10:27:09.939200339Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=805e8163-9d34-41b9-b08b-33d4054e7f6d name=/runtime.v1.RuntimeService/Version
	Sep 16 10:27:09 addons-001438 crio[662]: time="2024-09-16 10:27:09.939295812Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=805e8163-9d34-41b9-b08b-33d4054e7f6d name=/runtime.v1.RuntimeService/Version
	Sep 16 10:27:09 addons-001438 crio[662]: time="2024-09-16 10:27:09.940622113Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4acf175b-2438-43ac-b733-11acebafab62 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:27:09 addons-001438 crio[662]: time="2024-09-16 10:27:09.941637308Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482429941609935,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:458879,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4acf175b-2438-43ac-b733-11acebafab62 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:27:09 addons-001438 crio[662]: time="2024-09-16 10:27:09.942102601Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a170b28e-5f3c-4769-bd30-292b87856a99 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:27:09 addons-001438 crio[662]: time="2024-09-16 10:27:09.942181373Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a170b28e-5f3c-4769-bd30-292b87856a99 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:27:09 addons-001438 crio[662]: time="2024-09-16 10:27:09.942740088Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c0c62d19fc341b10ebf89fe58a47a68881bae991a531961d93c1579a9a1948e7,PodSandboxId:81638f0641649ec0787ef703694bd258aa844caa3af58541d00d9715f3de0d35,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726482306176330528,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-jg5wz,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: dbfd65d1-83cc-4b88-b475-c822c7d77c41,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"con
tainerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d9f00ee52087036fff363066f2b9b7bd54f75155cc73bd306533e31b8e233b0,PodSandboxId:f0a70a6b5b4fa0e23e9c8885ae050483438fd567740a20c3b0d6ee2a4c749755,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1726482304086493301,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-jhd4w,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 59d22048-5f6a-4373-a1c6-05
c111450f4a,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a4ff4f2e6c350478e5755d91b20d919662c0ffa06fb39e2bd3bf8bb0f703a1d2,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1726482280953449774,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa45fa1d889cdca320c55e35a415c7211e36ba0837572343c71988deaa5cd3ca,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef0019
58d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1726482248718684378,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:112e37da6f1b0a97970a925ea58a6c98930b8e2763205c618d9b497089866b3f,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc4
16abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1726482247152202557,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcd9404de3e14cb23faa22a3b3e25edf2e86172ce56a3bd91b341e6072351d08,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256
:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1726482246122565989,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26165c7625a62b7701736de7efb40460b0b4376f8e96e3bb63d744179813ade0,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metad
ata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1726482244231428730,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35e24c1abefe708668b04ab175e1ce63dcba533760564485612dc5c532078e4c,PodSandboxId:bf02d50932f1
47d4c08135b62337a685daad58e7ee619fe769c71cf464997dc9,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1726482242446910373,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db26d555-4e0f-4738-bd80-a27dc57d7534,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5edaf3e2dd3d6ea0a0beb93da0237f66936603e62e6799eb9d40f4bfced4fdc,Pod
SandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1726482240462952497,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:b8ebd2f050729c02f7f87eeb785883cafeb3d9537560352ff7a58f50b5630b1c,PodSandboxId:f375334740e2f500ee463bae436969a7464e2ab3cd1aa52a70e3a8f54f2ea408,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1726482238605647796,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15e8a432-87ee-461f-96ce-576b2587d960,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d52d2269e100287dd000b3139d6f09e70acad7cbe081d5ba345ffadc774eb64,PodSandboxId:6fe91ac2288fee7b74e48dab9ebacc7e6f4a0864d1442788fa1bb356cd32d99c,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726482237161655062,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-rls9n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 239fb0bb-898b-4d40-a29f-b0f8f4f52620,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termi
nationGracePeriod: 30,},},&Container{Id:54c4347a1fc2bc58f92b11d3bd442850d2707411d3f5b974124a86f641439b9c,PodSandboxId:d66b1317412a725710e3a74d876e0614ef523665d5d35e69bed8cd20d3e4c267,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726482236992762987,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-dk6l8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 117d6d47-b187-4394-8213-f294b01f8f2d,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0bde3324c47da870cb9ea5d5e391100b947a5f5628087ca822f0b968d5d7d10,PodSandboxId:0eef20d1c681337d85c6cbf7dbb64029a954a6306f78cde4d651b31122cd7f87,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726482204191942200,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-pv2sr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85f5bbdb-96af-4f7d-aef3-644db7166242,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f786c20ceffe35012b49cdfbc150466ebf04c05bab8d9736cd3977b8a655c898,PodSandboxId:ec33782f42717a00b375787866eb14bf4c1a21698df7f4343ab08f7eaff4773a,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726482204058066688,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-8nq94,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b65ff07-8e47-4c5a-883c-f6470e930f61,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,i
o.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d997d75b48ee4660710828d3ee8c19e7e3e7c5b64f4d69cba04ce949ed38433e,PodSandboxId:173b48ab2ab7ffb543810df5a983063619ab4a5c57ad2582cb89cca04eabcc03,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1726482196486082416,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-rj67m,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 18c201dd-1709-499b-83f1-7f76075b6c19,},A
nnotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0024bbca27aaca342e5d8f82c4da128027c7976c1e62ed95ae9cf694c9f92bba,PodSandboxId:8bcb0a4a20a5a867b58a2f64346327b7ee4bbc21423c8ef47418372635518a90,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726482194268567519,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-9hj9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 76382ab7-9b7a-4bd6-b19c-7a77ba051f1d,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e13f898473193beaaa81c09bb22096af279dabe70c03270874a90b0b9cc83f62,PodSandboxId:c90a44c7edea8c5d35e974be23b2851515f7b830d58597d0ada22367c338e1ab,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5d78bb8f226e8d943746243233f733db4e80a8d6794f6d193b12b811bcb6cd34,State:CONTAINER_RUNNING,CreatedAt:1726482187766689704,Labels:map[string]string{io.kubernetes.contai
ner.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-769b77f747-58ll2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 505d8619-5fc1-4247-af75-f797558c3d45,},Annotations:map[string]string{io.kubernetes.container.hash: 6472789b,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8193aad1beb5ba639149913d04736ec496c655637790c2e682d4920170661edc,PodSandboxId:f1a3772ce5f7d59b92df6e29b5a34b5ae5a2567c67f359cff00143e415177028,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]str
ing{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1726482169760510821,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10ccbaa1-333f-4586-a1d5-dc73421e2bd1,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20d2f3360f32056e9d5320e2e27b14e89f34121e47fe3ef6dc68434fd12cbe4e,PodSandboxId:748d363148f6690bdd4347da94f06b750dcd9dfc4c9051ba1fd2b1168589dce8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e3
8f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726482159684718183,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c435c6db-b60d-4298-9687-bb885202e358,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63d270cbed8d9138d7dfb6c94fecf8a928065d08296ec4b210284e9c80e343ce,PodSandboxId:42b8586a7b29a8b8056b5615562d2ec92c1cdfbb2b00da19cfca2dc059f9128a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e95
6d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726482158120421871,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-j5ndn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 207f35d6-991e-4f00-8881-a877648e3c38,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60269ac0552c1882216cfc166f51b2cc05e0c33fccab3cf44f6
f6ae889b5377c,PodSandboxId:2bf9dc368debd2d2b877480619135ca9093c981fce759082ec24f6592fb1de92,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726482154187294521,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-66flj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56e16daa-1626-4b83-a183-7b9ad90ea2d6,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aabe5cb48f97ae6d5205b827466f0ed18fcf88945331a69a0f69c36fdc69b84,PodSandboxId:f7aeaa
11c7f4c7baee6c9befcf5929af3394ee8b7430c47ca172d58476ae75a5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726482142846546597,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-001438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be5914cb158890ae06c5bfa7a9a1647e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d34a4e3596c2106b54d8a84abfb811c307cb2971374422f5c532a60e0cde3a3,PodSandboxId:8a68216be6dee09413e6f31b3a7e5d5ca7545ffd899368560ace1
f4086222cc9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726482142845031916,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-001438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6b0151eec9d570509290399a84fe5e0,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a4816dc33e761b9fdbc5948f09898492ce9a2dc128c281cf5085d61d7f1b237,PodSandboxId:ec134844260ab148c1de608f261069c5f8c5a6b0d90
9d7bbb7bff552dad3e112,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726482142832730800,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-001438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72c370391c8ab866a62dac47d3033ef8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfff5b2d379857237c75eff7ae9c54b1eaa410285bebed27cc82511c761eff77,PodSandboxId:81f095a38dae111be7d7787f448562b27561a6d73c96775e60b52e2cb77e
1d9f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726482142844185577,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-001438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2bdc5cbd50bfb60b2aad0cb9a55828e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a170b28e-5f3c-4769-bd30-292b87856a99 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:27:09 addons-001438 crio[662]: time="2024-09-16 10:27:09.983234149Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8eddecef-e44e-45b4-ae68-39e0393803c0 name=/runtime.v1.RuntimeService/Version
	Sep 16 10:27:09 addons-001438 crio[662]: time="2024-09-16 10:27:09.983309610Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8eddecef-e44e-45b4-ae68-39e0393803c0 name=/runtime.v1.RuntimeService/Version
	Sep 16 10:27:09 addons-001438 crio[662]: time="2024-09-16 10:27:09.985044650Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0a2b0b7a-e503-44f6-b326-238fa1ab2af8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:27:09 addons-001438 crio[662]: time="2024-09-16 10:27:09.986675775Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482429986646693,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:458879,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0a2b0b7a-e503-44f6-b326-238fa1ab2af8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:27:09 addons-001438 crio[662]: time="2024-09-16 10:27:09.987141342Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e889241b-865e-485d-aa1d-03c520b38ee2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:27:09 addons-001438 crio[662]: time="2024-09-16 10:27:09.987197998Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e889241b-865e-485d-aa1d-03c520b38ee2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:27:09 addons-001438 crio[662]: time="2024-09-16 10:27:09.987763915Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c0c62d19fc341b10ebf89fe58a47a68881bae991a531961d93c1579a9a1948e7,PodSandboxId:81638f0641649ec0787ef703694bd258aa844caa3af58541d00d9715f3de0d35,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726482306176330528,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-jg5wz,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: dbfd65d1-83cc-4b88-b475-c822c7d77c41,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"con
tainerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d9f00ee52087036fff363066f2b9b7bd54f75155cc73bd306533e31b8e233b0,PodSandboxId:f0a70a6b5b4fa0e23e9c8885ae050483438fd567740a20c3b0d6ee2a4c749755,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1726482304086493301,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-jhd4w,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 59d22048-5f6a-4373-a1c6-05
c111450f4a,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a4ff4f2e6c350478e5755d91b20d919662c0ffa06fb39e2bd3bf8bb0f703a1d2,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1726482280953449774,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa45fa1d889cdca320c55e35a415c7211e36ba0837572343c71988deaa5cd3ca,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef0019
58d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1726482248718684378,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:112e37da6f1b0a97970a925ea58a6c98930b8e2763205c618d9b497089866b3f,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc4
16abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1726482247152202557,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcd9404de3e14cb23faa22a3b3e25edf2e86172ce56a3bd91b341e6072351d08,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256
:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1726482246122565989,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26165c7625a62b7701736de7efb40460b0b4376f8e96e3bb63d744179813ade0,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metad
ata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1726482244231428730,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35e24c1abefe708668b04ab175e1ce63dcba533760564485612dc5c532078e4c,PodSandboxId:bf02d50932f1
47d4c08135b62337a685daad58e7ee619fe769c71cf464997dc9,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1726482242446910373,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db26d555-4e0f-4738-bd80-a27dc57d7534,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5edaf3e2dd3d6ea0a0beb93da0237f66936603e62e6799eb9d40f4bfced4fdc,Pod
SandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1726482240462952497,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:b8ebd2f050729c02f7f87eeb785883cafeb3d9537560352ff7a58f50b5630b1c,PodSandboxId:f375334740e2f500ee463bae436969a7464e2ab3cd1aa52a70e3a8f54f2ea408,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1726482238605647796,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15e8a432-87ee-461f-96ce-576b2587d960,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d52d2269e100287dd000b3139d6f09e70acad7cbe081d5ba345ffadc774eb64,PodSandboxId:6fe91ac2288fee7b74e48dab9ebacc7e6f4a0864d1442788fa1bb356cd32d99c,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726482237161655062,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-rls9n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 239fb0bb-898b-4d40-a29f-b0f8f4f52620,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termi
nationGracePeriod: 30,},},&Container{Id:54c4347a1fc2bc58f92b11d3bd442850d2707411d3f5b974124a86f641439b9c,PodSandboxId:d66b1317412a725710e3a74d876e0614ef523665d5d35e69bed8cd20d3e4c267,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726482236992762987,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-dk6l8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 117d6d47-b187-4394-8213-f294b01f8f2d,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0bde3324c47da870cb9ea5d5e391100b947a5f5628087ca822f0b968d5d7d10,PodSandboxId:0eef20d1c681337d85c6cbf7dbb64029a954a6306f78cde4d651b31122cd7f87,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726482204191942200,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-pv2sr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85f5bbdb-96af-4f7d-aef3-644db7166242,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f786c20ceffe35012b49cdfbc150466ebf04c05bab8d9736cd3977b8a655c898,PodSandboxId:ec33782f42717a00b375787866eb14bf4c1a21698df7f4343ab08f7eaff4773a,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726482204058066688,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-8nq94,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b65ff07-8e47-4c5a-883c-f6470e930f61,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,i
o.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d997d75b48ee4660710828d3ee8c19e7e3e7c5b64f4d69cba04ce949ed38433e,PodSandboxId:173b48ab2ab7ffb543810df5a983063619ab4a5c57ad2582cb89cca04eabcc03,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1726482196486082416,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-rj67m,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 18c201dd-1709-499b-83f1-7f76075b6c19,},A
nnotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0024bbca27aaca342e5d8f82c4da128027c7976c1e62ed95ae9cf694c9f92bba,PodSandboxId:8bcb0a4a20a5a867b58a2f64346327b7ee4bbc21423c8ef47418372635518a90,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726482194268567519,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-9hj9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 76382ab7-9b7a-4bd6-b19c-7a77ba051f1d,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e13f898473193beaaa81c09bb22096af279dabe70c03270874a90b0b9cc83f62,PodSandboxId:c90a44c7edea8c5d35e974be23b2851515f7b830d58597d0ada22367c338e1ab,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5d78bb8f226e8d943746243233f733db4e80a8d6794f6d193b12b811bcb6cd34,State:CONTAINER_RUNNING,CreatedAt:1726482187766689704,Labels:map[string]string{io.kubernetes.contai
ner.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-769b77f747-58ll2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 505d8619-5fc1-4247-af75-f797558c3d45,},Annotations:map[string]string{io.kubernetes.container.hash: 6472789b,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8193aad1beb5ba639149913d04736ec496c655637790c2e682d4920170661edc,PodSandboxId:f1a3772ce5f7d59b92df6e29b5a34b5ae5a2567c67f359cff00143e415177028,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]str
ing{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1726482169760510821,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10ccbaa1-333f-4586-a1d5-dc73421e2bd1,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20d2f3360f32056e9d5320e2e27b14e89f34121e47fe3ef6dc68434fd12cbe4e,PodSandboxId:748d363148f6690bdd4347da94f06b750dcd9dfc4c9051ba1fd2b1168589dce8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e3
8f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726482159684718183,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c435c6db-b60d-4298-9687-bb885202e358,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63d270cbed8d9138d7dfb6c94fecf8a928065d08296ec4b210284e9c80e343ce,PodSandboxId:42b8586a7b29a8b8056b5615562d2ec92c1cdfbb2b00da19cfca2dc059f9128a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e95
6d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726482158120421871,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-j5ndn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 207f35d6-991e-4f00-8881-a877648e3c38,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60269ac0552c1882216cfc166f51b2cc05e0c33fccab3cf44f6
f6ae889b5377c,PodSandboxId:2bf9dc368debd2d2b877480619135ca9093c981fce759082ec24f6592fb1de92,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726482154187294521,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-66flj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56e16daa-1626-4b83-a183-7b9ad90ea2d6,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aabe5cb48f97ae6d5205b827466f0ed18fcf88945331a69a0f69c36fdc69b84,PodSandboxId:f7aeaa
11c7f4c7baee6c9befcf5929af3394ee8b7430c47ca172d58476ae75a5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726482142846546597,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-001438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be5914cb158890ae06c5bfa7a9a1647e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d34a4e3596c2106b54d8a84abfb811c307cb2971374422f5c532a60e0cde3a3,PodSandboxId:8a68216be6dee09413e6f31b3a7e5d5ca7545ffd899368560ace1
f4086222cc9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726482142845031916,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-001438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6b0151eec9d570509290399a84fe5e0,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a4816dc33e761b9fdbc5948f09898492ce9a2dc128c281cf5085d61d7f1b237,PodSandboxId:ec134844260ab148c1de608f261069c5f8c5a6b0d90
9d7bbb7bff552dad3e112,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726482142832730800,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-001438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72c370391c8ab866a62dac47d3033ef8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfff5b2d379857237c75eff7ae9c54b1eaa410285bebed27cc82511c761eff77,PodSandboxId:81f095a38dae111be7d7787f448562b27561a6d73c96775e60b52e2cb77e
1d9f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726482142844185577,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-001438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2bdc5cbd50bfb60b2aad0cb9a55828e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e889241b-865e-485d-aa1d-03c520b38ee2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:27:10 addons-001438 crio[662]: time="2024-09-16 10:27:10.021790396Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0397d04b-9d2f-49c0-b862-df64ea0caf19 name=/runtime.v1.RuntimeService/Version
	Sep 16 10:27:10 addons-001438 crio[662]: time="2024-09-16 10:27:10.021865097Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0397d04b-9d2f-49c0-b862-df64ea0caf19 name=/runtime.v1.RuntimeService/Version
	Sep 16 10:27:10 addons-001438 crio[662]: time="2024-09-16 10:27:10.023077701Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=95f82477-3d79-406a-95a3-a8e39804332d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:27:10 addons-001438 crio[662]: time="2024-09-16 10:27:10.024544693Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482430024517747,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:458879,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=95f82477-3d79-406a-95a3-a8e39804332d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:27:10 addons-001438 crio[662]: time="2024-09-16 10:27:10.025157767Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b5e3f65a-5d6d-44c6-a08c-a95fe0035686 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:27:10 addons-001438 crio[662]: time="2024-09-16 10:27:10.025246406Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b5e3f65a-5d6d-44c6-a08c-a95fe0035686 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:27:10 addons-001438 crio[662]: time="2024-09-16 10:27:10.026612976Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c0c62d19fc341b10ebf89fe58a47a68881bae991a531961d93c1579a9a1948e7,PodSandboxId:81638f0641649ec0787ef703694bd258aa844caa3af58541d00d9715f3de0d35,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726482306176330528,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-jg5wz,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: dbfd65d1-83cc-4b88-b475-c822c7d77c41,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"con
tainerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d9f00ee52087036fff363066f2b9b7bd54f75155cc73bd306533e31b8e233b0,PodSandboxId:f0a70a6b5b4fa0e23e9c8885ae050483438fd567740a20c3b0d6ee2a4c749755,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1726482304086493301,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-jhd4w,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 59d22048-5f6a-4373-a1c6-05
c111450f4a,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a4ff4f2e6c350478e5755d91b20d919662c0ffa06fb39e2bd3bf8bb0f703a1d2,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1726482280953449774,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa45fa1d889cdca320c55e35a415c7211e36ba0837572343c71988deaa5cd3ca,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef0019
58d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1726482248718684378,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:112e37da6f1b0a97970a925ea58a6c98930b8e2763205c618d9b497089866b3f,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc4
16abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1726482247152202557,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcd9404de3e14cb23faa22a3b3e25edf2e86172ce56a3bd91b341e6072351d08,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256
:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1726482246122565989,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26165c7625a62b7701736de7efb40460b0b4376f8e96e3bb63d744179813ade0,PodSandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metad
ata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1726482244231428730,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35e24c1abefe708668b04ab175e1ce63dcba533760564485612dc5c532078e4c,PodSandboxId:bf02d50932f1
47d4c08135b62337a685daad58e7ee619fe769c71cf464997dc9,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1726482242446910373,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db26d555-4e0f-4738-bd80-a27dc57d7534,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5edaf3e2dd3d6ea0a0beb93da0237f66936603e62e6799eb9d40f4bfced4fdc,Pod
SandboxId:2a34d4424d09e2196b2fd0aa88e581854eeee612b59affcf31f432734c9c4d8f,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1726482240462952497,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-xgk62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd216434-c2ed-4884-92ea-f80bec8e2fcc,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:b8ebd2f050729c02f7f87eeb785883cafeb3d9537560352ff7a58f50b5630b1c,PodSandboxId:f375334740e2f500ee463bae436969a7464e2ab3cd1aa52a70e3a8f54f2ea408,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1726482238605647796,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15e8a432-87ee-461f-96ce-576b2587d960,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d52d2269e100287dd000b3139d6f09e70acad7cbe081d5ba345ffadc774eb64,PodSandboxId:6fe91ac2288fee7b74e48dab9ebacc7e6f4a0864d1442788fa1bb356cd32d99c,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726482237161655062,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-rls9n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 239fb0bb-898b-4d40-a29f-b0f8f4f52620,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termi
nationGracePeriod: 30,},},&Container{Id:54c4347a1fc2bc58f92b11d3bd442850d2707411d3f5b974124a86f641439b9c,PodSandboxId:d66b1317412a725710e3a74d876e0614ef523665d5d35e69bed8cd20d3e4c267,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726482236992762987,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-dk6l8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 117d6d47-b187-4394-8213-f294b01f8f2d,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0bde3324c47da870cb9ea5d5e391100b947a5f5628087ca822f0b968d5d7d10,PodSandboxId:0eef20d1c681337d85c6cbf7dbb64029a954a6306f78cde4d651b31122cd7f87,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726482204191942200,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-pv2sr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85f5bbdb-96af-4f7d-aef3-644db7166242,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f786c20ceffe35012b49cdfbc150466ebf04c05bab8d9736cd3977b8a655c898,PodSandboxId:ec33782f42717a00b375787866eb14bf4c1a21698df7f4343ab08f7eaff4773a,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726482204058066688,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-8nq94,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b65ff07-8e47-4c5a-883c-f6470e930f61,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,i
o.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d997d75b48ee4660710828d3ee8c19e7e3e7c5b64f4d69cba04ce949ed38433e,PodSandboxId:173b48ab2ab7ffb543810df5a983063619ab4a5c57ad2582cb89cca04eabcc03,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1726482196486082416,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-rj67m,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 18c201dd-1709-499b-83f1-7f76075b6c19,},A
nnotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0024bbca27aaca342e5d8f82c4da128027c7976c1e62ed95ae9cf694c9f92bba,PodSandboxId:8bcb0a4a20a5a867b58a2f64346327b7ee4bbc21423c8ef47418372635518a90,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726482194268567519,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-9hj9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 76382ab7-9b7a-4bd6-b19c-7a77ba051f1d,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e13f898473193beaaa81c09bb22096af279dabe70c03270874a90b0b9cc83f62,PodSandboxId:c90a44c7edea8c5d35e974be23b2851515f7b830d58597d0ada22367c338e1ab,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5d78bb8f226e8d943746243233f733db4e80a8d6794f6d193b12b811bcb6cd34,State:CONTAINER_RUNNING,CreatedAt:1726482187766689704,Labels:map[string]string{io.kubernetes.contai
ner.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-769b77f747-58ll2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 505d8619-5fc1-4247-af75-f797558c3d45,},Annotations:map[string]string{io.kubernetes.container.hash: 6472789b,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8193aad1beb5ba639149913d04736ec496c655637790c2e682d4920170661edc,PodSandboxId:f1a3772ce5f7d59b92df6e29b5a34b5ae5a2567c67f359cff00143e415177028,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]str
ing{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1726482169760510821,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10ccbaa1-333f-4586-a1d5-dc73421e2bd1,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20d2f3360f32056e9d5320e2e27b14e89f34121e47fe3ef6dc68434fd12cbe4e,PodSandboxId:748d363148f6690bdd4347da94f06b750dcd9dfc4c9051ba1fd2b1168589dce8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e3
8f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726482159684718183,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c435c6db-b60d-4298-9687-bb885202e358,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63d270cbed8d9138d7dfb6c94fecf8a928065d08296ec4b210284e9c80e343ce,PodSandboxId:42b8586a7b29a8b8056b5615562d2ec92c1cdfbb2b00da19cfca2dc059f9128a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e95
6d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726482158120421871,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-j5ndn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 207f35d6-991e-4f00-8881-a877648e3c38,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60269ac0552c1882216cfc166f51b2cc05e0c33fccab3cf44f6
f6ae889b5377c,PodSandboxId:2bf9dc368debd2d2b877480619135ca9093c981fce759082ec24f6592fb1de92,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726482154187294521,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-66flj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56e16daa-1626-4b83-a183-7b9ad90ea2d6,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aabe5cb48f97ae6d5205b827466f0ed18fcf88945331a69a0f69c36fdc69b84,PodSandboxId:f7aeaa
11c7f4c7baee6c9befcf5929af3394ee8b7430c47ca172d58476ae75a5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726482142846546597,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-001438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be5914cb158890ae06c5bfa7a9a1647e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d34a4e3596c2106b54d8a84abfb811c307cb2971374422f5c532a60e0cde3a3,PodSandboxId:8a68216be6dee09413e6f31b3a7e5d5ca7545ffd899368560ace1
f4086222cc9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726482142845031916,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-001438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6b0151eec9d570509290399a84fe5e0,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a4816dc33e761b9fdbc5948f09898492ce9a2dc128c281cf5085d61d7f1b237,PodSandboxId:ec134844260ab148c1de608f261069c5f8c5a6b0d90
9d7bbb7bff552dad3e112,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726482142832730800,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-001438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72c370391c8ab866a62dac47d3033ef8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfff5b2d379857237c75eff7ae9c54b1eaa410285bebed27cc82511c761eff77,PodSandboxId:81f095a38dae111be7d7787f448562b27561a6d73c96775e60b52e2cb77e
1d9f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726482142844185577,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-001438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2bdc5cbd50bfb60b2aad0cb9a55828e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b5e3f65a-5d6d-44c6-a08c-a95fe0035686 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	c0c62d19fc341       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                                 2 minutes ago       Running             gcp-auth                                 0                   81638f0641649       gcp-auth-89d5ffd79-jg5wz
	4d9f00ee52087       registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6                             2 minutes ago       Running             controller                               0                   f0a70a6b5b4fa       ingress-nginx-controller-bc57996ff-jhd4w
	a4ff4f2e6c350       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          2 minutes ago       Running             csi-snapshotter                          0                   2a34d4424d09e       csi-hostpathplugin-xgk62
	fa45fa1d889cd       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          3 minutes ago       Running             csi-provisioner                          0                   2a34d4424d09e       csi-hostpathplugin-xgk62
	112e37da6f1b0       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            3 minutes ago       Running             liveness-probe                           0                   2a34d4424d09e       csi-hostpathplugin-xgk62
	bcd9404de3e14       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           3 minutes ago       Running             hostpath                                 0                   2a34d4424d09e       csi-hostpathplugin-xgk62
	26165c7625a62       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                3 minutes ago       Running             node-driver-registrar                    0                   2a34d4424d09e       csi-hostpathplugin-xgk62
	35e24c1abefe7       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              3 minutes ago       Running             csi-resizer                              0                   bf02d50932f14       csi-hostpath-resizer-0
	a5edaf3e2dd3d       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   3 minutes ago       Running             csi-external-health-monitor-controller   0                   2a34d4424d09e       csi-hostpathplugin-xgk62
	b8ebd2f050729       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             3 minutes ago       Running             csi-attacher                             0                   f375334740e2f       csi-hostpath-attacher-0
	0d52d2269e100       ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242                                                                             3 minutes ago       Exited              patch                                    1                   6fe91ac2288fe       ingress-nginx-admission-patch-rls9n
	54c4347a1fc2b       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012                   3 minutes ago       Exited              create                                   0                   d66b1317412a7       ingress-nginx-admission-create-dk6l8
	f0bde3324c47d       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago       Running             volume-snapshot-controller               0                   0eef20d1c6813       snapshot-controller-56fcc65765-pv2sr
	f786c20ceffe3       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago       Running             volume-snapshot-controller               0                   ec33782f42717       snapshot-controller-56fcc65765-8nq94
	d997d75b48ee4       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             3 minutes ago       Running             local-path-provisioner                   0                   173b48ab2ab7f       local-path-provisioner-86d989889c-rj67m
	0024bbca27aac       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a                        3 minutes ago       Running             metrics-server                           0                   8bcb0a4a20a5a       metrics-server-84c5f94fbc-9hj9f
	e13f898473193       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc                               4 minutes ago       Running             cloud-spanner-emulator                   0                   c90a44c7edea8       cloud-spanner-emulator-769b77f747-58ll2
	8193aad1beb5b       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab                             4 minutes ago       Running             minikube-ingress-dns                     0                   f1a3772ce5f7d       kube-ingress-dns-minikube
	20d2f3360f320       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             4 minutes ago       Running             storage-provisioner                      0                   748d363148f66       storage-provisioner
	63d270cbed8d9       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                                             4 minutes ago       Running             coredns                                  0                   42b8586a7b29a       coredns-7c65d6cfc9-j5ndn
	60269ac0552c1       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                                             4 minutes ago       Running             kube-proxy                               0                   2bf9dc368debd       kube-proxy-66flj
	1aabe5cb48f97       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                                             4 minutes ago       Running             etcd                                     0                   f7aeaa11c7f4c       etcd-addons-001438
	2d34a4e3596c2       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                                             4 minutes ago       Running             kube-controller-manager                  0                   8a68216be6dee       kube-controller-manager-addons-001438
	bfff5b2d37985       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                                             4 minutes ago       Running             kube-apiserver                           0                   81f095a38dae1       kube-apiserver-addons-001438
	5a4816dc33e76       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                                             4 minutes ago       Running             kube-scheduler                           0                   ec134844260ab       kube-scheduler-addons-001438
	
	
	==> coredns [63d270cbed8d9138d7dfb6c94fecf8a928065d08296ec4b210284e9c80e343ce] <==
	[INFO] 127.0.0.1:32820 - 49588 "HINFO IN 5683833228926934535.5808779734602365342. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.027869673s
	[INFO] 10.244.0.7:47242 - 15842 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000350783s
	[INFO] 10.244.0.7:47242 - 29412 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000155576s
	[INFO] 10.244.0.7:51495 - 23321 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000115255s
	[INFO] 10.244.0.7:51495 - 47135 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000085371s
	[INFO] 10.244.0.7:40689 - 10301 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000114089s
	[INFO] 10.244.0.7:40689 - 30779 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00011843s
	[INFO] 10.244.0.7:53526 - 19539 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000127604s
	[INFO] 10.244.0.7:53526 - 34381 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000109337s
	[INFO] 10.244.0.7:39182 - 43658 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000075802s
	[INFO] 10.244.0.7:39182 - 55433 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000031766s
	[INFO] 10.244.0.7:52628 - 35000 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000037386s
	[INFO] 10.244.0.7:52628 - 44218 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000027585s
	[INFO] 10.244.0.7:47656 - 61837 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000028204s
	[INFO] 10.244.0.7:47656 - 39571 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000027731s
	[INFO] 10.244.0.7:53964 - 36235 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000098663s
	[INFO] 10.244.0.7:53964 - 55690 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000045022s
	[INFO] 10.244.0.22:49146 - 11336 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000543634s
	[INFO] 10.244.0.22:44900 - 51750 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000125434s
	[INFO] 10.244.0.22:47266 - 27362 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000158517s
	[INFO] 10.244.0.22:53077 - 63050 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000068888s
	[INFO] 10.244.0.22:52796 - 34381 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000101059s
	[INFO] 10.244.0.22:52167 - 15594 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000126468s
	[INFO] 10.244.0.22:42107 - 54869 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.004149176s
	[INFO] 10.244.0.22:60865 - 20616 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.006078154s
	
	
	==> describe nodes <==
	Name:               addons-001438
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-001438
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=addons-001438
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_22_28_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-001438
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-001438"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:22:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-001438
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:27:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:26:02 +0000   Mon, 16 Sep 2024 10:22:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:26:02 +0000   Mon, 16 Sep 2024 10:22:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:26:02 +0000   Mon, 16 Sep 2024 10:22:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:26:02 +0000   Mon, 16 Sep 2024 10:22:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.72
	  Hostname:    addons-001438
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 8b69a913a20a4259950d0bf801229c28
	  System UUID:                8b69a913-a20a-4259-950d-0bf801229c28
	  Boot ID:                    7d21de27-dd4e-4002-9fc0-df14a0ff761f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (19 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-769b77f747-58ll2     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m34s
	  gcp-auth                    gcp-auth-89d5ffd79-jg5wz                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m25s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-jhd4w    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m30s
	  kube-system                 coredns-7c65d6cfc9-j5ndn                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m37s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m28s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 csi-hostpathplugin-xgk62                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m28s
	  kube-system                 etcd-addons-001438                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m43s
	  kube-system                 kube-apiserver-addons-001438                250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 kube-controller-manager-addons-001438       200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m34s
	  kube-system                 kube-proxy-66flj                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 kube-scheduler-addons-001438                100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 metrics-server-84c5f94fbc-9hj9f             100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         4m32s
	  kube-system                 snapshot-controller-56fcc65765-8nq94        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 snapshot-controller-56fcc65765-pv2sr        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m33s
	  local-path-storage          local-path-provisioner-86d989889c-rj67m     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  yakd-dashboard              yakd-dashboard-67d98fc6b-jnpkm              0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     4m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             588Mi (15%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m34s  kube-proxy       
	  Normal  Starting                 4m43s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m43s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m42s  kubelet          Node addons-001438 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m42s  kubelet          Node addons-001438 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m42s  kubelet          Node addons-001438 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m41s  kubelet          Node addons-001438 status is now: NodeReady
	  Normal  RegisteredNode           4m38s  node-controller  Node addons-001438 event: Registered Node addons-001438 in Controller
	
	
	==> dmesg <==
	[  +0.270363] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +4.002627] systemd-fstab-generator[744]: Ignoring "noauto" option for root device
	[  +4.196359] systemd-fstab-generator[862]: Ignoring "noauto" option for root device
	[  +0.061696] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.999876] systemd-fstab-generator[1193]: Ignoring "noauto" option for root device
	[  +0.091472] kauditd_printk_skb: 69 callbacks suppressed
	[  +4.774952] systemd-fstab-generator[1325]: Ignoring "noauto" option for root device
	[  +1.497885] kauditd_printk_skb: 43 callbacks suppressed
	[  +5.466780] kauditd_printk_skb: 125 callbacks suppressed
	[  +5.018877] kauditd_printk_skb: 136 callbacks suppressed
	[  +5.254117] kauditd_printk_skb: 38 callbacks suppressed
	[Sep16 10:23] kauditd_printk_skb: 9 callbacks suppressed
	[ +17.876932] kauditd_printk_skb: 7 callbacks suppressed
	[ +33.888489] kauditd_printk_skb: 37 callbacks suppressed
	[Sep16 10:24] kauditd_printk_skb: 34 callbacks suppressed
	[  +6.263650] kauditd_printk_skb: 76 callbacks suppressed
	[ +48.109785] kauditd_printk_skb: 33 callbacks suppressed
	[Sep16 10:25] kauditd_printk_skb: 3 callbacks suppressed
	[  +7.297596] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.818881] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.121137] kauditd_printk_skb: 19 callbacks suppressed
	[ +29.616490] kauditd_printk_skb: 37 callbacks suppressed
	[Sep16 10:26] kauditd_printk_skb: 2 callbacks suppressed
	[  +8.276540] kauditd_printk_skb: 28 callbacks suppressed
	[Sep16 10:27] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [1aabe5cb48f97ae6d5205b827466f0ed18fcf88945331a69a0f69c36fdc69b84] <==
	{"level":"info","ts":"2024-09-16T10:25:01.423722Z","caller":"traceutil/trace.go:171","msg":"trace[1526018823] transaction","detail":"{read_only:false; response_revision:1249; number_of_response:1; }","duration":"284.258855ms","start":"2024-09-16T10:25:01.139452Z","end":"2024-09-16T10:25:01.423711Z","steps":["trace[1526018823] 'process raft request'  (duration: 284.165558ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:25:01.424593Z","caller":"traceutil/trace.go:171","msg":"trace[1620023283] linearizableReadLoop","detail":"{readStateIndex:1296; appliedIndex:1296; }","duration":"253.838283ms","start":"2024-09-16T10:25:01.170745Z","end":"2024-09-16T10:25:01.424583Z","steps":["trace[1620023283] 'read index received'  (duration: 253.835456ms)","trace[1620023283] 'applied index is now lower than readState.Index'  (duration: 2.263µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-16T10:25:01.424681Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"253.948565ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T10:25:01.424719Z","caller":"traceutil/trace.go:171","msg":"trace[1658095100] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1249; }","duration":"253.992891ms","start":"2024-09-16T10:25:01.170719Z","end":"2024-09-16T10:25:01.424712Z","steps":["trace[1658095100] 'agreement among raft nodes before linearized reading'  (duration: 253.933158ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:25:01.430878Z","caller":"traceutil/trace.go:171","msg":"trace[196824448] transaction","detail":"{read_only:false; response_revision:1250; number_of_response:1; }","duration":"219.615242ms","start":"2024-09-16T10:25:01.211190Z","end":"2024-09-16T10:25:01.430805Z","steps":["trace[196824448] 'process raft request'  (duration: 217.799649ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:25:01.432286Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"248.218738ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T10:25:01.432549Z","caller":"traceutil/trace.go:171","msg":"trace[1250515915] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1250; }","duration":"248.433899ms","start":"2024-09-16T10:25:01.183901Z","end":"2024-09-16T10:25:01.432335Z","steps":["trace[1250515915] 'agreement among raft nodes before linearized reading'  (duration: 246.789324ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:25:03.917472Z","caller":"traceutil/trace.go:171","msg":"trace[1132617141] linearizableReadLoop","detail":"{readStateIndex:1302; appliedIndex:1301; }","duration":"256.411132ms","start":"2024-09-16T10:25:03.661047Z","end":"2024-09-16T10:25:03.917458Z","steps":["trace[1132617141] 'read index received'  (duration: 256.216658ms)","trace[1132617141] 'applied index is now lower than readState.Index'  (duration: 193.873µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-16T10:25:03.917646Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"256.564415ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshots/\" range_end:\"/registry/snapshot.storage.k8s.io/volumesnapshots0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T10:25:03.917689Z","caller":"traceutil/trace.go:171","msg":"trace[1681803764] range","detail":"{range_begin:/registry/snapshot.storage.k8s.io/volumesnapshots/; range_end:/registry/snapshot.storage.k8s.io/volumesnapshots0; response_count:0; response_revision:1254; }","duration":"256.635309ms","start":"2024-09-16T10:25:03.661043Z","end":"2024-09-16T10:25:03.917678Z","steps":["trace[1681803764] 'agreement among raft nodes before linearized reading'  (duration: 256.524591ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:25:03.917698Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"246.498369ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T10:25:03.917721Z","caller":"traceutil/trace.go:171","msg":"trace[320039730] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1254; }","duration":"246.52737ms","start":"2024-09-16T10:25:03.671187Z","end":"2024-09-16T10:25:03.917715Z","steps":["trace[320039730] 'agreement among raft nodes before linearized reading'  (duration: 246.484981ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:25:03.917824Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"234.395252ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T10:25:03.917834Z","caller":"traceutil/trace.go:171","msg":"trace[699037525] transaction","detail":"{read_only:false; response_revision:1254; number_of_response:1; }","duration":"461.96825ms","start":"2024-09-16T10:25:03.455860Z","end":"2024-09-16T10:25:03.917828Z","steps":["trace[699037525] 'process raft request'  (duration: 461.454179ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:25:03.917838Z","caller":"traceutil/trace.go:171","msg":"trace[618256897] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1254; }","duration":"234.40851ms","start":"2024-09-16T10:25:03.683425Z","end":"2024-09-16T10:25:03.917833Z","steps":["trace[618256897] 'agreement among raft nodes before linearized reading'  (duration: 234.386479ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:25:03.917919Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-16T10:25:03.455845Z","time spent":"462.003063ms","remote":"127.0.0.1:51374","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1251 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-09-16T10:25:42.523876Z","caller":"traceutil/trace.go:171","msg":"trace[565706559] transaction","detail":"{read_only:false; response_revision:1399; number_of_response:1; }","duration":"393.956218ms","start":"2024-09-16T10:25:42.129887Z","end":"2024-09-16T10:25:42.523844Z","steps":["trace[565706559] 'process raft request'  (duration: 393.821788ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:25:42.524080Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-16T10:25:42.129864Z","time spent":"394.119545ms","remote":"127.0.0.1:51374","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1398 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-09-16T10:25:42.533976Z","caller":"traceutil/trace.go:171","msg":"trace[668376333] linearizableReadLoop","detail":"{readStateIndex:1459; appliedIndex:1458; }","duration":"302.69985ms","start":"2024-09-16T10:25:42.231262Z","end":"2024-09-16T10:25:42.533962Z","steps":["trace[668376333] 'read index received'  (duration: 293.491454ms)","trace[668376333] 'applied index is now lower than readState.Index'  (duration: 9.207628ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-16T10:25:42.535969Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"205.605451ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" ","response":"range_response_count:1 size:553"}
	{"level":"info","ts":"2024-09-16T10:25:42.536065Z","caller":"traceutil/trace.go:171","msg":"trace[19888550] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1400; }","duration":"205.726154ms","start":"2024-09-16T10:25:42.330329Z","end":"2024-09-16T10:25:42.536056Z","steps":["trace[19888550] 'agreement among raft nodes before linearized reading'  (duration: 205.527055ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:25:42.536191Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"304.924785ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T10:25:42.536244Z","caller":"traceutil/trace.go:171","msg":"trace[1740705082] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1400; }","duration":"304.971706ms","start":"2024-09-16T10:25:42.231257Z","end":"2024-09-16T10:25:42.536228Z","steps":["trace[1740705082] 'agreement among raft nodes before linearized reading'  (duration: 304.915956ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:25:42.537030Z","caller":"traceutil/trace.go:171","msg":"trace[778126279] transaction","detail":"{read_only:false; response_revision:1400; number_of_response:1; }","duration":"337.225123ms","start":"2024-09-16T10:25:42.199749Z","end":"2024-09-16T10:25:42.536974Z","steps":["trace[778126279] 'process raft request'  (duration: 333.931313ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:25:42.537228Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-16T10:25:42.199733Z","time spent":"337.391985ms","remote":"127.0.0.1:51498","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":540,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/addons-001438\" mod_revision:1384 > success:<request_put:<key:\"/registry/leases/kube-node-lease/addons-001438\" value_size:486 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/addons-001438\" > >"}
	
	
	==> gcp-auth [c0c62d19fc341b10ebf89fe58a47a68881bae991a531961d93c1579a9a1948e7] <==
	2024/09/16 10:25:06 GCP Auth Webhook started!
	2024/09/16 10:25:09 Ready to marshal response ...
	2024/09/16 10:25:09 Ready to write response ...
	2024/09/16 10:25:09 Ready to marshal response ...
	2024/09/16 10:25:09 Ready to write response ...
	2024/09/16 10:25:09 Ready to marshal response ...
	2024/09/16 10:25:09 Ready to write response ...
	
	
	==> kernel <==
	 10:27:10 up 5 min,  0 users,  load average: 0.65, 0.88, 0.46
	Linux addons-001438 5.10.207 #1 SMP Sun Sep 15 20:39:46 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [bfff5b2d379857237c75eff7ae9c54b1eaa410285bebed27cc82511c761eff77] <==
	I0916 10:22:40.932409       1 controller.go:615] quota admission added evaluator for: jobs.batch
	I0916 10:22:42.426039       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-attacher" clusterIPs={"IPv4":"10.106.146.100"}
	I0916 10:22:42.456409       1 controller.go:615] quota admission added evaluator for: statefulsets.apps
	I0916 10:22:42.660969       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.110.102.193"}
	I0916 10:22:44.945009       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.106.134.141"}
	W0916 10:23:38.948410       1 handler_proxy.go:99] no RequestInfo found in the context
	E0916 10:23:38.948711       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0916 10:23:38.949896       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0916 10:23:38.958493       1 handler_proxy.go:99] no RequestInfo found in the context
	E0916 10:23:38.958543       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0916 10:23:38.959752       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0916 10:24:18.395108       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.30.150:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.99.30.150:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.99.30.150:443: connect: connection refused" logger="UnhandledError"
	W0916 10:24:18.395300       1 handler_proxy.go:99] no RequestInfo found in the context
	E0916 10:24:18.395442       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0916 10:24:18.398244       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.30.150:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.99.30.150:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.99.30.150:443: connect: connection refused" logger="UnhandledError"
	I0916 10:24:18.453414       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0916 10:25:09.633337       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.110.80.80"}
	I0916 10:27:07.962789       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0916 10:27:08.990230       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	
	
	==> kube-controller-manager [2d34a4e3596c2106b54d8a84abfb811c307cb2971374422f5c532a60e0cde3a3] <==
	I0916 10:25:06.489287       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="42.711µs"
	I0916 10:25:07.863123       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-67d98fc6b" duration="72.138µs"
	I0916 10:25:09.687063       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="25.765664ms"
	E0916 10:25:09.687144       1 replica_set.go:560] "Unhandled Error" err="sync \"headlamp/headlamp-57fb76fcdb\" failed with pods \"headlamp-57fb76fcdb-\" is forbidden: error looking up service account headlamp/headlamp: serviceaccount \"headlamp\" not found" logger="UnhandledError"
	I0916 10:25:09.731163       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="42.235103ms"
	I0916 10:25:09.753608       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="22.282725ms"
	I0916 10:25:09.753862       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="122.927µs"
	I0916 10:25:09.762905       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="42.16µs"
	I0916 10:25:16.878158       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="16.26286ms"
	I0916 10:25:16.878254       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="50.754µs"
	I0916 10:25:19.390322       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="3.132µs"
	I0916 10:25:32.259505       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-001438"
	I0916 10:25:42.895965       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="3.388638ms"
	I0916 10:25:42.934221       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="14.56657ms"
	I0916 10:25:42.935951       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="80.433µs"
	I0916 10:25:50.249420       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="66.204µs"
	I0916 10:25:52.859393       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-67d98fc6b" duration="64.229µs"
	I0916 10:26:00.384466       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	I0916 10:26:02.877788       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-001438"
	I0916 10:26:05.861778       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-67d98fc6b" duration="51.109µs"
	I0916 10:27:00.169838       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/tiller-deploy-b48cc5f79" duration="5.547µs"
	I0916 10:27:04.861176       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-67d98fc6b" duration="105.111µs"
	E0916 10:27:08.992417       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 10:27:10.141337       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:27:10.141432       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [60269ac0552c1882216cfc166f51b2cc05e0c33fccab3cf44f6f6ae889b5377c] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0916 10:22:35.282699       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0916 10:22:35.409784       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.72"]
	E0916 10:22:35.409847       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:22:36.135283       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0916 10:22:36.135476       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0916 10:22:36.135545       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:22:36.146626       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:22:36.146849       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:22:36.146861       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:22:36.156579       1 config.go:199] "Starting service config controller"
	I0916 10:22:36.156604       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:22:36.166809       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:22:36.166838       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:22:36.168180       1 config.go:328] "Starting node config controller"
	I0916 10:22:36.168189       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:22:36.258515       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:22:36.268518       1 shared_informer.go:320] Caches are synced for node config
	I0916 10:22:36.268639       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [5a4816dc33e761b9fdbc5948f09898492ce9a2dc128c281cf5085d61d7f1b237] <==
	W0916 10:22:25.363221       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 10:22:25.363254       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:22:25.363389       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0916 10:22:25.363420       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 10:22:25.363573       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0916 10:22:25.363425       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:22:25.363533       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 10:22:25.363941       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:22:26.174422       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 10:22:26.174473       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:22:26.225213       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 10:22:26.225308       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:22:26.333904       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 10:22:26.333957       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:22:26.350221       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 10:22:26.350326       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:22:26.406843       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 10:22:26.406982       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:22:26.446248       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 10:22:26.446395       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:22:26.547116       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 10:22:26.547206       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:22:26.704254       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 10:22:26.704303       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0916 10:22:28.953769       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 10:27:08 addons-001438 kubelet[1200]: E0916 10:27:08.158094    1200 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482428157268845,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:458879,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:27:08 addons-001438 kubelet[1200]: E0916 10:27:08.158140    1200 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482428157268845,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:458879,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:27:08 addons-001438 kubelet[1200]: I0916 10:27:08.194534    1200 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a-bpffs\") pod \"fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a\" (UID: \"fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a\") "
	Sep 16 10:27:08 addons-001438 kubelet[1200]: I0916 10:27:08.194595    1200 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"modules\" (UniqueName: \"kubernetes.io/host-path/fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a-modules\") pod \"fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a\" (UID: \"fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a\") "
	Sep 16 10:27:08 addons-001438 kubelet[1200]: I0916 10:27:08.194612    1200 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"debugfs\" (UniqueName: \"kubernetes.io/host-path/fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a-debugfs\") pod \"fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a\" (UID: \"fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a\") "
	Sep 16 10:27:08 addons-001438 kubelet[1200]: I0916 10:27:08.194776    1200 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a-modules" (OuterVolumeSpecName: "modules") pod "fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a" (UID: "fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a"). InnerVolumeSpecName "modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 16 10:27:08 addons-001438 kubelet[1200]: I0916 10:27:08.194806    1200 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a-bpffs" (OuterVolumeSpecName: "bpffs") pod "fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a" (UID: "fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a"). InnerVolumeSpecName "bpffs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 16 10:27:08 addons-001438 kubelet[1200]: I0916 10:27:08.194818    1200 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a-debugfs" (OuterVolumeSpecName: "debugfs") pod "fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a" (UID: "fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a"). InnerVolumeSpecName "debugfs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 16 10:27:08 addons-001438 kubelet[1200]: I0916 10:27:08.194853    1200 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cgroup\" (UniqueName: \"kubernetes.io/host-path/fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a-cgroup\") pod \"fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a\" (UID: \"fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a\") "
	Sep 16 10:27:08 addons-001438 kubelet[1200]: I0916 10:27:08.194873    1200 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a-run\") pod \"fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a\" (UID: \"fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a\") "
	Sep 16 10:27:08 addons-001438 kubelet[1200]: I0916 10:27:08.194936    1200 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sg4vm\" (UniqueName: \"kubernetes.io/projected/fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a-kube-api-access-sg4vm\") pod \"fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a\" (UID: \"fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a\") "
	Sep 16 10:27:08 addons-001438 kubelet[1200]: I0916 10:27:08.194955    1200 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a-host\") pod \"fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a\" (UID: \"fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a\") "
	Sep 16 10:27:08 addons-001438 kubelet[1200]: I0916 10:27:08.195030    1200 reconciler_common.go:288] "Volume detached for volume \"modules\" (UniqueName: \"kubernetes.io/host-path/fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a-modules\") on node \"addons-001438\" DevicePath \"\""
	Sep 16 10:27:08 addons-001438 kubelet[1200]: I0916 10:27:08.195040    1200 reconciler_common.go:288] "Volume detached for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a-bpffs\") on node \"addons-001438\" DevicePath \"\""
	Sep 16 10:27:08 addons-001438 kubelet[1200]: I0916 10:27:08.195047    1200 reconciler_common.go:288] "Volume detached for volume \"debugfs\" (UniqueName: \"kubernetes.io/host-path/fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a-debugfs\") on node \"addons-001438\" DevicePath \"\""
	Sep 16 10:27:08 addons-001438 kubelet[1200]: I0916 10:27:08.195064    1200 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a-host" (OuterVolumeSpecName: "host") pod "fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a" (UID: "fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 16 10:27:08 addons-001438 kubelet[1200]: I0916 10:27:08.195081    1200 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a-cgroup" (OuterVolumeSpecName: "cgroup") pod "fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a" (UID: "fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a"). InnerVolumeSpecName "cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 16 10:27:08 addons-001438 kubelet[1200]: I0916 10:27:08.195094    1200 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a-run" (OuterVolumeSpecName: "run") pod "fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a" (UID: "fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 16 10:27:08 addons-001438 kubelet[1200]: I0916 10:27:08.201062    1200 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a-kube-api-access-sg4vm" (OuterVolumeSpecName: "kube-api-access-sg4vm") pod "fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a" (UID: "fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a"). InnerVolumeSpecName "kube-api-access-sg4vm". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 16 10:27:08 addons-001438 kubelet[1200]: I0916 10:27:08.295528    1200 reconciler_common.go:288] "Volume detached for volume \"cgroup\" (UniqueName: \"kubernetes.io/host-path/fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a-cgroup\") on node \"addons-001438\" DevicePath \"\""
	Sep 16 10:27:08 addons-001438 kubelet[1200]: I0916 10:27:08.295562    1200 reconciler_common.go:288] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a-run\") on node \"addons-001438\" DevicePath \"\""
	Sep 16 10:27:08 addons-001438 kubelet[1200]: I0916 10:27:08.295573    1200 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-sg4vm\" (UniqueName: \"kubernetes.io/projected/fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a-kube-api-access-sg4vm\") on node \"addons-001438\" DevicePath \"\""
	Sep 16 10:27:08 addons-001438 kubelet[1200]: I0916 10:27:08.295602    1200 reconciler_common.go:288] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a-host\") on node \"addons-001438\" DevicePath \"\""
	Sep 16 10:27:08 addons-001438 kubelet[1200]: I0916 10:27:08.448138    1200 scope.go:117] "RemoveContainer" containerID="44134363b5c5efe09ae29ae4c7261f5f57e95ad84b0df54d22fab5c1a3cc278f"
	Sep 16 10:27:09 addons-001438 kubelet[1200]: I0916 10:27:09.843635    1200 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a" path="/var/lib/kubelet/pods/fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a/volumes"
	
	
	==> storage-provisioner [20d2f3360f32056e9d5320e2e27b14e89f34121e47fe3ef6dc68434fd12cbe4e] <==
	I0916 10:22:41.307950       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 10:22:41.369058       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 10:22:41.369154       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 10:22:41.391597       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 10:22:41.391782       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-001438_7c863089-ab00-4bae-802b-9e04d7461e0b!
	I0916 10:22:41.394290       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"97b3cde4-08a8-47d7-a9cc-7251679ab4d1", APIVersion:"v1", ResourceVersion:"712", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-001438_7c863089-ab00-4bae-802b-9e04d7461e0b became leader
	I0916 10:22:41.492688       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-001438_7c863089-ab00-4bae-802b-9e04d7461e0b!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-001438 -n addons-001438
helpers_test.go:261: (dbg) Run:  kubectl --context addons-001438 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context addons-001438 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (379.883µs)
helpers_test.go:263: kubectl --context addons-001438 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestAddons/parallel/Yakd (122.28s)
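Every kubectl call in these failures exits with "fork/exec /usr/local/bin/kubectl: exec format error", which on Linux usually means the installed kubectl binary targets a different architecture than the host (or is truncated), not that the cluster itself is unhealthy. A minimal check on the CI agent, assuming standard GNU/Linux tooling (file, uname), would be:

  file /usr/local/bin/kubectl   # reports the architecture the binary was built for
  uname -m                      # host architecture; this agent reports x86_64

If the two disagree, swapping in an amd64 build of kubectl is the first fix to try.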

                                                
                                    
x
+
TestCertOptions (48.04s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-087952 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-087952 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (45.066279034s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-087952 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-087952 config view
cert_options_test.go:88: (dbg) Non-zero exit: kubectl --context cert-options-087952 config view: fork/exec /usr/local/bin/kubectl: exec format error (519.448µs)
cert_options_test.go:90: failed to get kubectl config. args "kubectl --context cert-options-087952 config view" : fork/exec /usr/local/bin/kubectl: exec format error
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = ""
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-087952 -- "sudo cat /etc/kubernetes/admin.conf"
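With the local kubectl unusable, the admin.conf fetched over SSH is the only remaining way to confirm that the requested --apiserver-port took effect. A minimal sketch of pulling just the server URL out of that file, assuming the standard kubeadm layout and reusing the profile name from this run, would be:

  out/minikube-linux-amd64 ssh -p cert-options-087952 -- "sudo grep 'server:' /etc/kubernetes/admin.conf"
  # a correctly configured control plane should advertise the custom port, e.g. ...:8555

The kubeadm output later in this log does show the control plane exposed on port 8555, so the apiserver options themselves appear to have been applied; only the local kubectl verification step failed.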
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-09-16 11:32:22.059075696 +0000 UTC m=+4256.282194194
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p cert-options-087952 -n cert-options-087952
helpers_test.go:244: <<< TestCertOptions FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestCertOptions]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-087952 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p cert-options-087952 logs -n 25: (1.04292476s)
helpers_test.go:252: TestCertOptions logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-957670 sudo find            | cilium-957670             | jenkins | v1.34.0 | 16 Sep 24 11:27 UTC |                     |
	|         | /etc/crio -type f -exec sh -c         |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                  |                           |         |         |                     |                     |
	| ssh     | -p cilium-957670 sudo crio            | cilium-957670             | jenkins | v1.34.0 | 16 Sep 24 11:27 UTC |                     |
	|         | config                                |                           |         |         |                     |                     |
	| delete  | -p cilium-957670                      | cilium-957670             | jenkins | v1.34.0 | 16 Sep 24 11:27 UTC | 16 Sep 24 11:27 UTC |
	| start   | -p kubernetes-upgrade-045794          | kubernetes-upgrade-045794 | jenkins | v1.34.0 | 16 Sep 24 11:27 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-668924                | NoKubernetes-668924       | jenkins | v1.34.0 | 16 Sep 24 11:28 UTC | 16 Sep 24 11:29 UTC |
	|         | --no-kubernetes --driver=kvm2         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p offline-crio-650886                | offline-crio-650886       | jenkins | v1.34.0 | 16 Sep 24 11:28 UTC | 16 Sep 24 11:28 UTC |
	| start   | -p cert-expiration-849615             | cert-expiration-849615    | jenkins | v1.34.0 | 16 Sep 24 11:28 UTC | 16 Sep 24 11:29 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p running-upgrade-682717             | running-upgrade-682717    | jenkins | v1.34.0 | 16 Sep 24 11:29 UTC | 16 Sep 24 11:30 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-668924                | NoKubernetes-668924       | jenkins | v1.34.0 | 16 Sep 24 11:29 UTC | 16 Sep 24 11:29 UTC |
	| start   | -p NoKubernetes-668924                | NoKubernetes-668924       | jenkins | v1.34.0 | 16 Sep 24 11:29 UTC | 16 Sep 24 11:30 UTC |
	|         | --no-kubernetes --driver=kvm2         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-668924 sudo           | NoKubernetes-668924       | jenkins | v1.34.0 | 16 Sep 24 11:30 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-668924                | NoKubernetes-668924       | jenkins | v1.34.0 | 16 Sep 24 11:30 UTC | 16 Sep 24 11:30 UTC |
	| start   | -p NoKubernetes-668924                | NoKubernetes-668924       | jenkins | v1.34.0 | 16 Sep 24 11:30 UTC | 16 Sep 24 11:30 UTC |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-682717             | running-upgrade-682717    | jenkins | v1.34.0 | 16 Sep 24 11:30 UTC | 16 Sep 24 11:30 UTC |
	| start   | -p force-systemd-flag-716028          | force-systemd-flag-716028 | jenkins | v1.34.0 | 16 Sep 24 11:30 UTC | 16 Sep 24 11:31 UTC |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-668924 sudo           | NoKubernetes-668924       | jenkins | v1.34.0 | 16 Sep 24 11:30 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-668924                | NoKubernetes-668924       | jenkins | v1.34.0 | 16 Sep 24 11:30 UTC | 16 Sep 24 11:30 UTC |
	| start   | -p stopped-upgrade-153123             | minikube                  | jenkins | v1.26.0 | 16 Sep 24 11:30 UTC | 16 Sep 24 11:32 UTC |
	|         | --memory=2200 --vm-driver=kvm2        |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-716028 ssh cat     | force-systemd-flag-716028 | jenkins | v1.34.0 | 16 Sep 24 11:31 UTC | 16 Sep 24 11:31 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-716028          | force-systemd-flag-716028 | jenkins | v1.34.0 | 16 Sep 24 11:31 UTC | 16 Sep 24 11:31 UTC |
	| start   | -p cert-options-087952                | cert-options-087952       | jenkins | v1.34.0 | 16 Sep 24 11:31 UTC | 16 Sep 24 11:32 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-153123 stop           | minikube                  | jenkins | v1.26.0 | 16 Sep 24 11:32 UTC | 16 Sep 24 11:32 UTC |
	| start   | -p stopped-upgrade-153123             | stopped-upgrade-153123    | jenkins | v1.34.0 | 16 Sep 24 11:32 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-087952 ssh               | cert-options-087952       | jenkins | v1.34.0 | 16 Sep 24 11:32 UTC | 16 Sep 24 11:32 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-087952 -- sudo        | cert-options-087952       | jenkins | v1.34.0 | 16 Sep 24 11:32 UTC | 16 Sep 24 11:32 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 11:32:18
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 11:32:18.235704   52334 out.go:345] Setting OutFile to fd 1 ...
	I0916 11:32:18.235815   52334 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:32:18.235821   52334 out.go:358] Setting ErrFile to fd 2...
	I0916 11:32:18.235824   52334 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:32:18.236023   52334 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3851/.minikube/bin
	I0916 11:32:18.236523   52334 out.go:352] Setting JSON to false
	I0916 11:32:18.237561   52334 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4488,"bootTime":1726481850,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 11:32:18.237664   52334 start.go:139] virtualization: kvm guest
	I0916 11:32:18.239634   52334 out.go:177] * [stopped-upgrade-153123] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 11:32:18.241089   52334 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 11:32:18.241147   52334 notify.go:220] Checking for updates...
	I0916 11:32:18.243406   52334 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 11:32:18.244575   52334 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3851/kubeconfig
	I0916 11:32:18.245891   52334 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 11:32:18.247244   52334 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 11:32:18.248413   52334 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 11:32:18.249998   52334 config.go:182] Loaded profile config "stopped-upgrade-153123": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0916 11:32:18.250429   52334 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 11:32:18.250497   52334 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 11:32:18.265937   52334 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36411
	I0916 11:32:18.266442   52334 main.go:141] libmachine: () Calling .GetVersion
	I0916 11:32:18.267047   52334 main.go:141] libmachine: Using API Version  1
	I0916 11:32:18.267071   52334 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 11:32:18.267416   52334 main.go:141] libmachine: () Calling .GetMachineName
	I0916 11:32:18.267669   52334 main.go:141] libmachine: (stopped-upgrade-153123) Calling .DriverName
	I0916 11:32:18.269256   52334 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0916 11:32:18.270354   52334 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 11:32:18.270699   52334 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 11:32:18.270738   52334 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 11:32:18.285656   52334 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42983
	I0916 11:32:18.286121   52334 main.go:141] libmachine: () Calling .GetVersion
	I0916 11:32:18.286629   52334 main.go:141] libmachine: Using API Version  1
	I0916 11:32:18.286657   52334 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 11:32:18.287012   52334 main.go:141] libmachine: () Calling .GetMachineName
	I0916 11:32:18.287217   52334 main.go:141] libmachine: (stopped-upgrade-153123) Calling .DriverName
	I0916 11:32:18.323498   52334 out.go:177] * Using the kvm2 driver based on existing profile
	I0916 11:32:18.324863   52334 start.go:297] selected driver: kvm2
	I0916 11:32:18.324880   52334 start.go:901] validating driver "kvm2" against &{Name:stopped-upgrade-153123 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-153123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.40 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0916 11:32:18.324988   52334 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 11:32:18.325690   52334 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:32:18.325761   52334 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19651-3851/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0916 11:32:18.341574   52334 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0916 11:32:18.342033   52334 cni.go:84] Creating CNI manager for ""
	I0916 11:32:18.342086   52334 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0916 11:32:18.342149   52334 start.go:340] cluster config:
	{Name:stopped-upgrade-153123 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-153123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.40 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0916 11:32:18.342267   52334 iso.go:125] acquiring lock: {Name:mk8165b793e44378487d96b0de120a258f46e187 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:32:18.344086   52334 out.go:177] * Starting "stopped-upgrade-153123" primary control-plane node in "stopped-upgrade-153123" cluster
	I0916 11:32:18.345412   52334 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime crio
	I0916 11:32:18.345465   52334 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4
	I0916 11:32:18.345481   52334 cache.go:56] Caching tarball of preloaded images
	I0916 11:32:18.345573   52334 preload.go:172] Found /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 11:32:18.345584   52334 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on crio
	I0916 11:32:18.345698   52334 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/stopped-upgrade-153123/config.json ...
	I0916 11:32:18.345924   52334 start.go:360] acquireMachinesLock for stopped-upgrade-153123: {Name:mk413037138532b69f77e412c3960796c09316f1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 11:32:18.345986   52334 start.go:364] duration metric: took 40.843µs to acquireMachinesLock for "stopped-upgrade-153123"
	I0916 11:32:18.346002   52334 start.go:96] Skipping create...Using existing machine configuration
	I0916 11:32:18.346010   52334 fix.go:54] fixHost starting: 
	I0916 11:32:18.346343   52334 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 11:32:18.346382   52334 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 11:32:18.361102   52334 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39175
	I0916 11:32:18.361527   52334 main.go:141] libmachine: () Calling .GetVersion
	I0916 11:32:18.362013   52334 main.go:141] libmachine: Using API Version  1
	I0916 11:32:18.362035   52334 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 11:32:18.362342   52334 main.go:141] libmachine: () Calling .GetMachineName
	I0916 11:32:18.362513   52334 main.go:141] libmachine: (stopped-upgrade-153123) Calling .DriverName
	I0916 11:32:18.362657   52334 main.go:141] libmachine: (stopped-upgrade-153123) Calling .GetState
	I0916 11:32:18.364625   52334 fix.go:112] recreateIfNeeded on stopped-upgrade-153123: state=Stopped err=<nil>
	I0916 11:32:18.364665   52334 main.go:141] libmachine: (stopped-upgrade-153123) Calling .DriverName
	W0916 11:32:18.364893   52334 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 11:32:18.366741   52334 out.go:177] * Restarting existing kvm2 VM for "stopped-upgrade-153123" ...
	I0916 11:32:20.084582   51898 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 11:32:20.084648   51898 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 11:32:20.084760   51898 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 11:32:20.084871   51898 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 11:32:20.085014   51898 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 11:32:20.085102   51898 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 11:32:20.086935   51898 out.go:235]   - Generating certificates and keys ...
	I0916 11:32:20.087041   51898 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 11:32:20.087117   51898 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 11:32:20.087221   51898 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 11:32:20.087294   51898 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 11:32:20.087365   51898 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 11:32:20.087434   51898 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 11:32:20.087500   51898 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 11:32:20.087655   51898 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [cert-options-087952 localhost] and IPs [192.168.83.250 127.0.0.1 ::1]
	I0916 11:32:20.087722   51898 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 11:32:20.087873   51898 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [cert-options-087952 localhost] and IPs [192.168.83.250 127.0.0.1 ::1]
	I0916 11:32:20.087964   51898 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 11:32:20.088063   51898 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 11:32:20.088122   51898 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 11:32:20.088184   51898 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 11:32:20.088249   51898 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 11:32:20.088321   51898 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 11:32:20.088397   51898 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 11:32:20.088508   51898 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 11:32:20.088578   51898 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 11:32:20.088684   51898 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 11:32:20.088755   51898 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 11:32:20.090324   51898 out.go:235]   - Booting up control plane ...
	I0916 11:32:20.090435   51898 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 11:32:20.090556   51898 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 11:32:20.090649   51898 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 11:32:20.090790   51898 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 11:32:20.090897   51898 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 11:32:20.090949   51898 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 11:32:20.091112   51898 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 11:32:20.091242   51898 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 11:32:20.091309   51898 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002175815s
	I0916 11:32:20.091380   51898 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 11:32:20.091453   51898 kubeadm.go:310] [api-check] The API server is healthy after 5.00208789s
	I0916 11:32:20.091588   51898 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 11:32:20.091710   51898 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 11:32:20.091756   51898 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 11:32:20.091996   51898 kubeadm.go:310] [mark-control-plane] Marking the node cert-options-087952 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 11:32:20.092072   51898 kubeadm.go:310] [bootstrap-token] Using token: gst1um.9ux4ihgg8wmwce7t
	I0916 11:32:20.093723   51898 out.go:235]   - Configuring RBAC rules ...
	I0916 11:32:20.093860   51898 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 11:32:20.093935   51898 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 11:32:20.094085   51898 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 11:32:20.094262   51898 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 11:32:20.094394   51898 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 11:32:20.094461   51898 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 11:32:20.094589   51898 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 11:32:20.094645   51898 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 11:32:20.094684   51898 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 11:32:20.094687   51898 kubeadm.go:310] 
	I0916 11:32:20.094742   51898 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 11:32:20.094750   51898 kubeadm.go:310] 
	I0916 11:32:20.094823   51898 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 11:32:20.094825   51898 kubeadm.go:310] 
	I0916 11:32:20.094846   51898 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 11:32:20.094897   51898 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 11:32:20.094938   51898 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 11:32:20.094940   51898 kubeadm.go:310] 
	I0916 11:32:20.094983   51898 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 11:32:20.094986   51898 kubeadm.go:310] 
	I0916 11:32:20.095027   51898 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 11:32:20.095030   51898 kubeadm.go:310] 
	I0916 11:32:20.095071   51898 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 11:32:20.095131   51898 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 11:32:20.095185   51898 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 11:32:20.095187   51898 kubeadm.go:310] 
	I0916 11:32:20.095255   51898 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 11:32:20.095356   51898 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 11:32:20.095367   51898 kubeadm.go:310] 
	I0916 11:32:20.095484   51898 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8555 --token gst1um.9ux4ihgg8wmwce7t \
	I0916 11:32:20.095638   51898 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:18e2e9ba1c06caae6264fad234e64aee68efc10986a3ab0ed9e448768962daf7 \
	I0916 11:32:20.095656   51898 kubeadm.go:310] 	--control-plane 
	I0916 11:32:20.095659   51898 kubeadm.go:310] 
	I0916 11:32:20.095727   51898 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 11:32:20.095730   51898 kubeadm.go:310] 
	I0916 11:32:20.095814   51898 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8555 --token gst1um.9ux4ihgg8wmwce7t \
	I0916 11:32:20.095934   51898 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:18e2e9ba1c06caae6264fad234e64aee68efc10986a3ab0ed9e448768962daf7 
	I0916 11:32:20.095940   51898 cni.go:84] Creating CNI manager for ""
	I0916 11:32:20.095947   51898 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0916 11:32:20.097517   51898 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0916 11:32:20.098850   51898 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0916 11:32:20.114057   51898 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0916 11:32:20.138891   51898 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 11:32:20.138935   51898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:32:20.138992   51898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes cert-options-087952 minikube.k8s.io/updated_at=2024_09_16T11_32_20_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=cert-options-087952 minikube.k8s.io/primary=true
	I0916 11:32:20.429288   51898 ops.go:34] apiserver oom_adj: -16
	I0916 11:32:20.429335   51898 kubeadm.go:1113] duration metric: took 290.463834ms to wait for elevateKubeSystemPrivileges
	I0916 11:32:20.429357   51898 kubeadm.go:394] duration metric: took 10.686889605s to StartCluster
	I0916 11:32:20.429376   51898 settings.go:142] acquiring lock: {Name:mka3093e7066e81538e0a2685a28fdafde8d951b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:32:20.429456   51898 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3851/kubeconfig
	I0916 11:32:20.430643   51898 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/kubeconfig: {Name:mkd6460249badead51f07eabf604da45528f6a24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:32:20.430947   51898 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 11:32:20.430937   51898 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.83.250 Port:8555 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 11:32:20.430994   51898 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 11:32:20.431082   51898 addons.go:69] Setting storage-provisioner=true in profile "cert-options-087952"
	I0916 11:32:20.431102   51898 addons.go:234] Setting addon storage-provisioner=true in "cert-options-087952"
	I0916 11:32:20.431099   51898 addons.go:69] Setting default-storageclass=true in profile "cert-options-087952"
	I0916 11:32:20.431130   51898 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "cert-options-087952"
	I0916 11:32:20.431136   51898 host.go:66] Checking if "cert-options-087952" exists ...
	I0916 11:32:20.431197   51898 config.go:182] Loaded profile config "cert-options-087952": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:32:20.431541   51898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 11:32:20.431578   51898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 11:32:20.431579   51898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 11:32:20.431624   51898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 11:32:20.432454   51898 out.go:177] * Verifying Kubernetes components...
	I0916 11:32:20.434089   51898 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:32:20.448568   51898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46867
	I0916 11:32:20.449053   51898 main.go:141] libmachine: () Calling .GetVersion
	I0916 11:32:20.449582   51898 main.go:141] libmachine: Using API Version  1
	I0916 11:32:20.449594   51898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 11:32:20.449980   51898 main.go:141] libmachine: () Calling .GetMachineName
	I0916 11:32:20.450187   51898 main.go:141] libmachine: (cert-options-087952) Calling .GetState
	I0916 11:32:20.452234   51898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45367
	I0916 11:32:20.452578   51898 main.go:141] libmachine: () Calling .GetVersion
	I0916 11:32:20.453005   51898 main.go:141] libmachine: Using API Version  1
	I0916 11:32:20.453015   51898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 11:32:20.453378   51898 main.go:141] libmachine: () Calling .GetMachineName
	I0916 11:32:20.453605   51898 addons.go:234] Setting addon default-storageclass=true in "cert-options-087952"
	I0916 11:32:20.453638   51898 host.go:66] Checking if "cert-options-087952" exists ...
	I0916 11:32:20.453924   51898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 11:32:20.453951   51898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 11:32:20.453984   51898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 11:32:20.454018   51898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 11:32:20.469654   51898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36389
	I0916 11:32:20.470057   51898 main.go:141] libmachine: () Calling .GetVersion
	I0916 11:32:20.470487   51898 main.go:141] libmachine: Using API Version  1
	I0916 11:32:20.470502   51898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 11:32:20.470929   51898 main.go:141] libmachine: () Calling .GetMachineName
	I0916 11:32:20.471144   51898 main.go:141] libmachine: (cert-options-087952) Calling .GetState
	I0916 11:32:20.473037   51898 main.go:141] libmachine: (cert-options-087952) Calling .DriverName
	I0916 11:32:20.473600   51898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42703
	I0916 11:32:20.474068   51898 main.go:141] libmachine: () Calling .GetVersion
	I0916 11:32:20.474629   51898 main.go:141] libmachine: Using API Version  1
	I0916 11:32:20.474642   51898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 11:32:20.474947   51898 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:32:20.475137   51898 main.go:141] libmachine: () Calling .GetMachineName
	I0916 11:32:20.475755   51898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 11:32:20.475785   51898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 11:32:20.477120   51898 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:32:20.477159   51898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 11:32:20.477180   51898 main.go:141] libmachine: (cert-options-087952) Calling .GetSSHHostname
	I0916 11:32:20.481458   51898 main.go:141] libmachine: (cert-options-087952) DBG | domain cert-options-087952 has defined MAC address 52:54:00:19:ca:5f in network mk-cert-options-087952
	I0916 11:32:20.481899   51898 main.go:141] libmachine: (cert-options-087952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:ca:5f", ip: ""} in network mk-cert-options-087952: {Iface:virbr3 ExpiryTime:2024-09-16 12:31:51 +0000 UTC Type:0 Mac:52:54:00:19:ca:5f Iaid: IPaddr:192.168.83.250 Prefix:24 Hostname:cert-options-087952 Clientid:01:52:54:00:19:ca:5f}
	I0916 11:32:20.481973   51898 main.go:141] libmachine: (cert-options-087952) DBG | domain cert-options-087952 has defined IP address 192.168.83.250 and MAC address 52:54:00:19:ca:5f in network mk-cert-options-087952
	I0916 11:32:20.482110   51898 main.go:141] libmachine: (cert-options-087952) Calling .GetSSHPort
	I0916 11:32:20.482296   51898 main.go:141] libmachine: (cert-options-087952) Calling .GetSSHKeyPath
	I0916 11:32:20.482421   51898 main.go:141] libmachine: (cert-options-087952) Calling .GetSSHUsername
	I0916 11:32:20.482576   51898 sshutil.go:53] new ssh client: &{IP:192.168.83.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/cert-options-087952/id_rsa Username:docker}
	I0916 11:32:20.493512   51898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38767
	I0916 11:32:20.494182   51898 main.go:141] libmachine: () Calling .GetVersion
	I0916 11:32:20.494871   51898 main.go:141] libmachine: Using API Version  1
	I0916 11:32:20.494906   51898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 11:32:20.495401   51898 main.go:141] libmachine: () Calling .GetMachineName
	I0916 11:32:20.495588   51898 main.go:141] libmachine: (cert-options-087952) Calling .GetState
	I0916 11:32:20.497715   51898 main.go:141] libmachine: (cert-options-087952) Calling .DriverName
	I0916 11:32:20.497945   51898 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 11:32:20.497956   51898 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 11:32:20.497974   51898 main.go:141] libmachine: (cert-options-087952) Calling .GetSSHHostname
	I0916 11:32:20.501358   51898 main.go:141] libmachine: (cert-options-087952) DBG | domain cert-options-087952 has defined MAC address 52:54:00:19:ca:5f in network mk-cert-options-087952
	I0916 11:32:20.501720   51898 main.go:141] libmachine: (cert-options-087952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:ca:5f", ip: ""} in network mk-cert-options-087952: {Iface:virbr3 ExpiryTime:2024-09-16 12:31:51 +0000 UTC Type:0 Mac:52:54:00:19:ca:5f Iaid: IPaddr:192.168.83.250 Prefix:24 Hostname:cert-options-087952 Clientid:01:52:54:00:19:ca:5f}
	I0916 11:32:20.501735   51898 main.go:141] libmachine: (cert-options-087952) DBG | domain cert-options-087952 has defined IP address 192.168.83.250 and MAC address 52:54:00:19:ca:5f in network mk-cert-options-087952
	I0916 11:32:20.501869   51898 main.go:141] libmachine: (cert-options-087952) Calling .GetSSHPort
	I0916 11:32:20.502056   51898 main.go:141] libmachine: (cert-options-087952) Calling .GetSSHKeyPath
	I0916 11:32:20.502199   51898 main.go:141] libmachine: (cert-options-087952) Calling .GetSSHUsername
	I0916 11:32:20.502307   51898 sshutil.go:53] new ssh client: &{IP:192.168.83.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/cert-options-087952/id_rsa Username:docker}
	I0916 11:32:20.702472   51898 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:32:20.702512   51898 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.83.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 11:32:20.860829   51898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 11:32:20.979173   51898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:32:21.120302   51898 start.go:971] {"host.minikube.internal": 192.168.83.1} host record injected into CoreDNS's ConfigMap
	I0916 11:32:21.120451   51898 main.go:141] libmachine: Making call to close driver server
	I0916 11:32:21.120465   51898 main.go:141] libmachine: (cert-options-087952) Calling .Close
	I0916 11:32:21.120786   51898 main.go:141] libmachine: Successfully made call to close driver server
	I0916 11:32:21.120795   51898 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 11:32:21.120814   51898 main.go:141] libmachine: Making call to close driver server
	I0916 11:32:21.120821   51898 main.go:141] libmachine: (cert-options-087952) Calling .Close
	I0916 11:32:21.121196   51898 main.go:141] libmachine: (cert-options-087952) DBG | Closing plugin on server side
	I0916 11:32:21.121210   51898 main.go:141] libmachine: Successfully made call to close driver server
	I0916 11:32:21.121232   51898 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 11:32:21.121502   51898 api_server.go:52] waiting for apiserver process to appear ...
	I0916 11:32:21.121541   51898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 11:32:21.140479   51898 main.go:141] libmachine: Making call to close driver server
	I0916 11:32:21.140495   51898 main.go:141] libmachine: (cert-options-087952) Calling .Close
	I0916 11:32:21.140786   51898 main.go:141] libmachine: Successfully made call to close driver server
	I0916 11:32:21.140799   51898 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 11:32:21.462903   51898 api_server.go:72] duration metric: took 1.031936485s to wait for apiserver process to appear ...
	I0916 11:32:21.462920   51898 api_server.go:88] waiting for apiserver healthz status ...
	I0916 11:32:21.462941   51898 api_server.go:253] Checking apiserver healthz at https://192.168.83.250:8555/healthz ...
	I0916 11:32:21.463155   51898 main.go:141] libmachine: Making call to close driver server
	I0916 11:32:21.463170   51898 main.go:141] libmachine: (cert-options-087952) Calling .Close
	I0916 11:32:21.463450   51898 main.go:141] libmachine: (cert-options-087952) DBG | Closing plugin on server side
	I0916 11:32:21.463480   51898 main.go:141] libmachine: Successfully made call to close driver server
	I0916 11:32:21.463485   51898 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 11:32:21.463497   51898 main.go:141] libmachine: Making call to close driver server
	I0916 11:32:21.463504   51898 main.go:141] libmachine: (cert-options-087952) Calling .Close
	I0916 11:32:21.463750   51898 main.go:141] libmachine: (cert-options-087952) DBG | Closing plugin on server side
	I0916 11:32:21.463804   51898 main.go:141] libmachine: Successfully made call to close driver server
	I0916 11:32:21.463813   51898 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 11:32:21.465797   51898 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0916 11:32:21.467240   51898 addons.go:510] duration metric: took 1.036251376s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0916 11:32:21.471053   51898 api_server.go:279] https://192.168.83.250:8555/healthz returned 200:
	ok
	I0916 11:32:21.472157   51898 api_server.go:141] control plane version: v1.31.1
	I0916 11:32:21.472171   51898 api_server.go:131] duration metric: took 9.245378ms to wait for apiserver health ...
	I0916 11:32:21.472179   51898 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 11:32:21.479635   51898 system_pods.go:59] 5 kube-system pods found
	I0916 11:32:21.479658   51898 system_pods.go:61] "etcd-cert-options-087952" [6ce7001b-97c8-4e39-94f1-479caa360277] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0916 11:32:21.479667   51898 system_pods.go:61] "kube-apiserver-cert-options-087952" [46bccd35-08ce-4516-95a9-7e27158de817] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0916 11:32:21.479675   51898 system_pods.go:61] "kube-controller-manager-cert-options-087952" [19ec9fd8-ddd1-422b-a768-3ffbe4711b9b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0916 11:32:21.479682   51898 system_pods.go:61] "kube-scheduler-cert-options-087952" [c7970004-ce62-4de1-bbca-0ce2cdc9738f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0916 11:32:21.479688   51898 system_pods.go:61] "storage-provisioner" [e64de385-03b3-426f-903a-694d8f0d4776] Pending
	I0916 11:32:21.479695   51898 system_pods.go:74] duration metric: took 7.510948ms to wait for pod list to return data ...
	I0916 11:32:21.479705   51898 kubeadm.go:582] duration metric: took 1.048744319s to wait for: map[apiserver:true system_pods:true]
	I0916 11:32:21.479718   51898 node_conditions.go:102] verifying NodePressure condition ...
	I0916 11:32:21.484362   51898 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0916 11:32:21.484378   51898 node_conditions.go:123] node cpu capacity is 2
	I0916 11:32:21.484388   51898 node_conditions.go:105] duration metric: took 4.666557ms to run NodePressure ...
	I0916 11:32:21.484400   51898 start.go:241] waiting for startup goroutines ...
	I0916 11:32:21.624507   51898 kapi.go:214] "coredns" deployment in "kube-system" namespace and "cert-options-087952" context rescaled to 1 replicas
	I0916 11:32:21.624539   51898 start.go:246] waiting for cluster config update ...
	I0916 11:32:21.624554   51898 start.go:255] writing updated cluster config ...
	I0916 11:32:21.624860   51898 ssh_runner.go:195] Run: rm -f paused
	I0916 11:32:21.632527   51898 out.go:177] * Done! kubectl is now configured to use "cert-options-087952" cluster and "default" namespace by default
	E0916 11:32:21.633916   51898 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
	
	
	==> CRI-O <==
	Sep 16 11:32:22 cert-options-087952 crio[663]: time="2024-09-16 11:32:22.709183789Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726486342709156667,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=13303706-5b5e-488e-b601-8f42b35a0566 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 11:32:22 cert-options-087952 crio[663]: time="2024-09-16 11:32:22.709772877Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=edc6ccf9-f6b4-4508-9817-bda544287971 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:32:22 cert-options-087952 crio[663]: time="2024-09-16 11:32:22.709847197Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=edc6ccf9-f6b4-4508-9817-bda544287971 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:32:22 cert-options-087952 crio[663]: time="2024-09-16 11:32:22.709991094Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:50220a6920a1888e795289139cc4f36fd48984eaf11c8fedf4a660a911ab5643,PodSandboxId:6ffa5c9c8948c484e9d3b37ac7ef20eccc48316be276c299e14ddf59c8149b4b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726486333886038260,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-cert-options-087952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 782907242aca52a5f3e35aa45aa79a51,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbade9c30664eb233ba04332f311239acac4caab9a733abcd5fc5d6bb6736c6f,PodSandboxId:e87151b5a7d8a329b87c8a5bd2c6cf96ae531a758fadea244ad96dd14c85e84f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726486333906797800,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-cert-options-087952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be7f4b957a29719ec6167f38699c2557,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e335143c533ba070530483a0098e1012cb8a94047804cb15adaaefa772f2405c,PodSandboxId:72cfc3122c311e29583beaaf0d943662b7b5bd61273c2ed0b7cd9e19be13855e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726486333818484160,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-cert-options-087952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0ed0905812b7b4134b63b0e50ef84cf,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0e0a62de94a6ace2a37256225c6e3ca512477e2e92c1d48eae65d10ea66d07f,PodSandboxId:138b63965d71038dd0c83138e8cedcdc3f56411abf1262e7122ab77b6d7e0b9b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726486333839608848,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-cert-options-087952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df176f7d5d10e1e82e788017af9bbe7b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=edc6ccf9-f6b4-4508-9817-bda544287971 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:32:22 cert-options-087952 crio[663]: time="2024-09-16 11:32:22.756164490Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c30e81f2-0b44-45df-90db-83bbf9a0d17a name=/runtime.v1.RuntimeService/Version
	Sep 16 11:32:22 cert-options-087952 crio[663]: time="2024-09-16 11:32:22.756262153Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c30e81f2-0b44-45df-90db-83bbf9a0d17a name=/runtime.v1.RuntimeService/Version
	Sep 16 11:32:22 cert-options-087952 crio[663]: time="2024-09-16 11:32:22.759421108Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=68fb8f79-330a-40c8-97d2-a422b5f181de name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 11:32:22 cert-options-087952 crio[663]: time="2024-09-16 11:32:22.759913365Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726486342759888561,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=68fb8f79-330a-40c8-97d2-a422b5f181de name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 11:32:22 cert-options-087952 crio[663]: time="2024-09-16 11:32:22.760407824Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=621f3e59-a728-4ce1-a6f1-afc582328254 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:32:22 cert-options-087952 crio[663]: time="2024-09-16 11:32:22.760460472Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=621f3e59-a728-4ce1-a6f1-afc582328254 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:32:22 cert-options-087952 crio[663]: time="2024-09-16 11:32:22.760596235Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:50220a6920a1888e795289139cc4f36fd48984eaf11c8fedf4a660a911ab5643,PodSandboxId:6ffa5c9c8948c484e9d3b37ac7ef20eccc48316be276c299e14ddf59c8149b4b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726486333886038260,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-cert-options-087952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 782907242aca52a5f3e35aa45aa79a51,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbade9c30664eb233ba04332f311239acac4caab9a733abcd5fc5d6bb6736c6f,PodSandboxId:e87151b5a7d8a329b87c8a5bd2c6cf96ae531a758fadea244ad96dd14c85e84f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726486333906797800,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-cert-options-087952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be7f4b957a29719ec6167f38699c2557,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e335143c533ba070530483a0098e1012cb8a94047804cb15adaaefa772f2405c,PodSandboxId:72cfc3122c311e29583beaaf0d943662b7b5bd61273c2ed0b7cd9e19be13855e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726486333818484160,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-cert-options-087952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0ed0905812b7b4134b63b0e50ef84cf,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0e0a62de94a6ace2a37256225c6e3ca512477e2e92c1d48eae65d10ea66d07f,PodSandboxId:138b63965d71038dd0c83138e8cedcdc3f56411abf1262e7122ab77b6d7e0b9b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726486333839608848,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-cert-options-087952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df176f7d5d10e1e82e788017af9bbe7b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=621f3e59-a728-4ce1-a6f1-afc582328254 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:32:22 cert-options-087952 crio[663]: time="2024-09-16 11:32:22.802081723Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9b316d38-81ee-4bee-a32b-8a49649b2645 name=/runtime.v1.RuntimeService/Version
	Sep 16 11:32:22 cert-options-087952 crio[663]: time="2024-09-16 11:32:22.802169007Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9b316d38-81ee-4bee-a32b-8a49649b2645 name=/runtime.v1.RuntimeService/Version
	Sep 16 11:32:22 cert-options-087952 crio[663]: time="2024-09-16 11:32:22.803459973Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7e1506f3-0d11-4772-8fd7-97908ca58780 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 11:32:22 cert-options-087952 crio[663]: time="2024-09-16 11:32:22.803915662Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726486342803893851,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7e1506f3-0d11-4772-8fd7-97908ca58780 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 11:32:22 cert-options-087952 crio[663]: time="2024-09-16 11:32:22.804661275Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6f4a84a5-f254-4fb2-a854-56c4672a74d7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:32:22 cert-options-087952 crio[663]: time="2024-09-16 11:32:22.804751315Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6f4a84a5-f254-4fb2-a854-56c4672a74d7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:32:22 cert-options-087952 crio[663]: time="2024-09-16 11:32:22.804872925Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:50220a6920a1888e795289139cc4f36fd48984eaf11c8fedf4a660a911ab5643,PodSandboxId:6ffa5c9c8948c484e9d3b37ac7ef20eccc48316be276c299e14ddf59c8149b4b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726486333886038260,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-cert-options-087952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 782907242aca52a5f3e35aa45aa79a51,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbade9c30664eb233ba04332f311239acac4caab9a733abcd5fc5d6bb6736c6f,PodSandboxId:e87151b5a7d8a329b87c8a5bd2c6cf96ae531a758fadea244ad96dd14c85e84f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726486333906797800,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-cert-options-087952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be7f4b957a29719ec6167f38699c2557,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e335143c533ba070530483a0098e1012cb8a94047804cb15adaaefa772f2405c,PodSandboxId:72cfc3122c311e29583beaaf0d943662b7b5bd61273c2ed0b7cd9e19be13855e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726486333818484160,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-cert-options-087952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0ed0905812b7b4134b63b0e50ef84cf,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0e0a62de94a6ace2a37256225c6e3ca512477e2e92c1d48eae65d10ea66d07f,PodSandboxId:138b63965d71038dd0c83138e8cedcdc3f56411abf1262e7122ab77b6d7e0b9b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726486333839608848,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-cert-options-087952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df176f7d5d10e1e82e788017af9bbe7b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6f4a84a5-f254-4fb2-a854-56c4672a74d7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:32:22 cert-options-087952 crio[663]: time="2024-09-16 11:32:22.841744813Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e16b65bf-07cf-478c-86d2-4bbcb2c57a7d name=/runtime.v1.RuntimeService/Version
	Sep 16 11:32:22 cert-options-087952 crio[663]: time="2024-09-16 11:32:22.841887114Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e16b65bf-07cf-478c-86d2-4bbcb2c57a7d name=/runtime.v1.RuntimeService/Version
	Sep 16 11:32:22 cert-options-087952 crio[663]: time="2024-09-16 11:32:22.843414849Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=487db7ce-4cf6-4a25-9c82-0568370f0f83 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 11:32:22 cert-options-087952 crio[663]: time="2024-09-16 11:32:22.843932417Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726486342843908997,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=487db7ce-4cf6-4a25-9c82-0568370f0f83 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 11:32:22 cert-options-087952 crio[663]: time="2024-09-16 11:32:22.844546127Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=67f3e7a3-3d4c-4717-8698-3336f02991b5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:32:22 cert-options-087952 crio[663]: time="2024-09-16 11:32:22.844593498Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=67f3e7a3-3d4c-4717-8698-3336f02991b5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:32:22 cert-options-087952 crio[663]: time="2024-09-16 11:32:22.844785518Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:50220a6920a1888e795289139cc4f36fd48984eaf11c8fedf4a660a911ab5643,PodSandboxId:6ffa5c9c8948c484e9d3b37ac7ef20eccc48316be276c299e14ddf59c8149b4b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726486333886038260,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-cert-options-087952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 782907242aca52a5f3e35aa45aa79a51,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbade9c30664eb233ba04332f311239acac4caab9a733abcd5fc5d6bb6736c6f,PodSandboxId:e87151b5a7d8a329b87c8a5bd2c6cf96ae531a758fadea244ad96dd14c85e84f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726486333906797800,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-cert-options-087952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be7f4b957a29719ec6167f38699c2557,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e335143c533ba070530483a0098e1012cb8a94047804cb15adaaefa772f2405c,PodSandboxId:72cfc3122c311e29583beaaf0d943662b7b5bd61273c2ed0b7cd9e19be13855e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726486333818484160,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-cert-options-087952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0ed0905812b7b4134b63b0e50ef84cf,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0e0a62de94a6ace2a37256225c6e3ca512477e2e92c1d48eae65d10ea66d07f,PodSandboxId:138b63965d71038dd0c83138e8cedcdc3f56411abf1262e7122ab77b6d7e0b9b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726486333839608848,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-cert-options-087952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df176f7d5d10e1e82e788017af9bbe7b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=67f3e7a3-3d4c-4717-8698-3336f02991b5 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	dbade9c30664e       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   9 seconds ago       Running             kube-scheduler            0                   e87151b5a7d8a       kube-scheduler-cert-options-087952
	50220a6920a18       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 seconds ago       Running             etcd                      0                   6ffa5c9c8948c       etcd-cert-options-087952
	c0e0a62de94a6       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   9 seconds ago       Running             kube-apiserver            0                   138b63965d710       kube-apiserver-cert-options-087952
	e335143c533ba       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   9 seconds ago       Running             kube-controller-manager   0                   72cfc3122c311       kube-controller-manager-cert-options-087952
	
	
	==> describe nodes <==
	Name:               cert-options-087952
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=cert-options-087952
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=cert-options-087952
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T11_32_20_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 11:32:16 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  cert-options-087952
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 11:32:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 11:32:20 +0000   Mon, 16 Sep 2024 11:32:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 11:32:20 +0000   Mon, 16 Sep 2024 11:32:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 11:32:20 +0000   Mon, 16 Sep 2024 11:32:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 11:32:20 +0000   Mon, 16 Sep 2024 11:32:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.83.250
	  Hostname:    cert-options-087952
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 17ad5061d8414aa2884ecf131a6b61ab
	  System UUID:                17ad5061-d841-4aa2-884e-cf131a6b61ab
	  Boot ID:                    835bf896-3e7b-4639-bbd2-21d73d921562
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-cert-options-087952                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         4s
	  kube-system                 kube-apiserver-cert-options-087952             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-controller-manager-cert-options-087952    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-scheduler-cert-options-087952             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             100Mi (5%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From     Message
	  ----    ------                   ----  ----     -------
	  Normal  Starting                 4s    kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  4s    kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4s    kubelet  Node cert-options-087952 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4s    kubelet  Node cert-options-087952 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4s    kubelet  Node cert-options-087952 status is now: NodeHasSufficientPID
	  Normal  NodeReady                3s    kubelet  Node cert-options-087952 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep16 11:31] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.049348] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040059] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.211045] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.537990] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.959855] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep16 11:32] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.068840] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.080709] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.191356] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.173715] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.320062] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +4.238825] systemd-fstab-generator[745]: Ignoring "noauto" option for root device
	[  +0.061993] kauditd_printk_skb: 130 callbacks suppressed
	[  +3.883277] systemd-fstab-generator[876]: Ignoring "noauto" option for root device
	[  +1.196502] kauditd_printk_skb: 57 callbacks suppressed
	[  +5.377339] systemd-fstab-generator[1207]: Ignoring "noauto" option for root device
	[  +0.118299] kauditd_printk_skb: 30 callbacks suppressed
	[  +1.279251] systemd-fstab-generator[1295]: Ignoring "noauto" option for root device
	
	
	==> etcd [50220a6920a1888e795289139cc4f36fd48984eaf11c8fedf4a660a911ab5643] <==
	{"level":"info","ts":"2024-09-16T11:32:14.295085Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T11:32:14.295508Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"6d8f8117ba8533d9","initial-advertise-peer-urls":["https://192.168.83.250:2380"],"listen-peer-urls":["https://192.168.83.250:2380"],"advertise-client-urls":["https://192.168.83.250:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.83.250:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T11:32:14.295618Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T11:32:14.295795Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.83.250:2380"}
	{"level":"info","ts":"2024-09-16T11:32:14.295861Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.83.250:2380"}
	{"level":"info","ts":"2024-09-16T11:32:15.024482Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6d8f8117ba8533d9 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-16T11:32:15.024569Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6d8f8117ba8533d9 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-16T11:32:15.024622Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6d8f8117ba8533d9 received MsgPreVoteResp from 6d8f8117ba8533d9 at term 1"}
	{"level":"info","ts":"2024-09-16T11:32:15.024657Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6d8f8117ba8533d9 became candidate at term 2"}
	{"level":"info","ts":"2024-09-16T11:32:15.024745Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6d8f8117ba8533d9 received MsgVoteResp from 6d8f8117ba8533d9 at term 2"}
	{"level":"info","ts":"2024-09-16T11:32:15.024775Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6d8f8117ba8533d9 became leader at term 2"}
	{"level":"info","ts":"2024-09-16T11:32:15.024800Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6d8f8117ba8533d9 elected leader 6d8f8117ba8533d9 at term 2"}
	{"level":"info","ts":"2024-09-16T11:32:15.032021Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"6d8f8117ba8533d9","local-member-attributes":"{Name:cert-options-087952 ClientURLs:[https://192.168.83.250:2379]}","request-path":"/0/members/6d8f8117ba8533d9/attributes","cluster-id":"86753b42032e8da9","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T11:32:15.032221Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T11:32:15.033044Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:32:15.034727Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T11:32:15.037634Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:32:15.040741Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T11:32:15.040784Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T11:32:15.041520Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:32:15.044215Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.83.250:2379"}
	{"level":"info","ts":"2024-09-16T11:32:15.045846Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"86753b42032e8da9","local-member-id":"6d8f8117ba8533d9","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:32:15.047487Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:32:15.047559Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:32:15.046760Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 11:32:23 up 0 min,  0 users,  load average: 0.92, 0.21, 0.07
	Linux cert-options-087952 5.10.207 #1 SMP Sun Sep 15 20:39:46 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [c0e0a62de94a6ace2a37256225c6e3ca512477e2e92c1d48eae65d10ea66d07f] <==
	I0916 11:32:16.821322       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0916 11:32:16.821432       1 policy_source.go:224] refreshing policies
	E0916 11:32:16.823663       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I0916 11:32:16.847526       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0916 11:32:16.849105       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0916 11:32:16.849188       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0916 11:32:16.849213       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0916 11:32:16.849317       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0916 11:32:16.849378       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0916 11:32:16.849415       1 shared_informer.go:320] Caches are synced for configmaps
	I0916 11:32:16.854504       1 controller.go:615] quota admission added evaluator for: namespaces
	I0916 11:32:17.027508       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 11:32:17.657144       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0916 11:32:17.663016       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0916 11:32:17.663046       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0916 11:32:18.265489       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 11:32:18.334427       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 11:32:18.462587       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0916 11:32:18.472010       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.83.250]
	I0916 11:32:18.473238       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 11:32:18.477884       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 11:32:18.752462       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 11:32:19.460240       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 11:32:19.486464       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0916 11:32:19.506184       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [e335143c533ba070530483a0098e1012cb8a94047804cb15adaaefa772f2405c] <==
	I0916 11:32:22.240939       1 controllermanager.go:797] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0916 11:32:22.240995       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0916 11:32:22.290466       1 node_lifecycle_controller.go:430] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0916 11:32:22.290542       1 controllermanager.go:797] "Started controller" controller="node-lifecycle-controller"
	I0916 11:32:22.290639       1 node_lifecycle_controller.go:464] "Sending events to api server" logger="node-lifecycle-controller"
	I0916 11:32:22.290805       1 node_lifecycle_controller.go:475] "Starting node controller" logger="node-lifecycle-controller"
	I0916 11:32:22.290821       1 shared_informer.go:313] Waiting for caches to sync for taint
	E0916 11:32:22.442385       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0916 11:32:22.442443       1 controllermanager.go:775] "Warning: skipping controller" controller="service-lb-controller"
	I0916 11:32:22.592101       1 controllermanager.go:797] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0916 11:32:22.592166       1 pvc_protection_controller.go:105] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0916 11:32:22.592175       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0916 11:32:22.743120       1 controllermanager.go:797] "Started controller" controller="job-controller"
	I0916 11:32:22.743220       1 job_controller.go:226] "Starting job controller" logger="job-controller"
	I0916 11:32:22.743235       1 shared_informer.go:313] Waiting for caches to sync for job
	I0916 11:32:22.892781       1 controllermanager.go:797] "Started controller" controller="replicaset-controller"
	I0916 11:32:22.892917       1 replica_set.go:217] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0916 11:32:22.892934       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0916 11:32:23.090654       1 controllermanager.go:797] "Started controller" controller="disruption-controller"
	I0916 11:32:23.091287       1 disruption.go:452] "Sending events to api server." logger="disruption-controller"
	I0916 11:32:23.091327       1 disruption.go:463] "Starting disruption controller" logger="disruption-controller"
	I0916 11:32:23.091340       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0916 11:32:23.243006       1 controllermanager.go:797] "Started controller" controller="statefulset-controller"
	I0916 11:32:23.243158       1 stateful_set.go:166] "Starting stateful set controller" logger="statefulset-controller"
	I0916 11:32:23.243172       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	
	
	==> kube-scheduler [dbade9c30664eb233ba04332f311239acac4caab9a733abcd5fc5d6bb6736c6f] <==
	W0916 11:32:16.804234       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 11:32:16.806158       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:32:16.804274       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 11:32:16.806275       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:32:16.804321       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 11:32:16.806339       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:32:16.804370       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 11:32:16.806419       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:32:16.811888       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 11:32:16.811940       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0916 11:32:17.695662       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 11:32:17.695771       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:32:17.740300       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 11:32:17.740354       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 11:32:17.777142       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 11:32:17.777195       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:32:17.872257       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 11:32:17.872389       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0916 11:32:17.881581       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 11:32:17.881867       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:32:17.948071       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 11:32:17.948133       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0916 11:32:18.024101       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 11:32:18.024168       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0916 11:32:20.096066       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 11:32:19 cert-options-087952 kubelet[1214]: E0916 11:32:19.485481    1214 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726486339483128700,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:32:19 cert-options-087952 kubelet[1214]: E0916 11:32:19.534303    1214 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-cert-options-087952\" already exists" pod="kube-system/kube-scheduler-cert-options-087952"
	Sep 16 11:32:19 cert-options-087952 kubelet[1214]: I0916 11:32:19.603866    1214 kubelet_node_status.go:72] "Attempting to register node" node="cert-options-087952"
	Sep 16 11:32:19 cert-options-087952 kubelet[1214]: I0916 11:32:19.625058    1214 kubelet_node_status.go:111] "Node was previously registered" node="cert-options-087952"
	Sep 16 11:32:19 cert-options-087952 kubelet[1214]: I0916 11:32:19.625319    1214 kubelet_node_status.go:75] "Successfully registered node" node="cert-options-087952"
	Sep 16 11:32:19 cert-options-087952 kubelet[1214]: I0916 11:32:19.695081    1214 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/df176f7d5d10e1e82e788017af9bbe7b-usr-share-ca-certificates\") pod \"kube-apiserver-cert-options-087952\" (UID: \"df176f7d5d10e1e82e788017af9bbe7b\") " pod="kube-system/kube-apiserver-cert-options-087952"
	Sep 16 11:32:19 cert-options-087952 kubelet[1214]: I0916 11:32:19.695171    1214 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b0ed0905812b7b4134b63b0e50ef84cf-ca-certs\") pod \"kube-controller-manager-cert-options-087952\" (UID: \"b0ed0905812b7b4134b63b0e50ef84cf\") " pod="kube-system/kube-controller-manager-cert-options-087952"
	Sep 16 11:32:19 cert-options-087952 kubelet[1214]: I0916 11:32:19.695205    1214 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/be7f4b957a29719ec6167f38699c2557-kubeconfig\") pod \"kube-scheduler-cert-options-087952\" (UID: \"be7f4b957a29719ec6167f38699c2557\") " pod="kube-system/kube-scheduler-cert-options-087952"
	Sep 16 11:32:19 cert-options-087952 kubelet[1214]: I0916 11:32:19.695230    1214 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/df176f7d5d10e1e82e788017af9bbe7b-k8s-certs\") pod \"kube-apiserver-cert-options-087952\" (UID: \"df176f7d5d10e1e82e788017af9bbe7b\") " pod="kube-system/kube-apiserver-cert-options-087952"
	Sep 16 11:32:19 cert-options-087952 kubelet[1214]: I0916 11:32:19.695259    1214 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b0ed0905812b7b4134b63b0e50ef84cf-flexvolume-dir\") pod \"kube-controller-manager-cert-options-087952\" (UID: \"b0ed0905812b7b4134b63b0e50ef84cf\") " pod="kube-system/kube-controller-manager-cert-options-087952"
	Sep 16 11:32:19 cert-options-087952 kubelet[1214]: I0916 11:32:19.695283    1214 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b0ed0905812b7b4134b63b0e50ef84cf-k8s-certs\") pod \"kube-controller-manager-cert-options-087952\" (UID: \"b0ed0905812b7b4134b63b0e50ef84cf\") " pod="kube-system/kube-controller-manager-cert-options-087952"
	Sep 16 11:32:19 cert-options-087952 kubelet[1214]: I0916 11:32:19.695320    1214 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b0ed0905812b7b4134b63b0e50ef84cf-kubeconfig\") pod \"kube-controller-manager-cert-options-087952\" (UID: \"b0ed0905812b7b4134b63b0e50ef84cf\") " pod="kube-system/kube-controller-manager-cert-options-087952"
	Sep 16 11:32:19 cert-options-087952 kubelet[1214]: I0916 11:32:19.695341    1214 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b0ed0905812b7b4134b63b0e50ef84cf-usr-share-ca-certificates\") pod \"kube-controller-manager-cert-options-087952\" (UID: \"b0ed0905812b7b4134b63b0e50ef84cf\") " pod="kube-system/kube-controller-manager-cert-options-087952"
	Sep 16 11:32:19 cert-options-087952 kubelet[1214]: I0916 11:32:19.695369    1214 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/782907242aca52a5f3e35aa45aa79a51-etcd-certs\") pod \"etcd-cert-options-087952\" (UID: \"782907242aca52a5f3e35aa45aa79a51\") " pod="kube-system/etcd-cert-options-087952"
	Sep 16 11:32:19 cert-options-087952 kubelet[1214]: I0916 11:32:19.695390    1214 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/782907242aca52a5f3e35aa45aa79a51-etcd-data\") pod \"etcd-cert-options-087952\" (UID: \"782907242aca52a5f3e35aa45aa79a51\") " pod="kube-system/etcd-cert-options-087952"
	Sep 16 11:32:19 cert-options-087952 kubelet[1214]: I0916 11:32:19.695409    1214 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/df176f7d5d10e1e82e788017af9bbe7b-ca-certs\") pod \"kube-apiserver-cert-options-087952\" (UID: \"df176f7d5d10e1e82e788017af9bbe7b\") " pod="kube-system/kube-apiserver-cert-options-087952"
	Sep 16 11:32:20 cert-options-087952 kubelet[1214]: I0916 11:32:20.361850    1214 apiserver.go:52] "Watching apiserver"
	Sep 16 11:32:20 cert-options-087952 kubelet[1214]: I0916 11:32:20.394254    1214 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 16 11:32:20 cert-options-087952 kubelet[1214]: E0916 11:32:20.558806    1214 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"etcd-cert-options-087952\" already exists" pod="kube-system/etcd-cert-options-087952"
	Sep 16 11:32:20 cert-options-087952 kubelet[1214]: E0916 11:32:20.559078    1214 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-cert-options-087952\" already exists" pod="kube-system/kube-apiserver-cert-options-087952"
	Sep 16 11:32:20 cert-options-087952 kubelet[1214]: I0916 11:32:20.577961    1214 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
	Sep 16 11:32:20 cert-options-087952 kubelet[1214]: I0916 11:32:20.617268    1214 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-cert-options-087952" podStartSLOduration=1.617239522 podStartE2EDuration="1.617239522s" podCreationTimestamp="2024-09-16 11:32:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:32:20.617069813 +0000 UTC m=+1.354819559" watchObservedRunningTime="2024-09-16 11:32:20.617239522 +0000 UTC m=+1.354989254"
	Sep 16 11:32:20 cert-options-087952 kubelet[1214]: I0916 11:32:20.617415    1214 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-cert-options-087952" podStartSLOduration=2.617408839 podStartE2EDuration="2.617408839s" podCreationTimestamp="2024-09-16 11:32:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:32:20.572269422 +0000 UTC m=+1.310019169" watchObservedRunningTime="2024-09-16 11:32:20.617408839 +0000 UTC m=+1.355158586"
	Sep 16 11:32:20 cert-options-087952 kubelet[1214]: I0916 11:32:20.678055    1214 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-cert-options-087952" podStartSLOduration=1.6780310790000001 podStartE2EDuration="1.678031079s" podCreationTimestamp="2024-09-16 11:32:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:32:20.647619547 +0000 UTC m=+1.385369310" watchObservedRunningTime="2024-09-16 11:32:20.678031079 +0000 UTC m=+1.415780829"
	Sep 16 11:32:20 cert-options-087952 kubelet[1214]: I0916 11:32:20.712252    1214 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-cert-options-087952" podStartSLOduration=1.712231998 podStartE2EDuration="1.712231998s" podCreationTimestamp="2024-09-16 11:32:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:32:20.680816665 +0000 UTC m=+1.418566414" watchObservedRunningTime="2024-09-16 11:32:20.712231998 +0000 UTC m=+1.449981751"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p cert-options-087952 -n cert-options-087952
helpers_test.go:261: (dbg) Run:  kubectl --context cert-options-087952 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context cert-options-087952 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (592.698µs)
helpers_test.go:263: kubectl --context cert-options-087952 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:175: Cleaning up "cert-options-087952" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-087952
--- FAIL: TestCertOptions (48.04s)
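Note on the failure mode: every kubectl invocation in this run dies with "fork/exec /usr/local/bin/kubectl: exec format error", which is the kernel refusing to execute the binary. That almost always means the kubectl file at /usr/local/bin/kubectl was built for a different architecture than this linux/amd64 agent, or is truncated/corrupt (for example an HTML error page saved in its place). A minimal check on the agent, assuming a Linux host with the standard file/xxd utilities available (these commands are illustrative and were not part of the test run):

	# compare the binary's architecture with the host's
	file /usr/local/bin/kubectl      # expect "ELF 64-bit LSB executable, x86-64" on this amd64 agent
	uname -m                         # host architecture, e.g. x86_64
	# a zero-byte or non-ELF file also produces "exec format error"
	ls -l /usr/local/bin/kubectl
	head -c 4 /usr/local/bin/kubectl | xxd   # a valid binary starts with the ELF magic 7f 45 4c 46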

                                                
                                    
x
+
TestFunctional/serial/KubeContext (2.02s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
functional_test.go:681: (dbg) Non-zero exit: kubectl config current-context: fork/exec /usr/local/bin/kubectl: exec format error (463.066µs)
functional_test.go:683: failed to get current-context. args "kubectl config current-context" : fork/exec /usr/local/bin/kubectl: exec format error
functional_test.go:687: expected current-context = "functional-553844", but got *""*
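Since kubectl itself cannot execute here, the context the test expected can still be read directly from the kubeconfig that minikube maintains; a rough cross-check, assuming the KUBECONFIG path shown in the start log below and the standard kubeconfig YAML layout (illustrative only, not part of the test run):

	# read the context without invoking kubectl
	grep '^current-context:' /home/jenkins/minikube-integration/19651-3851/kubeconfig
	# expected if the profile was written correctly:
	# current-context: functional-553844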
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-553844 -n functional-553844
helpers_test.go:244: <<< TestFunctional/serial/KubeContext FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/KubeContext]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-553844 logs -n 25: (1.416741371s)
helpers_test.go:252: TestFunctional/serial/KubeContext logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| addons  | addons-001438 addons disable   | addons-001438     | jenkins | v1.34.0 | 16 Sep 24 10:26 UTC | 16 Sep 24 10:27 UTC |
	|         | helm-tiller --alsologtostderr  |                   |         |         |                     |                     |
	|         | -v=1                           |                   |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p    | addons-001438     | jenkins | v1.34.0 | 16 Sep 24 10:27 UTC | 16 Sep 24 10:27 UTC |
	|         | addons-001438                  |                   |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-001438     | jenkins | v1.34.0 | 16 Sep 24 10:27 UTC | 16 Sep 24 10:27 UTC |
	|         | addons-001438                  |                   |         |         |                     |                     |
	| addons  | addons-001438 addons           | addons-001438     | jenkins | v1.34.0 | 16 Sep 24 10:31 UTC | 16 Sep 24 10:31 UTC |
	|         | disable metrics-server         |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                   |         |         |                     |                     |
	| stop    | -p addons-001438               | addons-001438     | jenkins | v1.34.0 | 16 Sep 24 10:31 UTC | 16 Sep 24 10:32 UTC |
	| addons  | enable dashboard -p            | addons-001438     | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:32 UTC |
	|         | addons-001438                  |                   |         |         |                     |                     |
	| addons  | disable dashboard -p           | addons-001438     | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:32 UTC |
	|         | addons-001438                  |                   |         |         |                     |                     |
	| addons  | disable gvisor -p              | addons-001438     | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:32 UTC |
	|         | addons-001438                  |                   |         |         |                     |                     |
	| delete  | -p addons-001438               | addons-001438     | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:32 UTC |
	| start   | -p nospam-263701 -n=1          | nospam-263701     | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:33 UTC |
	|         | --memory=2250 --wait=false     |                   |         |         |                     |                     |
	|         | --log_dir=/tmp/nospam-263701   |                   |         |         |                     |                     |
	|         | --driver=kvm2                  |                   |         |         |                     |                     |
	|         | --container-runtime=crio       |                   |         |         |                     |                     |
	| start   | nospam-263701 --log_dir        | nospam-263701     | jenkins | v1.34.0 | 16 Sep 24 10:33 UTC |                     |
	|         | /tmp/nospam-263701 start       |                   |         |         |                     |                     |
	|         | --dry-run                      |                   |         |         |                     |                     |
	| start   | nospam-263701 --log_dir        | nospam-263701     | jenkins | v1.34.0 | 16 Sep 24 10:33 UTC |                     |
	|         | /tmp/nospam-263701 start       |                   |         |         |                     |                     |
	|         | --dry-run                      |                   |         |         |                     |                     |
	| start   | nospam-263701 --log_dir        | nospam-263701     | jenkins | v1.34.0 | 16 Sep 24 10:33 UTC |                     |
	|         | /tmp/nospam-263701 start       |                   |         |         |                     |                     |
	|         | --dry-run                      |                   |         |         |                     |                     |
	| pause   | nospam-263701 --log_dir        | nospam-263701     | jenkins | v1.34.0 | 16 Sep 24 10:33 UTC | 16 Sep 24 10:33 UTC |
	|         | /tmp/nospam-263701 pause       |                   |         |         |                     |                     |
	| pause   | nospam-263701 --log_dir        | nospam-263701     | jenkins | v1.34.0 | 16 Sep 24 10:33 UTC | 16 Sep 24 10:33 UTC |
	|         | /tmp/nospam-263701 pause       |                   |         |         |                     |                     |
	| pause   | nospam-263701 --log_dir        | nospam-263701     | jenkins | v1.34.0 | 16 Sep 24 10:33 UTC | 16 Sep 24 10:33 UTC |
	|         | /tmp/nospam-263701 pause       |                   |         |         |                     |                     |
	| unpause | nospam-263701 --log_dir        | nospam-263701     | jenkins | v1.34.0 | 16 Sep 24 10:33 UTC | 16 Sep 24 10:33 UTC |
	|         | /tmp/nospam-263701 unpause     |                   |         |         |                     |                     |
	| unpause | nospam-263701 --log_dir        | nospam-263701     | jenkins | v1.34.0 | 16 Sep 24 10:33 UTC | 16 Sep 24 10:33 UTC |
	|         | /tmp/nospam-263701 unpause     |                   |         |         |                     |                     |
	| unpause | nospam-263701 --log_dir        | nospam-263701     | jenkins | v1.34.0 | 16 Sep 24 10:33 UTC | 16 Sep 24 10:33 UTC |
	|         | /tmp/nospam-263701 unpause     |                   |         |         |                     |                     |
	| stop    | nospam-263701 --log_dir        | nospam-263701     | jenkins | v1.34.0 | 16 Sep 24 10:33 UTC | 16 Sep 24 10:33 UTC |
	|         | /tmp/nospam-263701 stop        |                   |         |         |                     |                     |
	| stop    | nospam-263701 --log_dir        | nospam-263701     | jenkins | v1.34.0 | 16 Sep 24 10:33 UTC | 16 Sep 24 10:33 UTC |
	|         | /tmp/nospam-263701 stop        |                   |         |         |                     |                     |
	| stop    | nospam-263701 --log_dir        | nospam-263701     | jenkins | v1.34.0 | 16 Sep 24 10:33 UTC | 16 Sep 24 10:33 UTC |
	|         | /tmp/nospam-263701 stop        |                   |         |         |                     |                     |
	| delete  | -p nospam-263701               | nospam-263701     | jenkins | v1.34.0 | 16 Sep 24 10:33 UTC | 16 Sep 24 10:33 UTC |
	| start   | -p functional-553844           | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:33 UTC | 16 Sep 24 10:34 UTC |
	|         | --memory=4000                  |                   |         |         |                     |                     |
	|         | --apiserver-port=8441          |                   |         |         |                     |                     |
	|         | --wait=all --driver=kvm2       |                   |         |         |                     |                     |
	|         | --container-runtime=crio       |                   |         |         |                     |                     |
	| start   | -p functional-553844           | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:34 UTC | 16 Sep 24 10:35 UTC |
	|         | --alsologtostderr -v=8         |                   |         |         |                     |                     |
	|---------|--------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:34:38
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:34:38.077439   17646 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:34:38.077542   17646 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:34:38.077549   17646 out.go:358] Setting ErrFile to fd 2...
	I0916 10:34:38.077553   17646 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:34:38.077744   17646 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3851/.minikube/bin
	I0916 10:34:38.078240   17646 out.go:352] Setting JSON to false
	I0916 10:34:38.079125   17646 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1028,"bootTime":1726481850,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:34:38.079218   17646 start.go:139] virtualization: kvm guest
	I0916 10:34:38.081269   17646 out.go:177] * [functional-553844] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:34:38.082653   17646 notify.go:220] Checking for updates...
	I0916 10:34:38.082693   17646 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:34:38.084064   17646 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:34:38.085453   17646 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3851/kubeconfig
	I0916 10:34:38.086964   17646 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 10:34:38.088245   17646 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:34:38.089480   17646 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:34:38.091189   17646 config.go:182] Loaded profile config "functional-553844": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:34:38.091271   17646 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:34:38.091718   17646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:34:38.091758   17646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:34:38.106583   17646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35599
	I0916 10:34:38.107005   17646 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:34:38.107759   17646 main.go:141] libmachine: Using API Version  1
	I0916 10:34:38.107779   17646 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:34:38.108182   17646 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:34:38.108417   17646 main.go:141] libmachine: (functional-553844) Calling .DriverName
	I0916 10:34:38.143506   17646 out.go:177] * Using the kvm2 driver based on existing profile
	I0916 10:34:38.144858   17646 start.go:297] selected driver: kvm2
	I0916 10:34:38.144879   17646 start.go:901] validating driver "kvm2" against &{Name:functional-553844 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-553844 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:34:38.144991   17646 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:34:38.145360   17646 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:34:38.145438   17646 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19651-3851/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0916 10:34:38.160331   17646 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0916 10:34:38.160977   17646 cni.go:84] Creating CNI manager for ""
	I0916 10:34:38.161032   17646 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0916 10:34:38.161088   17646 start.go:340] cluster config:
	{Name:functional-553844 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-553844 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:34:38.161230   17646 iso.go:125] acquiring lock: {Name:mk8165b793e44378487d96b0de120a258f46e187 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:34:38.163098   17646 out.go:177] * Starting "functional-553844" primary control-plane node in "functional-553844" cluster
	I0916 10:34:38.164351   17646 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:34:38.164388   17646 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0916 10:34:38.164398   17646 cache.go:56] Caching tarball of preloaded images
	I0916 10:34:38.164466   17646 preload.go:172] Found /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 10:34:38.164475   17646 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 10:34:38.164556   17646 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/config.json ...
	I0916 10:34:38.164739   17646 start.go:360] acquireMachinesLock for functional-553844: {Name:mk413037138532b69f77e412c3960796c09316f1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:34:38.164779   17646 start.go:364] duration metric: took 23.583µs to acquireMachinesLock for "functional-553844"
	I0916 10:34:38.164792   17646 start.go:96] Skipping create...Using existing machine configuration
	I0916 10:34:38.164799   17646 fix.go:54] fixHost starting: 
	I0916 10:34:38.165073   17646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:34:38.165103   17646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:34:38.179236   17646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44957
	I0916 10:34:38.179758   17646 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:34:38.180227   17646 main.go:141] libmachine: Using API Version  1
	I0916 10:34:38.180247   17646 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:34:38.180560   17646 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:34:38.180709   17646 main.go:141] libmachine: (functional-553844) Calling .DriverName
	I0916 10:34:38.180847   17646 main.go:141] libmachine: (functional-553844) Calling .GetState
	I0916 10:34:38.182307   17646 fix.go:112] recreateIfNeeded on functional-553844: state=Running err=<nil>
	W0916 10:34:38.182334   17646 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 10:34:38.184116   17646 out.go:177] * Updating the running kvm2 "functional-553844" VM ...
	I0916 10:34:38.185307   17646 machine.go:93] provisionDockerMachine start ...
	I0916 10:34:38.185326   17646 main.go:141] libmachine: (functional-553844) Calling .DriverName
	I0916 10:34:38.185506   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHHostname
	I0916 10:34:38.187626   17646 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:34:38.187927   17646 main.go:141] libmachine: (functional-553844) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3a:6f", ip: ""} in network mk-functional-553844: {Iface:virbr1 ExpiryTime:2024-09-16 11:33:58 +0000 UTC Type:0 Mac:52:54:00:9d:3a:6f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-553844 Clientid:01:52:54:00:9d:3a:6f}
	I0916 10:34:38.187950   17646 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined IP address 192.168.39.230 and MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:34:38.188086   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHPort
	I0916 10:34:38.188251   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
	I0916 10:34:38.188405   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
	I0916 10:34:38.188519   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHUsername
	I0916 10:34:38.188671   17646 main.go:141] libmachine: Using SSH client type: native
	I0916 10:34:38.188843   17646 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0916 10:34:38.188857   17646 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 10:34:38.297498   17646 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-553844
	
	I0916 10:34:38.297530   17646 main.go:141] libmachine: (functional-553844) Calling .GetMachineName
	I0916 10:34:38.297794   17646 buildroot.go:166] provisioning hostname "functional-553844"
	I0916 10:34:38.297825   17646 main.go:141] libmachine: (functional-553844) Calling .GetMachineName
	I0916 10:34:38.298016   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHHostname
	I0916 10:34:38.300725   17646 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:34:38.301057   17646 main.go:141] libmachine: (functional-553844) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3a:6f", ip: ""} in network mk-functional-553844: {Iface:virbr1 ExpiryTime:2024-09-16 11:33:58 +0000 UTC Type:0 Mac:52:54:00:9d:3a:6f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-553844 Clientid:01:52:54:00:9d:3a:6f}
	I0916 10:34:38.301088   17646 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined IP address 192.168.39.230 and MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:34:38.301225   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHPort
	I0916 10:34:38.301390   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
	I0916 10:34:38.301552   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
	I0916 10:34:38.301675   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHUsername
	I0916 10:34:38.301825   17646 main.go:141] libmachine: Using SSH client type: native
	I0916 10:34:38.301989   17646 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0916 10:34:38.302001   17646 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-553844 && echo "functional-553844" | sudo tee /etc/hostname
	I0916 10:34:38.424960   17646 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-553844
	
	I0916 10:34:38.424988   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHHostname
	I0916 10:34:38.427581   17646 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:34:38.427896   17646 main.go:141] libmachine: (functional-553844) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3a:6f", ip: ""} in network mk-functional-553844: {Iface:virbr1 ExpiryTime:2024-09-16 11:33:58 +0000 UTC Type:0 Mac:52:54:00:9d:3a:6f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-553844 Clientid:01:52:54:00:9d:3a:6f}
	I0916 10:34:38.427924   17646 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined IP address 192.168.39.230 and MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:34:38.428065   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHPort
	I0916 10:34:38.428258   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
	I0916 10:34:38.428366   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
	I0916 10:34:38.428491   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHUsername
	I0916 10:34:38.428669   17646 main.go:141] libmachine: Using SSH client type: native
	I0916 10:34:38.428884   17646 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0916 10:34:38.428907   17646 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-553844' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-553844/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-553844' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:34:38.538121   17646 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 10:34:38.538155   17646 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3851/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3851/.minikube}
	I0916 10:34:38.538194   17646 buildroot.go:174] setting up certificates
	I0916 10:34:38.538205   17646 provision.go:84] configureAuth start
	I0916 10:34:38.538215   17646 main.go:141] libmachine: (functional-553844) Calling .GetMachineName
	I0916 10:34:38.538466   17646 main.go:141] libmachine: (functional-553844) Calling .GetIP
	I0916 10:34:38.540938   17646 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:34:38.541247   17646 main.go:141] libmachine: (functional-553844) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3a:6f", ip: ""} in network mk-functional-553844: {Iface:virbr1 ExpiryTime:2024-09-16 11:33:58 +0000 UTC Type:0 Mac:52:54:00:9d:3a:6f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-553844 Clientid:01:52:54:00:9d:3a:6f}
	I0916 10:34:38.541278   17646 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined IP address 192.168.39.230 and MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:34:38.541369   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHHostname
	I0916 10:34:38.543545   17646 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:34:38.543884   17646 main.go:141] libmachine: (functional-553844) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3a:6f", ip: ""} in network mk-functional-553844: {Iface:virbr1 ExpiryTime:2024-09-16 11:33:58 +0000 UTC Type:0 Mac:52:54:00:9d:3a:6f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-553844 Clientid:01:52:54:00:9d:3a:6f}
	I0916 10:34:38.543925   17646 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined IP address 192.168.39.230 and MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:34:38.544016   17646 provision.go:143] copyHostCerts
	I0916 10:34:38.544046   17646 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem
	I0916 10:34:38.544079   17646 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem, removing ...
	I0916 10:34:38.544093   17646 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem
	I0916 10:34:38.544168   17646 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem (1082 bytes)
	I0916 10:34:38.544277   17646 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem
	I0916 10:34:38.544302   17646 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem, removing ...
	I0916 10:34:38.544310   17646 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem
	I0916 10:34:38.544335   17646 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem (1123 bytes)
	I0916 10:34:38.544406   17646 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem
	I0916 10:34:38.544429   17646 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem, removing ...
	I0916 10:34:38.544438   17646 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem
	I0916 10:34:38.544470   17646 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem (1679 bytes)
	I0916 10:34:38.544547   17646 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem org=jenkins.functional-553844 san=[127.0.0.1 192.168.39.230 functional-553844 localhost minikube]
	I0916 10:34:38.847217   17646 provision.go:177] copyRemoteCerts
	I0916 10:34:38.847294   17646 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:34:38.847346   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHHostname
	I0916 10:34:38.849820   17646 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:34:38.850114   17646 main.go:141] libmachine: (functional-553844) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3a:6f", ip: ""} in network mk-functional-553844: {Iface:virbr1 ExpiryTime:2024-09-16 11:33:58 +0000 UTC Type:0 Mac:52:54:00:9d:3a:6f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-553844 Clientid:01:52:54:00:9d:3a:6f}
	I0916 10:34:38.850141   17646 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined IP address 192.168.39.230 and MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:34:38.850337   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHPort
	I0916 10:34:38.850521   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
	I0916 10:34:38.850686   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHUsername
	I0916 10:34:38.850821   17646 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/functional-553844/id_rsa Username:docker}
	I0916 10:34:38.936570   17646 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 10:34:38.936641   17646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 10:34:38.965490   17646 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 10:34:38.965558   17646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0916 10:34:38.994515   17646 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 10:34:38.994585   17646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 10:34:39.023350   17646 provision.go:87] duration metric: took 485.133127ms to configureAuth
	I0916 10:34:39.023373   17646 buildroot.go:189] setting minikube options for container-runtime
	I0916 10:34:39.023521   17646 config.go:182] Loaded profile config "functional-553844": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:34:39.023586   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHHostname
	I0916 10:34:39.026305   17646 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:34:39.026605   17646 main.go:141] libmachine: (functional-553844) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3a:6f", ip: ""} in network mk-functional-553844: {Iface:virbr1 ExpiryTime:2024-09-16 11:33:58 +0000 UTC Type:0 Mac:52:54:00:9d:3a:6f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-553844 Clientid:01:52:54:00:9d:3a:6f}
	I0916 10:34:39.026634   17646 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined IP address 192.168.39.230 and MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:34:39.026800   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHPort
	I0916 10:34:39.026979   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
	I0916 10:34:39.027126   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
	I0916 10:34:39.027207   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHUsername
	I0916 10:34:39.027331   17646 main.go:141] libmachine: Using SSH client type: native
	I0916 10:34:39.027485   17646 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0916 10:34:39.027502   17646 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 10:34:44.559214   17646 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 10:34:44.559244   17646 machine.go:96] duration metric: took 6.373924238s to provisionDockerMachine
	I0916 10:34:44.559258   17646 start.go:293] postStartSetup for "functional-553844" (driver="kvm2")
	I0916 10:34:44.559271   17646 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:34:44.559293   17646 main.go:141] libmachine: (functional-553844) Calling .DriverName
	I0916 10:34:44.559630   17646 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:34:44.559656   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHHostname
	I0916 10:34:44.562588   17646 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:34:44.562954   17646 main.go:141] libmachine: (functional-553844) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3a:6f", ip: ""} in network mk-functional-553844: {Iface:virbr1 ExpiryTime:2024-09-16 11:33:58 +0000 UTC Type:0 Mac:52:54:00:9d:3a:6f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-553844 Clientid:01:52:54:00:9d:3a:6f}
	I0916 10:34:44.562985   17646 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined IP address 192.168.39.230 and MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:34:44.563239   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHPort
	I0916 10:34:44.563424   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
	I0916 10:34:44.563606   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHUsername
	I0916 10:34:44.563780   17646 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/functional-553844/id_rsa Username:docker}
	I0916 10:34:44.648160   17646 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:34:44.652463   17646 command_runner.go:130] > NAME=Buildroot
	I0916 10:34:44.652481   17646 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0916 10:34:44.652485   17646 command_runner.go:130] > ID=buildroot
	I0916 10:34:44.652490   17646 command_runner.go:130] > VERSION_ID=2023.02.9
	I0916 10:34:44.652497   17646 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0916 10:34:44.652658   17646 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 10:34:44.652680   17646 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3851/.minikube/addons for local assets ...
	I0916 10:34:44.652777   17646 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3851/.minikube/files for local assets ...
	I0916 10:34:44.652876   17646 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem -> 112032.pem in /etc/ssl/certs
	I0916 10:34:44.652886   17646 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem -> /etc/ssl/certs/112032.pem
	I0916 10:34:44.652968   17646 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/test/nested/copy/11203/hosts -> hosts in /etc/test/nested/copy/11203
	I0916 10:34:44.652978   17646 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/test/nested/copy/11203/hosts -> /etc/test/nested/copy/11203/hosts
	I0916 10:34:44.653023   17646 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/11203
	I0916 10:34:44.662633   17646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem --> /etc/ssl/certs/112032.pem (1708 bytes)
	I0916 10:34:44.687556   17646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/test/nested/copy/11203/hosts --> /etc/test/nested/copy/11203/hosts (40 bytes)
	I0916 10:34:44.710968   17646 start.go:296] duration metric: took 151.696977ms for postStartSetup
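
    The postStartSetup lines above scan ~/.minikube/addons and ~/.minikube/files for local assets and map each host file to a guest destination (e.g. files/etc/ssl/certs/112032.pem -> /etc/ssl/certs/112032.pem) before copying them over SSH. A rough sketch of that host-to-guest path mapping is below; it assumes plain local filesystem access and an illustrative helper name, not minikube's actual filesync API.

    // localAssets walks a host directory such as ~/.minikube/files and returns
    // a map of host paths to guest destinations, mirroring the relative layout
    // under the root. Illustrative sketch only, not minikube's implementation.
    package main

    import (
        "io/fs"
        "path/filepath"
    )

    func localAssets(root string) (map[string]string, error) {
        assets := map[string]string{}
        err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
            if err != nil || d.IsDir() {
                return err
            }
            rel, relErr := filepath.Rel(root, path)
            if relErr != nil {
                return relErr
            }
            // Guest destination keeps the relative layout rooted at "/".
            assets[path] = "/" + filepath.ToSlash(rel)
            return nil
        })
        return assets, err
    }
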
	I0916 10:34:44.711001   17646 fix.go:56] duration metric: took 6.546202275s for fixHost
	I0916 10:34:44.711032   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHHostname
	I0916 10:34:44.713557   17646 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:34:44.713866   17646 main.go:141] libmachine: (functional-553844) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3a:6f", ip: ""} in network mk-functional-553844: {Iface:virbr1 ExpiryTime:2024-09-16 11:33:58 +0000 UTC Type:0 Mac:52:54:00:9d:3a:6f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-553844 Clientid:01:52:54:00:9d:3a:6f}
	I0916 10:34:44.713899   17646 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined IP address 192.168.39.230 and MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:34:44.714055   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHPort
	I0916 10:34:44.714240   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
	I0916 10:34:44.714371   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
	I0916 10:34:44.714476   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHUsername
	I0916 10:34:44.714621   17646 main.go:141] libmachine: Using SSH client type: native
	I0916 10:34:44.714829   17646 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0916 10:34:44.714840   17646 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 10:34:44.821900   17646 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726482884.813574839
	
	I0916 10:34:44.821921   17646 fix.go:216] guest clock: 1726482884.813574839
	I0916 10:34:44.821928   17646 fix.go:229] Guest: 2024-09-16 10:34:44.813574839 +0000 UTC Remote: 2024-09-16 10:34:44.711005113 +0000 UTC m=+6.670369347 (delta=102.569726ms)
	I0916 10:34:44.821964   17646 fix.go:200] guest clock delta is within tolerance: 102.569726ms
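
    The fix.go lines above read the guest clock over SSH (date +%s.%N), compare it with the host clock, and accept the drift when the absolute delta is within tolerance (here ~102ms). A minimal sketch of that comparison follows; the 2s tolerance is an assumed value for illustration, not minikube's constant.

    // clockDeltaOK reports whether the guest clock is close enough to the host
    // clock. Sketch only; tolerance value is an assumption.
    package main

    import "time"

    const clockTolerance = 2 * time.Second // assumed tolerance for illustration

    func clockDeltaOK(guest, host time.Time) (time.Duration, bool) {
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        return delta, delta <= clockTolerance
    }
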
	I0916 10:34:44.821973   17646 start.go:83] releasing machines lock for "functional-553844", held for 6.657185342s
	I0916 10:34:44.821994   17646 main.go:141] libmachine: (functional-553844) Calling .DriverName
	I0916 10:34:44.822279   17646 main.go:141] libmachine: (functional-553844) Calling .GetIP
	I0916 10:34:44.825000   17646 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:34:44.825343   17646 main.go:141] libmachine: (functional-553844) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3a:6f", ip: ""} in network mk-functional-553844: {Iface:virbr1 ExpiryTime:2024-09-16 11:33:58 +0000 UTC Type:0 Mac:52:54:00:9d:3a:6f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-553844 Clientid:01:52:54:00:9d:3a:6f}
	I0916 10:34:44.825372   17646 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined IP address 192.168.39.230 and MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:34:44.825505   17646 main.go:141] libmachine: (functional-553844) Calling .DriverName
	I0916 10:34:44.825984   17646 main.go:141] libmachine: (functional-553844) Calling .DriverName
	I0916 10:34:44.826163   17646 main.go:141] libmachine: (functional-553844) Calling .DriverName
	I0916 10:34:44.826218   17646 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:34:44.826272   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHHostname
	I0916 10:34:44.826336   17646 ssh_runner.go:195] Run: cat /version.json
	I0916 10:34:44.826360   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHHostname
	I0916 10:34:44.828843   17646 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:34:44.828894   17646 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:34:44.829188   17646 main.go:141] libmachine: (functional-553844) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3a:6f", ip: ""} in network mk-functional-553844: {Iface:virbr1 ExpiryTime:2024-09-16 11:33:58 +0000 UTC Type:0 Mac:52:54:00:9d:3a:6f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-553844 Clientid:01:52:54:00:9d:3a:6f}
	I0916 10:34:44.829217   17646 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined IP address 192.168.39.230 and MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:34:44.829338   17646 main.go:141] libmachine: (functional-553844) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3a:6f", ip: ""} in network mk-functional-553844: {Iface:virbr1 ExpiryTime:2024-09-16 11:33:58 +0000 UTC Type:0 Mac:52:54:00:9d:3a:6f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-553844 Clientid:01:52:54:00:9d:3a:6f}
	I0916 10:34:44.829349   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHPort
	I0916 10:34:44.829364   17646 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined IP address 192.168.39.230 and MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:34:44.829517   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
	I0916 10:34:44.829527   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHPort
	I0916 10:34:44.829649   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHUsername
	I0916 10:34:44.829707   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
	I0916 10:34:44.829787   17646 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/functional-553844/id_rsa Username:docker}
	I0916 10:34:44.829810   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHUsername
	I0916 10:34:44.829933   17646 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/functional-553844/id_rsa Username:docker}
	I0916 10:34:44.905672   17646 command_runner.go:130] > {"iso_version": "v1.34.0-1726415472-19646", "kicbase_version": "v0.0.45-1726358845-19644", "minikube_version": "v1.34.0", "commit": "7dc55c0008a982396eb57879cd4eab23ab96531e"}
	I0916 10:34:44.905864   17646 ssh_runner.go:195] Run: systemctl --version
	I0916 10:34:44.930168   17646 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0916 10:34:44.930247   17646 command_runner.go:130] > systemd 252 (252)
	I0916 10:34:44.930279   17646 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0916 10:34:44.930332   17646 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 10:34:45.078495   17646 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 10:34:45.086261   17646 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0916 10:34:45.086307   17646 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 10:34:45.086372   17646 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:34:45.095896   17646 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
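
    The step above looks for bridge/podman CNI configs under /etc/cni/net.d and renames any match to *.mk_disabled so they do not conflict with the CNI minikube manages. A rough local equivalent of that rename pass is sketched below, assuming direct filesystem access rather than the ssh_runner used in the log.

    // disableBridgeCNI renames bridge/podman CNI config files so the runtime
    // ignores them, mirroring the "find ... -exec mv {} {}.mk_disabled" step.
    // Sketch under the assumption of local filesystem access (no SSH).
    package main

    import (
        "os"
        "path/filepath"
        "strings"
    )

    func disableBridgeCNI(dir string) ([]string, error) {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return nil, err
        }
        var disabled []string
        for _, e := range entries {
            name := e.Name()
            if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
                continue
            }
            if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
                src := filepath.Join(dir, name)
                if err := os.Rename(src, src+".mk_disabled"); err != nil {
                    return disabled, err
                }
                disabled = append(disabled, src)
            }
        }
        return disabled, nil
    }
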
	I0916 10:34:45.095914   17646 start.go:495] detecting cgroup driver to use...
	I0916 10:34:45.095972   17646 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 10:34:45.111929   17646 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 10:34:45.126331   17646 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:34:45.126393   17646 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:34:45.140856   17646 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:34:45.155306   17646 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:34:45.287963   17646 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:34:45.419203   17646 docker.go:233] disabling docker service ...
	I0916 10:34:45.419281   17646 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:34:45.436187   17646 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:34:45.450036   17646 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:34:45.606742   17646 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:34:45.749840   17646 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:34:45.764656   17646 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:34:45.783532   17646 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0916 10:34:45.783584   17646 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 10:34:45.783631   17646 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:34:45.794960   17646 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 10:34:45.795027   17646 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:34:45.806657   17646 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:34:45.817937   17646 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:34:45.828872   17646 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:34:45.839918   17646 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:34:45.851537   17646 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:34:45.862100   17646 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
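
    The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pause_image to registry.k8s.io/pause:3.10, cgroup_manager to cgroupfs, conmon_cgroup to "pod", and an unprivileged-port sysctl under default_sysctls. The sketch below applies the same style of line-level substitution to config text already held in memory; the helper name and scope (only two keys) are illustrative.

    // applyCrioOverrides rewrites selected keys in a CRI-O drop-in config,
    // mirroring the sed-based edits in the log. Illustrative sketch only.
    package main

    import "regexp"

    var (
        pauseRe  = regexp.MustCompile(`(?m)^.*pause_image = .*$`)
        cgroupRe = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
    )

    func applyCrioOverrides(conf string) string {
        conf = pauseRe.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
        conf = cgroupRe.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
        return conf
    }
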
	I0916 10:34:45.873482   17646 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:34:45.883775   17646 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0916 10:34:45.883842   17646 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:34:45.893484   17646 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:34:46.025442   17646 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 10:34:53.718838   17646 ssh_runner.go:235] Completed: sudo systemctl restart crio: (7.69335782s)
	I0916 10:34:53.718869   17646 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 10:34:53.718910   17646 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 10:34:53.723871   17646 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0916 10:34:53.723895   17646 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0916 10:34:53.723904   17646 command_runner.go:130] > Device: 0,22	Inode: 1215        Links: 1
	I0916 10:34:53.723913   17646 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 10:34:53.723921   17646 command_runner.go:130] > Access: 2024-09-16 10:34:53.691572356 +0000
	I0916 10:34:53.723930   17646 command_runner.go:130] > Modify: 2024-09-16 10:34:53.596569598 +0000
	I0916 10:34:53.723940   17646 command_runner.go:130] > Change: 2024-09-16 10:34:53.596569598 +0000
	I0916 10:34:53.723948   17646 command_runner.go:130] >  Birth: -
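
    After restarting crio, the log waits up to 60s for /var/run/crio/crio.sock to appear before probing crictl. A minimal polling loop with the same shape is sketched below; it stats the path locally, whereas the real flow stats it over SSH, and the 500ms poll interval is an assumption.

    // waitForSocket polls for a socket path until it exists or the timeout
    // elapses, similar to the "Will wait 60s for socket path" step above.
    package main

    import (
        "fmt"
        "os"
        "time"
    )

    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out waiting for %s", path)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }
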
	I0916 10:34:53.724041   17646 start.go:563] Will wait 60s for crictl version
	I0916 10:34:53.724100   17646 ssh_runner.go:195] Run: which crictl
	I0916 10:34:53.727843   17646 command_runner.go:130] > /usr/bin/crictl
	I0916 10:34:53.727908   17646 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:34:53.762394   17646 command_runner.go:130] > Version:  0.1.0
	I0916 10:34:53.762417   17646 command_runner.go:130] > RuntimeName:  cri-o
	I0916 10:34:53.762424   17646 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0916 10:34:53.762432   17646 command_runner.go:130] > RuntimeApiVersion:  v1
	I0916 10:34:53.763582   17646 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0916 10:34:53.763652   17646 ssh_runner.go:195] Run: crio --version
	I0916 10:34:53.791280   17646 command_runner.go:130] > crio version 1.29.1
	I0916 10:34:53.791299   17646 command_runner.go:130] > Version:        1.29.1
	I0916 10:34:53.791308   17646 command_runner.go:130] > GitCommit:      unknown
	I0916 10:34:53.791313   17646 command_runner.go:130] > GitCommitDate:  unknown
	I0916 10:34:53.791318   17646 command_runner.go:130] > GitTreeState:   clean
	I0916 10:34:53.791326   17646 command_runner.go:130] > BuildDate:      2024-09-15T21:21:56Z
	I0916 10:34:53.791332   17646 command_runner.go:130] > GoVersion:      go1.21.6
	I0916 10:34:53.791338   17646 command_runner.go:130] > Compiler:       gc
	I0916 10:34:53.791346   17646 command_runner.go:130] > Platform:       linux/amd64
	I0916 10:34:53.791353   17646 command_runner.go:130] > Linkmode:       dynamic
	I0916 10:34:53.791370   17646 command_runner.go:130] > BuildTags:      
	I0916 10:34:53.791380   17646 command_runner.go:130] >   containers_image_ostree_stub
	I0916 10:34:53.791388   17646 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0916 10:34:53.791394   17646 command_runner.go:130] >   btrfs_noversion
	I0916 10:34:53.791404   17646 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0916 10:34:53.791412   17646 command_runner.go:130] >   libdm_no_deferred_remove
	I0916 10:34:53.791420   17646 command_runner.go:130] >   seccomp
	I0916 10:34:53.791428   17646 command_runner.go:130] > LDFlags:          unknown
	I0916 10:34:53.791436   17646 command_runner.go:130] > SeccompEnabled:   true
	I0916 10:34:53.791443   17646 command_runner.go:130] > AppArmorEnabled:  false
	I0916 10:34:53.792548   17646 ssh_runner.go:195] Run: crio --version
	I0916 10:34:53.819305   17646 command_runner.go:130] > crio version 1.29.1
	I0916 10:34:53.819321   17646 command_runner.go:130] > Version:        1.29.1
	I0916 10:34:53.819329   17646 command_runner.go:130] > GitCommit:      unknown
	I0916 10:34:53.819335   17646 command_runner.go:130] > GitCommitDate:  unknown
	I0916 10:34:53.819341   17646 command_runner.go:130] > GitTreeState:   clean
	I0916 10:34:53.819348   17646 command_runner.go:130] > BuildDate:      2024-09-15T21:21:56Z
	I0916 10:34:53.819355   17646 command_runner.go:130] > GoVersion:      go1.21.6
	I0916 10:34:53.819362   17646 command_runner.go:130] > Compiler:       gc
	I0916 10:34:53.819371   17646 command_runner.go:130] > Platform:       linux/amd64
	I0916 10:34:53.819380   17646 command_runner.go:130] > Linkmode:       dynamic
	I0916 10:34:53.819390   17646 command_runner.go:130] > BuildTags:      
	I0916 10:34:53.819400   17646 command_runner.go:130] >   containers_image_ostree_stub
	I0916 10:34:53.819411   17646 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0916 10:34:53.819419   17646 command_runner.go:130] >   btrfs_noversion
	I0916 10:34:53.819430   17646 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0916 10:34:53.819440   17646 command_runner.go:130] >   libdm_no_deferred_remove
	I0916 10:34:53.819447   17646 command_runner.go:130] >   seccomp
	I0916 10:34:53.819456   17646 command_runner.go:130] > LDFlags:          unknown
	I0916 10:34:53.819464   17646 command_runner.go:130] > SeccompEnabled:   true
	I0916 10:34:53.819473   17646 command_runner.go:130] > AppArmorEnabled:  false
	I0916 10:34:53.822587   17646 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0916 10:34:53.823899   17646 main.go:141] libmachine: (functional-553844) Calling .GetIP
	I0916 10:34:53.826566   17646 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:34:53.826950   17646 main.go:141] libmachine: (functional-553844) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3a:6f", ip: ""} in network mk-functional-553844: {Iface:virbr1 ExpiryTime:2024-09-16 11:33:58 +0000 UTC Type:0 Mac:52:54:00:9d:3a:6f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-553844 Clientid:01:52:54:00:9d:3a:6f}
	I0916 10:34:53.826979   17646 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined IP address 192.168.39.230 and MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:34:53.827150   17646 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0916 10:34:53.831424   17646 command_runner.go:130] > 192.168.39.1	host.minikube.internal
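
    The grep above checks whether /etc/hosts already maps host.minikube.internal to the gateway IP (192.168.39.1); if the entry were missing, it would be appended. A small sketch of that check-then-append pattern on hosts-file content held in memory follows; the helper name is illustrative.

    // ensureHostsEntry appends "ip<TAB>host" to the hosts file content when no
    // line mentioning the host exists yet. Sketch of the check in the log above.
    package main

    import "strings"

    func ensureHostsEntry(hosts, ip, name string) string {
        for _, line := range strings.Split(hosts, "\n") {
            if strings.Contains(line, name) {
                return hosts // already present, nothing to do
            }
        }
        return strings.TrimRight(hosts, "\n") + "\n" + ip + "\t" + name + "\n"
    }
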
	I0916 10:34:53.831646   17646 kubeadm.go:883] updating cluster {Name:functional-553844 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-553844 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 10:34:53.831762   17646 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:34:53.831807   17646 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:34:53.873326   17646 command_runner.go:130] > {
	I0916 10:34:53.873355   17646 command_runner.go:130] >   "images": [
	I0916 10:34:53.873361   17646 command_runner.go:130] >     {
	I0916 10:34:53.873373   17646 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0916 10:34:53.873381   17646 command_runner.go:130] >       "repoTags": [
	I0916 10:34:53.873392   17646 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0916 10:34:53.873398   17646 command_runner.go:130] >       ],
	I0916 10:34:53.873405   17646 command_runner.go:130] >       "repoDigests": [
	I0916 10:34:53.873418   17646 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0916 10:34:53.873468   17646 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0916 10:34:53.873480   17646 command_runner.go:130] >       ],
	I0916 10:34:53.873486   17646 command_runner.go:130] >       "size": "87190579",
	I0916 10:34:53.873493   17646 command_runner.go:130] >       "uid": null,
	I0916 10:34:53.873503   17646 command_runner.go:130] >       "username": "",
	I0916 10:34:53.873514   17646 command_runner.go:130] >       "spec": null,
	I0916 10:34:53.873522   17646 command_runner.go:130] >       "pinned": false
	I0916 10:34:53.873530   17646 command_runner.go:130] >     },
	I0916 10:34:53.873535   17646 command_runner.go:130] >     {
	I0916 10:34:53.873547   17646 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0916 10:34:53.873557   17646 command_runner.go:130] >       "repoTags": [
	I0916 10:34:53.873567   17646 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0916 10:34:53.873574   17646 command_runner.go:130] >       ],
	I0916 10:34:53.873584   17646 command_runner.go:130] >       "repoDigests": [
	I0916 10:34:53.873600   17646 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0916 10:34:53.873624   17646 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0916 10:34:53.873634   17646 command_runner.go:130] >       ],
	I0916 10:34:53.873644   17646 command_runner.go:130] >       "size": "31470524",
	I0916 10:34:53.873653   17646 command_runner.go:130] >       "uid": null,
	I0916 10:34:53.873663   17646 command_runner.go:130] >       "username": "",
	I0916 10:34:53.873672   17646 command_runner.go:130] >       "spec": null,
	I0916 10:34:53.873683   17646 command_runner.go:130] >       "pinned": false
	I0916 10:34:53.873692   17646 command_runner.go:130] >     },
	I0916 10:34:53.873699   17646 command_runner.go:130] >     {
	I0916 10:34:53.873709   17646 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0916 10:34:53.873718   17646 command_runner.go:130] >       "repoTags": [
	I0916 10:34:53.873727   17646 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0916 10:34:53.873735   17646 command_runner.go:130] >       ],
	I0916 10:34:53.873741   17646 command_runner.go:130] >       "repoDigests": [
	I0916 10:34:53.873758   17646 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0916 10:34:53.873772   17646 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0916 10:34:53.873779   17646 command_runner.go:130] >       ],
	I0916 10:34:53.873788   17646 command_runner.go:130] >       "size": "63273227",
	I0916 10:34:53.873795   17646 command_runner.go:130] >       "uid": null,
	I0916 10:34:53.873804   17646 command_runner.go:130] >       "username": "nonroot",
	I0916 10:34:53.873812   17646 command_runner.go:130] >       "spec": null,
	I0916 10:34:53.873822   17646 command_runner.go:130] >       "pinned": false
	I0916 10:34:53.873830   17646 command_runner.go:130] >     },
	I0916 10:34:53.873835   17646 command_runner.go:130] >     {
	I0916 10:34:53.873846   17646 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0916 10:34:53.873855   17646 command_runner.go:130] >       "repoTags": [
	I0916 10:34:53.873865   17646 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0916 10:34:53.873873   17646 command_runner.go:130] >       ],
	I0916 10:34:53.873881   17646 command_runner.go:130] >       "repoDigests": [
	I0916 10:34:53.873891   17646 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0916 10:34:53.873907   17646 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0916 10:34:53.873915   17646 command_runner.go:130] >       ],
	I0916 10:34:53.873921   17646 command_runner.go:130] >       "size": "149009664",
	I0916 10:34:53.873930   17646 command_runner.go:130] >       "uid": {
	I0916 10:34:53.873939   17646 command_runner.go:130] >         "value": "0"
	I0916 10:34:53.873947   17646 command_runner.go:130] >       },
	I0916 10:34:53.873955   17646 command_runner.go:130] >       "username": "",
	I0916 10:34:53.873964   17646 command_runner.go:130] >       "spec": null,
	I0916 10:34:53.873974   17646 command_runner.go:130] >       "pinned": false
	I0916 10:34:53.873980   17646 command_runner.go:130] >     },
	I0916 10:34:53.873989   17646 command_runner.go:130] >     {
	I0916 10:34:53.874000   17646 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0916 10:34:53.874010   17646 command_runner.go:130] >       "repoTags": [
	I0916 10:34:53.874021   17646 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0916 10:34:53.874030   17646 command_runner.go:130] >       ],
	I0916 10:34:53.874039   17646 command_runner.go:130] >       "repoDigests": [
	I0916 10:34:53.874054   17646 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0916 10:34:53.874076   17646 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0916 10:34:53.874085   17646 command_runner.go:130] >       ],
	I0916 10:34:53.874093   17646 command_runner.go:130] >       "size": "95237600",
	I0916 10:34:53.874100   17646 command_runner.go:130] >       "uid": {
	I0916 10:34:53.874107   17646 command_runner.go:130] >         "value": "0"
	I0916 10:34:53.874115   17646 command_runner.go:130] >       },
	I0916 10:34:53.874121   17646 command_runner.go:130] >       "username": "",
	I0916 10:34:53.874130   17646 command_runner.go:130] >       "spec": null,
	I0916 10:34:53.874140   17646 command_runner.go:130] >       "pinned": false
	I0916 10:34:53.874149   17646 command_runner.go:130] >     },
	I0916 10:34:53.874157   17646 command_runner.go:130] >     {
	I0916 10:34:53.874166   17646 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0916 10:34:53.874174   17646 command_runner.go:130] >       "repoTags": [
	I0916 10:34:53.874184   17646 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0916 10:34:53.874192   17646 command_runner.go:130] >       ],
	I0916 10:34:53.874201   17646 command_runner.go:130] >       "repoDigests": [
	I0916 10:34:53.874217   17646 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0916 10:34:53.874233   17646 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0916 10:34:53.874242   17646 command_runner.go:130] >       ],
	I0916 10:34:53.874251   17646 command_runner.go:130] >       "size": "89437508",
	I0916 10:34:53.874258   17646 command_runner.go:130] >       "uid": {
	I0916 10:34:53.874265   17646 command_runner.go:130] >         "value": "0"
	I0916 10:34:53.874272   17646 command_runner.go:130] >       },
	I0916 10:34:53.874281   17646 command_runner.go:130] >       "username": "",
	I0916 10:34:53.874289   17646 command_runner.go:130] >       "spec": null,
	I0916 10:34:53.874299   17646 command_runner.go:130] >       "pinned": false
	I0916 10:34:53.874307   17646 command_runner.go:130] >     },
	I0916 10:34:53.874314   17646 command_runner.go:130] >     {
	I0916 10:34:53.874326   17646 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0916 10:34:53.874335   17646 command_runner.go:130] >       "repoTags": [
	I0916 10:34:53.874346   17646 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0916 10:34:53.874354   17646 command_runner.go:130] >       ],
	I0916 10:34:53.874362   17646 command_runner.go:130] >       "repoDigests": [
	I0916 10:34:53.874378   17646 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0916 10:34:53.874392   17646 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0916 10:34:53.874399   17646 command_runner.go:130] >       ],
	I0916 10:34:53.874408   17646 command_runner.go:130] >       "size": "92733849",
	I0916 10:34:53.874416   17646 command_runner.go:130] >       "uid": null,
	I0916 10:34:53.874422   17646 command_runner.go:130] >       "username": "",
	I0916 10:34:53.874430   17646 command_runner.go:130] >       "spec": null,
	I0916 10:34:53.874438   17646 command_runner.go:130] >       "pinned": false
	I0916 10:34:53.874446   17646 command_runner.go:130] >     },
	I0916 10:34:53.874454   17646 command_runner.go:130] >     {
	I0916 10:34:53.874467   17646 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0916 10:34:53.874476   17646 command_runner.go:130] >       "repoTags": [
	I0916 10:34:53.874486   17646 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0916 10:34:53.874495   17646 command_runner.go:130] >       ],
	I0916 10:34:53.874503   17646 command_runner.go:130] >       "repoDigests": [
	I0916 10:34:53.874541   17646 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0916 10:34:53.874557   17646 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0916 10:34:53.874564   17646 command_runner.go:130] >       ],
	I0916 10:34:53.874573   17646 command_runner.go:130] >       "size": "68420934",
	I0916 10:34:53.874579   17646 command_runner.go:130] >       "uid": {
	I0916 10:34:53.874588   17646 command_runner.go:130] >         "value": "0"
	I0916 10:34:53.874597   17646 command_runner.go:130] >       },
	I0916 10:34:53.874606   17646 command_runner.go:130] >       "username": "",
	I0916 10:34:53.874621   17646 command_runner.go:130] >       "spec": null,
	I0916 10:34:53.874629   17646 command_runner.go:130] >       "pinned": false
	I0916 10:34:53.874636   17646 command_runner.go:130] >     },
	I0916 10:34:53.874642   17646 command_runner.go:130] >     {
	I0916 10:34:53.874654   17646 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0916 10:34:53.874662   17646 command_runner.go:130] >       "repoTags": [
	I0916 10:34:53.874673   17646 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0916 10:34:53.874681   17646 command_runner.go:130] >       ],
	I0916 10:34:53.874691   17646 command_runner.go:130] >       "repoDigests": [
	I0916 10:34:53.874704   17646 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0916 10:34:53.874719   17646 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0916 10:34:53.874728   17646 command_runner.go:130] >       ],
	I0916 10:34:53.874738   17646 command_runner.go:130] >       "size": "742080",
	I0916 10:34:53.874747   17646 command_runner.go:130] >       "uid": {
	I0916 10:34:53.874756   17646 command_runner.go:130] >         "value": "65535"
	I0916 10:34:53.874763   17646 command_runner.go:130] >       },
	I0916 10:34:53.874769   17646 command_runner.go:130] >       "username": "",
	I0916 10:34:53.874789   17646 command_runner.go:130] >       "spec": null,
	I0916 10:34:53.874798   17646 command_runner.go:130] >       "pinned": true
	I0916 10:34:53.874806   17646 command_runner.go:130] >     }
	I0916 10:34:53.874814   17646 command_runner.go:130] >   ]
	I0916 10:34:53.874822   17646 command_runner.go:130] > }
	I0916 10:34:53.875251   17646 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 10:34:53.875273   17646 crio.go:433] Images already preloaded, skipping extraction
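
    The conclusion above ("all images are preloaded for cri-o runtime") comes from parsing the output of `sudo crictl images --output json` and checking that the required repo tags for the target Kubernetes version are present. A compact sketch of that parse-and-check is below, assuming only the JSON layout visible in the log (an "images" array with "repoTags"); the required-image list passed by the caller would be a trimmed example, not the full set minikube checks.

    // imagesPreloaded parses `crictl images --output json` output and reports
    // whether every required tag is already present in the container store.
    // Sketch based on the JSON structure shown in the log above.
    package main

    import "encoding/json"

    type crictlImages struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    func imagesPreloaded(output []byte, required []string) (bool, error) {
        var list crictlImages
        if err := json.Unmarshal(output, &list); err != nil {
            return false, err
        }
        have := map[string]bool{}
        for _, img := range list.Images {
            for _, tag := range img.RepoTags {
                have[tag] = true
            }
        }
        for _, want := range required {
            if !have[want] {
                return false, nil
            }
        }
        return true, nil
    }
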
	I0916 10:34:53.875322   17646 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:34:53.908199   17646 command_runner.go:130] > {
	I0916 10:34:53.908224   17646 command_runner.go:130] >   "images": [
	I0916 10:34:53.908230   17646 command_runner.go:130] >     {
	I0916 10:34:53.908242   17646 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0916 10:34:53.908250   17646 command_runner.go:130] >       "repoTags": [
	I0916 10:34:53.908256   17646 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0916 10:34:53.908260   17646 command_runner.go:130] >       ],
	I0916 10:34:53.908264   17646 command_runner.go:130] >       "repoDigests": [
	I0916 10:34:53.908272   17646 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0916 10:34:53.908280   17646 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0916 10:34:53.908283   17646 command_runner.go:130] >       ],
	I0916 10:34:53.908288   17646 command_runner.go:130] >       "size": "87190579",
	I0916 10:34:53.908292   17646 command_runner.go:130] >       "uid": null,
	I0916 10:34:53.908296   17646 command_runner.go:130] >       "username": "",
	I0916 10:34:53.908306   17646 command_runner.go:130] >       "spec": null,
	I0916 10:34:53.908314   17646 command_runner.go:130] >       "pinned": false
	I0916 10:34:53.908320   17646 command_runner.go:130] >     },
	I0916 10:34:53.908329   17646 command_runner.go:130] >     {
	I0916 10:34:53.908339   17646 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0916 10:34:53.908345   17646 command_runner.go:130] >       "repoTags": [
	I0916 10:34:53.908353   17646 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0916 10:34:53.908356   17646 command_runner.go:130] >       ],
	I0916 10:34:53.908361   17646 command_runner.go:130] >       "repoDigests": [
	I0916 10:34:53.908369   17646 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0916 10:34:53.908378   17646 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0916 10:34:53.908385   17646 command_runner.go:130] >       ],
	I0916 10:34:53.908394   17646 command_runner.go:130] >       "size": "31470524",
	I0916 10:34:53.908403   17646 command_runner.go:130] >       "uid": null,
	I0916 10:34:53.908411   17646 command_runner.go:130] >       "username": "",
	I0916 10:34:53.908418   17646 command_runner.go:130] >       "spec": null,
	I0916 10:34:53.908429   17646 command_runner.go:130] >       "pinned": false
	I0916 10:34:53.908437   17646 command_runner.go:130] >     },
	I0916 10:34:53.908446   17646 command_runner.go:130] >     {
	I0916 10:34:53.908455   17646 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0916 10:34:53.908461   17646 command_runner.go:130] >       "repoTags": [
	I0916 10:34:53.908466   17646 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0916 10:34:53.908474   17646 command_runner.go:130] >       ],
	I0916 10:34:53.908483   17646 command_runner.go:130] >       "repoDigests": [
	I0916 10:34:53.908499   17646 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0916 10:34:53.908523   17646 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0916 10:34:53.908533   17646 command_runner.go:130] >       ],
	I0916 10:34:53.908539   17646 command_runner.go:130] >       "size": "63273227",
	I0916 10:34:53.908547   17646 command_runner.go:130] >       "uid": null,
	I0916 10:34:53.908551   17646 command_runner.go:130] >       "username": "nonroot",
	I0916 10:34:53.908560   17646 command_runner.go:130] >       "spec": null,
	I0916 10:34:53.908569   17646 command_runner.go:130] >       "pinned": false
	I0916 10:34:53.908578   17646 command_runner.go:130] >     },
	I0916 10:34:53.908584   17646 command_runner.go:130] >     {
	I0916 10:34:53.908594   17646 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0916 10:34:53.908603   17646 command_runner.go:130] >       "repoTags": [
	I0916 10:34:53.908623   17646 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0916 10:34:53.908631   17646 command_runner.go:130] >       ],
	I0916 10:34:53.908636   17646 command_runner.go:130] >       "repoDigests": [
	I0916 10:34:53.908646   17646 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0916 10:34:53.908666   17646 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0916 10:34:53.908675   17646 command_runner.go:130] >       ],
	I0916 10:34:53.908684   17646 command_runner.go:130] >       "size": "149009664",
	I0916 10:34:53.908692   17646 command_runner.go:130] >       "uid": {
	I0916 10:34:53.908703   17646 command_runner.go:130] >         "value": "0"
	I0916 10:34:53.908713   17646 command_runner.go:130] >       },
	I0916 10:34:53.908720   17646 command_runner.go:130] >       "username": "",
	I0916 10:34:53.908724   17646 command_runner.go:130] >       "spec": null,
	I0916 10:34:53.908733   17646 command_runner.go:130] >       "pinned": false
	I0916 10:34:53.908742   17646 command_runner.go:130] >     },
	I0916 10:34:53.908751   17646 command_runner.go:130] >     {
	I0916 10:34:53.908763   17646 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0916 10:34:53.908772   17646 command_runner.go:130] >       "repoTags": [
	I0916 10:34:53.908783   17646 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0916 10:34:53.908791   17646 command_runner.go:130] >       ],
	I0916 10:34:53.908803   17646 command_runner.go:130] >       "repoDigests": [
	I0916 10:34:53.908812   17646 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0916 10:34:53.908826   17646 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0916 10:34:53.908835   17646 command_runner.go:130] >       ],
	I0916 10:34:53.908844   17646 command_runner.go:130] >       "size": "95237600",
	I0916 10:34:53.908853   17646 command_runner.go:130] >       "uid": {
	I0916 10:34:53.908862   17646 command_runner.go:130] >         "value": "0"
	I0916 10:34:53.908871   17646 command_runner.go:130] >       },
	I0916 10:34:53.908879   17646 command_runner.go:130] >       "username": "",
	I0916 10:34:53.908886   17646 command_runner.go:130] >       "spec": null,
	I0916 10:34:53.908893   17646 command_runner.go:130] >       "pinned": false
	I0916 10:34:53.908896   17646 command_runner.go:130] >     },
	I0916 10:34:53.908904   17646 command_runner.go:130] >     {
	I0916 10:34:53.908915   17646 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0916 10:34:53.908924   17646 command_runner.go:130] >       "repoTags": [
	I0916 10:34:53.908935   17646 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0916 10:34:53.908947   17646 command_runner.go:130] >       ],
	I0916 10:34:53.908956   17646 command_runner.go:130] >       "repoDigests": [
	I0916 10:34:53.908971   17646 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0916 10:34:53.908981   17646 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0916 10:34:53.908986   17646 command_runner.go:130] >       ],
	I0916 10:34:53.908996   17646 command_runner.go:130] >       "size": "89437508",
	I0916 10:34:53.909005   17646 command_runner.go:130] >       "uid": {
	I0916 10:34:53.909014   17646 command_runner.go:130] >         "value": "0"
	I0916 10:34:53.909022   17646 command_runner.go:130] >       },
	I0916 10:34:53.909030   17646 command_runner.go:130] >       "username": "",
	I0916 10:34:53.909039   17646 command_runner.go:130] >       "spec": null,
	I0916 10:34:53.909050   17646 command_runner.go:130] >       "pinned": false
	I0916 10:34:53.909058   17646 command_runner.go:130] >     },
	I0916 10:34:53.909062   17646 command_runner.go:130] >     {
	I0916 10:34:53.909072   17646 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0916 10:34:53.909082   17646 command_runner.go:130] >       "repoTags": [
	I0916 10:34:53.909090   17646 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0916 10:34:53.909098   17646 command_runner.go:130] >       ],
	I0916 10:34:53.909105   17646 command_runner.go:130] >       "repoDigests": [
	I0916 10:34:53.909118   17646 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0916 10:34:53.909145   17646 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0916 10:34:53.909155   17646 command_runner.go:130] >       ],
	I0916 10:34:53.909162   17646 command_runner.go:130] >       "size": "92733849",
	I0916 10:34:53.909171   17646 command_runner.go:130] >       "uid": null,
	I0916 10:34:53.909180   17646 command_runner.go:130] >       "username": "",
	I0916 10:34:53.909189   17646 command_runner.go:130] >       "spec": null,
	I0916 10:34:53.909198   17646 command_runner.go:130] >       "pinned": false
	I0916 10:34:53.909204   17646 command_runner.go:130] >     },
	I0916 10:34:53.909208   17646 command_runner.go:130] >     {
	I0916 10:34:53.909220   17646 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0916 10:34:53.909230   17646 command_runner.go:130] >       "repoTags": [
	I0916 10:34:53.909242   17646 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0916 10:34:53.909251   17646 command_runner.go:130] >       ],
	I0916 10:34:53.909260   17646 command_runner.go:130] >       "repoDigests": [
	I0916 10:34:53.909283   17646 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0916 10:34:53.909293   17646 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0916 10:34:53.909298   17646 command_runner.go:130] >       ],
	I0916 10:34:53.909308   17646 command_runner.go:130] >       "size": "68420934",
	I0916 10:34:53.909314   17646 command_runner.go:130] >       "uid": {
	I0916 10:34:53.909324   17646 command_runner.go:130] >         "value": "0"
	I0916 10:34:53.909330   17646 command_runner.go:130] >       },
	I0916 10:34:53.909339   17646 command_runner.go:130] >       "username": "",
	I0916 10:34:53.909345   17646 command_runner.go:130] >       "spec": null,
	I0916 10:34:53.909354   17646 command_runner.go:130] >       "pinned": false
	I0916 10:34:53.909360   17646 command_runner.go:130] >     },
	I0916 10:34:53.909367   17646 command_runner.go:130] >     {
	I0916 10:34:53.909377   17646 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0916 10:34:53.909385   17646 command_runner.go:130] >       "repoTags": [
	I0916 10:34:53.909395   17646 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0916 10:34:53.909405   17646 command_runner.go:130] >       ],
	I0916 10:34:53.909414   17646 command_runner.go:130] >       "repoDigests": [
	I0916 10:34:53.909428   17646 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0916 10:34:53.909442   17646 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0916 10:34:53.909450   17646 command_runner.go:130] >       ],
	I0916 10:34:53.909456   17646 command_runner.go:130] >       "size": "742080",
	I0916 10:34:53.909460   17646 command_runner.go:130] >       "uid": {
	I0916 10:34:53.909464   17646 command_runner.go:130] >         "value": "65535"
	I0916 10:34:53.909472   17646 command_runner.go:130] >       },
	I0916 10:34:53.909478   17646 command_runner.go:130] >       "username": "",
	I0916 10:34:53.909487   17646 command_runner.go:130] >       "spec": null,
	I0916 10:34:53.909497   17646 command_runner.go:130] >       "pinned": true
	I0916 10:34:53.909505   17646 command_runner.go:130] >     }
	I0916 10:34:53.909510   17646 command_runner.go:130] >   ]
	I0916 10:34:53.909518   17646 command_runner.go:130] > }
	I0916 10:34:53.909703   17646 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 10:34:53.909725   17646 cache_images.go:84] Images are preloaded, skipping loading
	I0916 10:34:53.909733   17646 kubeadm.go:934] updating node { 192.168.39.230 8441 v1.31.1 crio true true} ...
	I0916 10:34:53.909824   17646 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-553844 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.230
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:functional-553844 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
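
    The kubelet drop-in generated above overrides ExecStart with the version-pinned kubelet binary and node-specific flags (hostname-override, node-ip, kubeconfig). The sketch below assembles that ExecStart line from a node's values as a plain string template; the function name and parameters are illustrative, not minikube's bootstrapper API.

    // kubeletExecStart builds the ExecStart line for the kubelet systemd
    // drop-in, mirroring the flags in the generated unit above. Sketch only.
    package main

    import "fmt"

    func kubeletExecStart(k8sVersion, nodeName, nodeIP string) string {
        return fmt.Sprintf(
            "ExecStart=/var/lib/minikube/binaries/%s/kubelet "+
                "--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf "+
                "--config=/var/lib/kubelet/config.yaml "+
                "--hostname-override=%s "+
                "--kubeconfig=/etc/kubernetes/kubelet.conf "+
                "--node-ip=%s",
            k8sVersion, nodeName, nodeIP)
    }
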
	I0916 10:34:53.909888   17646 ssh_runner.go:195] Run: crio config
	I0916 10:34:53.943974   17646 command_runner.go:130] ! time="2024-09-16 10:34:53.935307763Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0916 10:34:53.949754   17646 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0916 10:34:53.955753   17646 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0916 10:34:53.955775   17646 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0916 10:34:53.955782   17646 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0916 10:34:53.955786   17646 command_runner.go:130] > #
	I0916 10:34:53.955792   17646 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0916 10:34:53.955800   17646 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0916 10:34:53.955806   17646 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0916 10:34:53.955814   17646 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0916 10:34:53.955818   17646 command_runner.go:130] > # reload'.
	I0916 10:34:53.955829   17646 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0916 10:34:53.955835   17646 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0916 10:34:53.955841   17646 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0916 10:34:53.955847   17646 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0916 10:34:53.955859   17646 command_runner.go:130] > [crio]
	I0916 10:34:53.955869   17646 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0916 10:34:53.955877   17646 command_runner.go:130] > # containers images, in this directory.
	I0916 10:34:53.955887   17646 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0916 10:34:53.955899   17646 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0916 10:34:53.955909   17646 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0916 10:34:53.955917   17646 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0916 10:34:53.955924   17646 command_runner.go:130] > # imagestore = ""
	I0916 10:34:53.955929   17646 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0916 10:34:53.955935   17646 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0916 10:34:53.955940   17646 command_runner.go:130] > storage_driver = "overlay"
	I0916 10:34:53.955946   17646 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0916 10:34:53.955954   17646 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0916 10:34:53.955958   17646 command_runner.go:130] > storage_option = [
	I0916 10:34:53.955965   17646 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0916 10:34:53.955968   17646 command_runner.go:130] > ]
	I0916 10:34:53.955974   17646 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0916 10:34:53.955982   17646 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0916 10:34:53.955986   17646 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0916 10:34:53.955994   17646 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0916 10:34:53.956000   17646 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0916 10:34:53.956006   17646 command_runner.go:130] > # always happen on a node reboot
	I0916 10:34:53.956011   17646 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0916 10:34:53.956022   17646 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0916 10:34:53.956027   17646 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0916 10:34:53.956035   17646 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0916 10:34:53.956042   17646 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0916 10:34:53.956051   17646 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0916 10:34:53.956061   17646 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0916 10:34:53.956067   17646 command_runner.go:130] > # internal_wipe = true
	I0916 10:34:53.956075   17646 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0916 10:34:53.956083   17646 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0916 10:34:53.956094   17646 command_runner.go:130] > # internal_repair = false
	I0916 10:34:53.956101   17646 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0916 10:34:53.956110   17646 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0916 10:34:53.956117   17646 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0916 10:34:53.956122   17646 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0916 10:34:53.956130   17646 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0916 10:34:53.956137   17646 command_runner.go:130] > [crio.api]
	I0916 10:34:53.956143   17646 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0916 10:34:53.956149   17646 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0916 10:34:53.956155   17646 command_runner.go:130] > # IP address on which the stream server will listen.
	I0916 10:34:53.956161   17646 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0916 10:34:53.956168   17646 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0916 10:34:53.956174   17646 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0916 10:34:53.956179   17646 command_runner.go:130] > # stream_port = "0"
	I0916 10:34:53.956186   17646 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0916 10:34:53.956190   17646 command_runner.go:130] > # stream_enable_tls = false
	I0916 10:34:53.956198   17646 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0916 10:34:53.956203   17646 command_runner.go:130] > # stream_idle_timeout = ""
	I0916 10:34:53.956209   17646 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0916 10:34:53.956217   17646 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0916 10:34:53.956223   17646 command_runner.go:130] > # minutes.
	I0916 10:34:53.956227   17646 command_runner.go:130] > # stream_tls_cert = ""
	I0916 10:34:53.956235   17646 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0916 10:34:53.956243   17646 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0916 10:34:53.956248   17646 command_runner.go:130] > # stream_tls_key = ""
	I0916 10:34:53.956256   17646 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0916 10:34:53.956263   17646 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0916 10:34:53.956284   17646 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0916 10:34:53.956290   17646 command_runner.go:130] > # stream_tls_ca = ""
	I0916 10:34:53.956297   17646 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0916 10:34:53.956303   17646 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0916 10:34:53.956310   17646 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0916 10:34:53.956317   17646 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0916 10:34:53.956323   17646 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0916 10:34:53.956330   17646 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0916 10:34:53.956336   17646 command_runner.go:130] > [crio.runtime]
	I0916 10:34:53.956341   17646 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0916 10:34:53.956349   17646 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0916 10:34:53.956355   17646 command_runner.go:130] > # "nofile=1024:2048"
	I0916 10:34:53.956363   17646 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0916 10:34:53.956369   17646 command_runner.go:130] > # default_ulimits = [
	I0916 10:34:53.956372   17646 command_runner.go:130] > # ]
	I0916 10:34:53.956380   17646 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0916 10:34:53.956386   17646 command_runner.go:130] > # no_pivot = false
	I0916 10:34:53.956391   17646 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0916 10:34:53.956399   17646 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0916 10:34:53.956406   17646 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0916 10:34:53.956414   17646 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0916 10:34:53.956420   17646 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0916 10:34:53.956427   17646 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0916 10:34:53.956433   17646 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0916 10:34:53.956438   17646 command_runner.go:130] > # Cgroup setting for conmon
	I0916 10:34:53.956446   17646 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0916 10:34:53.956450   17646 command_runner.go:130] > conmon_cgroup = "pod"
	I0916 10:34:53.956458   17646 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0916 10:34:53.956466   17646 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0916 10:34:53.956472   17646 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0916 10:34:53.956478   17646 command_runner.go:130] > conmon_env = [
	I0916 10:34:53.956483   17646 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0916 10:34:53.956489   17646 command_runner.go:130] > ]
	I0916 10:34:53.956494   17646 command_runner.go:130] > # Additional environment variables to set for all the
	I0916 10:34:53.956501   17646 command_runner.go:130] > # containers. These are overridden if set in the
	I0916 10:34:53.956507   17646 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0916 10:34:53.956513   17646 command_runner.go:130] > # default_env = [
	I0916 10:34:53.956516   17646 command_runner.go:130] > # ]
	I0916 10:34:53.956524   17646 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0916 10:34:53.956530   17646 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0916 10:34:53.956535   17646 command_runner.go:130] > # selinux = false
	I0916 10:34:53.956540   17646 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0916 10:34:53.956548   17646 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0916 10:34:53.956554   17646 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0916 10:34:53.956560   17646 command_runner.go:130] > # seccomp_profile = ""
	I0916 10:34:53.956565   17646 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0916 10:34:53.956573   17646 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0916 10:34:53.956580   17646 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0916 10:34:53.956587   17646 command_runner.go:130] > # which might increase security.
	I0916 10:34:53.956591   17646 command_runner.go:130] > # This option is currently deprecated,
	I0916 10:34:53.956601   17646 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0916 10:34:53.956608   17646 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0916 10:34:53.956613   17646 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0916 10:34:53.956621   17646 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0916 10:34:53.956629   17646 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0916 10:34:53.956638   17646 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0916 10:34:53.956643   17646 command_runner.go:130] > # This option supports live configuration reload.
	I0916 10:34:53.956648   17646 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0916 10:34:53.956654   17646 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0916 10:34:53.956660   17646 command_runner.go:130] > # the cgroup blockio controller.
	I0916 10:34:53.956664   17646 command_runner.go:130] > # blockio_config_file = ""
	I0916 10:34:53.956673   17646 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0916 10:34:53.956679   17646 command_runner.go:130] > # blockio parameters.
	I0916 10:34:53.956683   17646 command_runner.go:130] > # blockio_reload = false
	I0916 10:34:53.956691   17646 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0916 10:34:53.956695   17646 command_runner.go:130] > # irqbalance daemon.
	I0916 10:34:53.956702   17646 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0916 10:34:53.956708   17646 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0916 10:34:53.956716   17646 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0916 10:34:53.956725   17646 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0916 10:34:53.956732   17646 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0916 10:34:53.956740   17646 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0916 10:34:53.956747   17646 command_runner.go:130] > # This option supports live configuration reload.
	I0916 10:34:53.956751   17646 command_runner.go:130] > # rdt_config_file = ""
	I0916 10:34:53.956759   17646 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0916 10:34:53.956764   17646 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0916 10:34:53.956804   17646 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0916 10:34:53.956816   17646 command_runner.go:130] > # separate_pull_cgroup = ""
	I0916 10:34:53.956822   17646 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0916 10:34:53.956828   17646 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0916 10:34:53.956834   17646 command_runner.go:130] > # will be added.
	I0916 10:34:53.956837   17646 command_runner.go:130] > # default_capabilities = [
	I0916 10:34:53.956843   17646 command_runner.go:130] > # 	"CHOWN",
	I0916 10:34:53.956847   17646 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0916 10:34:53.956853   17646 command_runner.go:130] > # 	"FSETID",
	I0916 10:34:53.956862   17646 command_runner.go:130] > # 	"FOWNER",
	I0916 10:34:53.956868   17646 command_runner.go:130] > # 	"SETGID",
	I0916 10:34:53.956872   17646 command_runner.go:130] > # 	"SETUID",
	I0916 10:34:53.956878   17646 command_runner.go:130] > # 	"SETPCAP",
	I0916 10:34:53.956882   17646 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0916 10:34:53.956890   17646 command_runner.go:130] > # 	"KILL",
	I0916 10:34:53.956896   17646 command_runner.go:130] > # ]
	I0916 10:34:53.956903   17646 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0916 10:34:53.956911   17646 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0916 10:34:53.956916   17646 command_runner.go:130] > # add_inheritable_capabilities = false
	I0916 10:34:53.956924   17646 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0916 10:34:53.956932   17646 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0916 10:34:53.956936   17646 command_runner.go:130] > default_sysctls = [
	I0916 10:34:53.956943   17646 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0916 10:34:53.956947   17646 command_runner.go:130] > ]
	I0916 10:34:53.956952   17646 command_runner.go:130] > # List of devices on the host that a
	I0916 10:34:53.956959   17646 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0916 10:34:53.956966   17646 command_runner.go:130] > # allowed_devices = [
	I0916 10:34:53.956971   17646 command_runner.go:130] > # 	"/dev/fuse",
	I0916 10:34:53.956976   17646 command_runner.go:130] > # ]
	I0916 10:34:53.956981   17646 command_runner.go:130] > # List of additional devices, specified as
	I0916 10:34:53.956990   17646 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0916 10:34:53.956997   17646 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0916 10:34:53.957003   17646 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0916 10:34:53.957009   17646 command_runner.go:130] > # additional_devices = [
	I0916 10:34:53.957013   17646 command_runner.go:130] > # ]
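Purely as a sketch of the "<device-on-host>:<device-on-container>:<permissions>" form described in the comments above (reusing the /dev/sdc example from those comments; not part of the captured config), uncommented entries would look like:

	allowed_devices = [
		"/dev/fuse",
	]
	additional_devices = [
		# host device : device as seen in the container : permissions
		"/dev/sdc:/dev/xvdc:rwm",
	]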
	I0916 10:34:53.957020   17646 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0916 10:34:53.957024   17646 command_runner.go:130] > # cdi_spec_dirs = [
	I0916 10:34:53.957030   17646 command_runner.go:130] > # 	"/etc/cdi",
	I0916 10:34:53.957034   17646 command_runner.go:130] > # 	"/var/run/cdi",
	I0916 10:34:53.957039   17646 command_runner.go:130] > # ]
	I0916 10:34:53.957045   17646 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0916 10:34:53.957052   17646 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0916 10:34:53.957057   17646 command_runner.go:130] > # Defaults to false.
	I0916 10:34:53.957062   17646 command_runner.go:130] > # device_ownership_from_security_context = false
	I0916 10:34:53.957070   17646 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0916 10:34:53.957078   17646 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0916 10:34:53.957082   17646 command_runner.go:130] > # hooks_dir = [
	I0916 10:34:53.957088   17646 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0916 10:34:53.957091   17646 command_runner.go:130] > # ]
	I0916 10:34:53.957097   17646 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0916 10:34:53.957105   17646 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0916 10:34:53.957111   17646 command_runner.go:130] > # its default mounts from the following two files:
	I0916 10:34:53.957116   17646 command_runner.go:130] > #
	I0916 10:34:53.957131   17646 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0916 10:34:53.957140   17646 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0916 10:34:53.957148   17646 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0916 10:34:53.957152   17646 command_runner.go:130] > #
	I0916 10:34:53.957158   17646 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0916 10:34:53.957166   17646 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0916 10:34:53.957174   17646 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0916 10:34:53.957180   17646 command_runner.go:130] > #      only add mounts it finds in this file.
	I0916 10:34:53.957185   17646 command_runner.go:130] > #
	I0916 10:34:53.957190   17646 command_runner.go:130] > # default_mounts_file = ""
	I0916 10:34:53.957197   17646 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0916 10:34:53.957203   17646 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0916 10:34:53.957210   17646 command_runner.go:130] > pids_limit = 1024
	I0916 10:34:53.957217   17646 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0916 10:34:53.957225   17646 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0916 10:34:53.957232   17646 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0916 10:34:53.957242   17646 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0916 10:34:53.957248   17646 command_runner.go:130] > # log_size_max = -1
	I0916 10:34:53.957254   17646 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0916 10:34:53.957260   17646 command_runner.go:130] > # log_to_journald = false
	I0916 10:34:53.957267   17646 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0916 10:34:53.957273   17646 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0916 10:34:53.957278   17646 command_runner.go:130] > # Path to directory for container attach sockets.
	I0916 10:34:53.957285   17646 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0916 10:34:53.957291   17646 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0916 10:34:53.957297   17646 command_runner.go:130] > # bind_mount_prefix = ""
	I0916 10:34:53.957303   17646 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0916 10:34:53.957308   17646 command_runner.go:130] > # read_only = false
	I0916 10:34:53.957314   17646 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0916 10:34:53.957322   17646 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0916 10:34:53.957328   17646 command_runner.go:130] > # live configuration reload.
	I0916 10:34:53.957333   17646 command_runner.go:130] > # log_level = "info"
	I0916 10:34:53.957340   17646 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0916 10:34:53.957344   17646 command_runner.go:130] > # This option supports live configuration reload.
	I0916 10:34:53.957350   17646 command_runner.go:130] > # log_filter = ""
	I0916 10:34:53.957357   17646 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0916 10:34:53.957366   17646 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0916 10:34:53.957373   17646 command_runner.go:130] > # separated by comma.
	I0916 10:34:53.957381   17646 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0916 10:34:53.957389   17646 command_runner.go:130] > # uid_mappings = ""
	I0916 10:34:53.957395   17646 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0916 10:34:53.957403   17646 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0916 10:34:53.957414   17646 command_runner.go:130] > # separated by comma.
	I0916 10:34:53.957423   17646 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0916 10:34:53.957429   17646 command_runner.go:130] > # gid_mappings = ""
	I0916 10:34:53.957435   17646 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0916 10:34:53.957443   17646 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0916 10:34:53.957449   17646 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0916 10:34:53.957459   17646 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0916 10:34:53.957465   17646 command_runner.go:130] > # minimum_mappable_uid = -1
	I0916 10:34:53.957471   17646 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0916 10:34:53.957479   17646 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0916 10:34:53.957485   17646 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0916 10:34:53.957494   17646 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0916 10:34:53.957500   17646 command_runner.go:130] > # minimum_mappable_gid = -1
	I0916 10:34:53.957506   17646 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0916 10:34:53.957513   17646 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0916 10:34:53.957521   17646 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0916 10:34:53.957525   17646 command_runner.go:130] > # ctr_stop_timeout = 30
	I0916 10:34:53.957532   17646 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0916 10:34:53.957538   17646 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0916 10:34:53.957542   17646 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0916 10:34:53.957546   17646 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0916 10:34:53.957552   17646 command_runner.go:130] > drop_infra_ctr = false
	I0916 10:34:53.957558   17646 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0916 10:34:53.957573   17646 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0916 10:34:53.957585   17646 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0916 10:34:53.957591   17646 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0916 10:34:53.957599   17646 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0916 10:34:53.957607   17646 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0916 10:34:53.957613   17646 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0916 10:34:53.957620   17646 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0916 10:34:53.957624   17646 command_runner.go:130] > # shared_cpuset = ""
	I0916 10:34:53.957632   17646 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0916 10:34:53.957643   17646 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0916 10:34:53.957650   17646 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0916 10:34:53.957656   17646 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0916 10:34:53.957662   17646 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0916 10:34:53.957668   17646 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0916 10:34:53.957676   17646 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0916 10:34:53.957683   17646 command_runner.go:130] > # enable_criu_support = false
	I0916 10:34:53.957688   17646 command_runner.go:130] > # Enable/disable the generation of the container,
	I0916 10:34:53.957696   17646 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0916 10:34:53.957702   17646 command_runner.go:130] > # enable_pod_events = false
	I0916 10:34:53.957708   17646 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0916 10:34:53.957724   17646 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0916 10:34:53.957728   17646 command_runner.go:130] > # default_runtime = "runc"
	I0916 10:34:53.957735   17646 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0916 10:34:53.957742   17646 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I0916 10:34:53.957753   17646 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0916 10:34:53.957760   17646 command_runner.go:130] > # creation as a file is not desired either.
	I0916 10:34:53.957768   17646 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0916 10:34:53.957775   17646 command_runner.go:130] > # the hostname is being managed dynamically.
	I0916 10:34:53.957779   17646 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0916 10:34:53.957785   17646 command_runner.go:130] > # ]
	I0916 10:34:53.957791   17646 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0916 10:34:53.957800   17646 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0916 10:34:53.957807   17646 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0916 10:34:53.957812   17646 command_runner.go:130] > # Each entry in the table should follow the format:
	I0916 10:34:53.957817   17646 command_runner.go:130] > #
	I0916 10:34:53.957822   17646 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0916 10:34:53.957827   17646 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0916 10:34:53.957862   17646 command_runner.go:130] > # runtime_type = "oci"
	I0916 10:34:53.957870   17646 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0916 10:34:53.957875   17646 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0916 10:34:53.957879   17646 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0916 10:34:53.957883   17646 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0916 10:34:53.957886   17646 command_runner.go:130] > # monitor_env = []
	I0916 10:34:53.957891   17646 command_runner.go:130] > # privileged_without_host_devices = false
	I0916 10:34:53.957897   17646 command_runner.go:130] > # allowed_annotations = []
	I0916 10:34:53.957902   17646 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0916 10:34:53.957910   17646 command_runner.go:130] > # Where:
	I0916 10:34:53.957916   17646 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0916 10:34:53.957925   17646 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0916 10:34:53.957933   17646 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0916 10:34:53.957941   17646 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0916 10:34:53.957947   17646 command_runner.go:130] > #   in $PATH.
	I0916 10:34:53.957953   17646 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0916 10:34:53.957960   17646 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0916 10:34:53.957966   17646 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0916 10:34:53.957971   17646 command_runner.go:130] > #   state.
	I0916 10:34:53.957977   17646 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0916 10:34:53.957985   17646 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0916 10:34:53.957991   17646 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0916 10:34:53.957999   17646 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0916 10:34:53.958007   17646 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0916 10:34:53.958015   17646 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0916 10:34:53.958022   17646 command_runner.go:130] > #   The currently recognized values are:
	I0916 10:34:53.958028   17646 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0916 10:34:53.958038   17646 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0916 10:34:53.958046   17646 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0916 10:34:53.958053   17646 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0916 10:34:53.958062   17646 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0916 10:34:53.958071   17646 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0916 10:34:53.958078   17646 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0916 10:34:53.958086   17646 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0916 10:34:53.958092   17646 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0916 10:34:53.958099   17646 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0916 10:34:53.958104   17646 command_runner.go:130] > #   deprecated option "conmon".
	I0916 10:34:53.958112   17646 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0916 10:34:53.958118   17646 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0916 10:34:53.958124   17646 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0916 10:34:53.958131   17646 command_runner.go:130] > #   should be moved to the container's cgroup
	I0916 10:34:53.958138   17646 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0916 10:34:53.958146   17646 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0916 10:34:53.958155   17646 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0916 10:34:53.958160   17646 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0916 10:34:53.958165   17646 command_runner.go:130] > #
	I0916 10:34:53.958170   17646 command_runner.go:130] > # Using the seccomp notifier feature:
	I0916 10:34:53.958175   17646 command_runner.go:130] > #
	I0916 10:34:53.958181   17646 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0916 10:34:53.958189   17646 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0916 10:34:53.958195   17646 command_runner.go:130] > #
	I0916 10:34:53.958201   17646 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0916 10:34:53.958209   17646 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0916 10:34:53.958214   17646 command_runner.go:130] > #
	I0916 10:34:53.958220   17646 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0916 10:34:53.958225   17646 command_runner.go:130] > # feature.
	I0916 10:34:53.958228   17646 command_runner.go:130] > #
	I0916 10:34:53.958235   17646 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0916 10:34:53.958242   17646 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0916 10:34:53.958248   17646 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0916 10:34:53.958256   17646 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0916 10:34:53.958263   17646 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0916 10:34:53.958268   17646 command_runner.go:130] > #
	I0916 10:34:53.958274   17646 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0916 10:34:53.958282   17646 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0916 10:34:53.958287   17646 command_runner.go:130] > #
	I0916 10:34:53.958293   17646 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0916 10:34:53.958300   17646 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0916 10:34:53.958306   17646 command_runner.go:130] > #
	I0916 10:34:53.958311   17646 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0916 10:34:53.958320   17646 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0916 10:34:53.958323   17646 command_runner.go:130] > # limitation.
	I0916 10:34:53.958330   17646 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0916 10:34:53.958334   17646 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0916 10:34:53.958340   17646 command_runner.go:130] > runtime_type = "oci"
	I0916 10:34:53.958345   17646 command_runner.go:130] > runtime_root = "/run/runc"
	I0916 10:34:53.958350   17646 command_runner.go:130] > runtime_config_path = ""
	I0916 10:34:53.958355   17646 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0916 10:34:53.958361   17646 command_runner.go:130] > monitor_cgroup = "pod"
	I0916 10:34:53.958365   17646 command_runner.go:130] > monitor_exec_cgroup = ""
	I0916 10:34:53.958371   17646 command_runner.go:130] > monitor_env = [
	I0916 10:34:53.958377   17646 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0916 10:34:53.958382   17646 command_runner.go:130] > ]
	I0916 10:34:53.958386   17646 command_runner.go:130] > privileged_without_host_devices = false
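For reference only (this is not part of the config captured above): a hypothetical additional runtime handler, following the [crio.runtime.runtimes.runtime-handler] format documented in the comments, could look like the sketch below. The crun paths and the annotation list are illustrative assumptions, not values from this test run.

	[crio.runtime.runtimes.crun]
	# absolute path to the runtime binary on the host (assumed location)
	runtime_path = "/usr/bin/crun"
	runtime_type = "oci"
	runtime_root = "/run/crun"
	monitor_path = "/usr/libexec/crio/conmon"
	monitor_cgroup = "pod"
	# experimental annotations this handler would be allowed to process,
	# taken from the recognized values listed in the comments above
	allowed_annotations = [
		"io.kubernetes.cri-o.Devices",
		"io.kubernetes.cri-o.ShmSize",
	]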
	I0916 10:34:53.958397   17646 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0916 10:34:53.958405   17646 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0916 10:34:53.958413   17646 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0916 10:34:53.958421   17646 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0916 10:34:53.958430   17646 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0916 10:34:53.958437   17646 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0916 10:34:53.958446   17646 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0916 10:34:53.958455   17646 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0916 10:34:53.958463   17646 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0916 10:34:53.958472   17646 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0916 10:34:53.958478   17646 command_runner.go:130] > # Example:
	I0916 10:34:53.958482   17646 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0916 10:34:53.958489   17646 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0916 10:34:53.958496   17646 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0916 10:34:53.958503   17646 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0916 10:34:53.958507   17646 command_runner.go:130] > # cpuset = 0
	I0916 10:34:53.958513   17646 command_runner.go:130] > # cpushares = "0-1"
	I0916 10:34:53.958517   17646 command_runner.go:130] > # Where:
	I0916 10:34:53.958523   17646 command_runner.go:130] > # The workload name is workload-type.
	I0916 10:34:53.958530   17646 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0916 10:34:53.958537   17646 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0916 10:34:53.958542   17646 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0916 10:34:53.958549   17646 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0916 10:34:53.958558   17646 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0916 10:34:53.958562   17646 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0916 10:34:53.958569   17646 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0916 10:34:53.958573   17646 command_runner.go:130] > # Default value is set to true
	I0916 10:34:53.958577   17646 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0916 10:34:53.958582   17646 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0916 10:34:53.958586   17646 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0916 10:34:53.958590   17646 command_runner.go:130] > # Default value is set to 'false'
	I0916 10:34:53.958593   17646 command_runner.go:130] > # disable_hostport_mapping = false
	I0916 10:34:53.958599   17646 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0916 10:34:53.958602   17646 command_runner.go:130] > #
	I0916 10:34:53.958607   17646 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0916 10:34:53.958615   17646 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0916 10:34:53.958621   17646 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0916 10:34:53.958626   17646 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0916 10:34:53.958631   17646 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0916 10:34:53.958634   17646 command_runner.go:130] > [crio.image]
	I0916 10:34:53.958640   17646 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0916 10:34:53.958644   17646 command_runner.go:130] > # default_transport = "docker://"
	I0916 10:34:53.958649   17646 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0916 10:34:53.958655   17646 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0916 10:34:53.958659   17646 command_runner.go:130] > # global_auth_file = ""
	I0916 10:34:53.958664   17646 command_runner.go:130] > # The image used to instantiate infra containers.
	I0916 10:34:53.958670   17646 command_runner.go:130] > # This option supports live configuration reload.
	I0916 10:34:53.958676   17646 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0916 10:34:53.958682   17646 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0916 10:34:53.958690   17646 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0916 10:34:53.958695   17646 command_runner.go:130] > # This option supports live configuration reload.
	I0916 10:34:53.958701   17646 command_runner.go:130] > # pause_image_auth_file = ""
	I0916 10:34:53.958706   17646 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0916 10:34:53.958714   17646 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0916 10:34:53.958720   17646 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0916 10:34:53.958728   17646 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0916 10:34:53.958734   17646 command_runner.go:130] > # pause_command = "/pause"
	I0916 10:34:53.958740   17646 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0916 10:34:53.958748   17646 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0916 10:34:53.958753   17646 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0916 10:34:53.958759   17646 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0916 10:34:53.958767   17646 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0916 10:34:53.958776   17646 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0916 10:34:53.958780   17646 command_runner.go:130] > # pinned_images = [
	I0916 10:34:53.958784   17646 command_runner.go:130] > # ]
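As an illustration of the exact and glob patterns described in the comments above (only the pause image is taken from this config; the other name is made up):

	pinned_images = [
		# exact match: must equal the full image name
		"registry.k8s.io/pause:3.10",
		# glob match: wildcard only at the end
		"quay.io/example/base-*",
	]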
	I0916 10:34:53.958793   17646 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0916 10:34:53.958801   17646 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0916 10:34:53.958809   17646 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0916 10:34:53.958816   17646 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0916 10:34:53.958823   17646 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0916 10:34:53.958829   17646 command_runner.go:130] > # signature_policy = ""
	I0916 10:34:53.958836   17646 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0916 10:34:53.958843   17646 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0916 10:34:53.958851   17646 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0916 10:34:53.958861   17646 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0916 10:34:53.958867   17646 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0916 10:34:53.958871   17646 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0916 10:34:53.958877   17646 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0916 10:34:53.958883   17646 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0916 10:34:53.958887   17646 command_runner.go:130] > # changing them here.
	I0916 10:34:53.958891   17646 command_runner.go:130] > # insecure_registries = [
	I0916 10:34:53.958894   17646 command_runner.go:130] > # ]
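The comments above defer registry configuration to containers-registries.conf(5); a minimal sketch of such a file, assuming a hypothetical internal registry host, might be:

	# /etc/containers/registries.conf (version 2 format)
	unqualified-search-registries = ["docker.io"]

	[[registry]]
	# made-up internal registry that would be pulled from without TLS verification
	location = "registry.example.internal:5000"
	insecure = true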
	I0916 10:34:53.958901   17646 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0916 10:34:53.958905   17646 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0916 10:34:53.958909   17646 command_runner.go:130] > # image_volumes = "mkdir"
	I0916 10:34:53.958913   17646 command_runner.go:130] > # Temporary directory to use for storing big files
	I0916 10:34:53.958917   17646 command_runner.go:130] > # big_files_temporary_dir = ""
	I0916 10:34:53.958923   17646 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0916 10:34:53.958926   17646 command_runner.go:130] > # CNI plugins.
	I0916 10:34:53.958930   17646 command_runner.go:130] > [crio.network]
	I0916 10:34:53.958935   17646 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0916 10:34:53.958940   17646 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0916 10:34:53.958944   17646 command_runner.go:130] > # cni_default_network = ""
	I0916 10:34:53.958949   17646 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0916 10:34:53.958953   17646 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0916 10:34:53.958958   17646 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0916 10:34:53.958961   17646 command_runner.go:130] > # plugin_dirs = [
	I0916 10:34:53.958964   17646 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0916 10:34:53.958968   17646 command_runner.go:130] > # ]
	I0916 10:34:53.958973   17646 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0916 10:34:53.958976   17646 command_runner.go:130] > [crio.metrics]
	I0916 10:34:53.958980   17646 command_runner.go:130] > # Globally enable or disable metrics support.
	I0916 10:34:53.958984   17646 command_runner.go:130] > enable_metrics = true
	I0916 10:34:53.958988   17646 command_runner.go:130] > # Specify enabled metrics collectors.
	I0916 10:34:53.958992   17646 command_runner.go:130] > # Per default all metrics are enabled.
	I0916 10:34:53.958998   17646 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0916 10:34:53.959004   17646 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0916 10:34:53.959009   17646 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0916 10:34:53.959013   17646 command_runner.go:130] > # metrics_collectors = [
	I0916 10:34:53.959016   17646 command_runner.go:130] > # 	"operations",
	I0916 10:34:53.959023   17646 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0916 10:34:53.959030   17646 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0916 10:34:53.959035   17646 command_runner.go:130] > # 	"operations_errors",
	I0916 10:34:53.959041   17646 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0916 10:34:53.959046   17646 command_runner.go:130] > # 	"image_pulls_by_name",
	I0916 10:34:53.959052   17646 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0916 10:34:53.959056   17646 command_runner.go:130] > # 	"image_pulls_failures",
	I0916 10:34:53.959062   17646 command_runner.go:130] > # 	"image_pulls_successes",
	I0916 10:34:53.959066   17646 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0916 10:34:53.959073   17646 command_runner.go:130] > # 	"image_layer_reuse",
	I0916 10:34:53.959078   17646 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0916 10:34:53.959084   17646 command_runner.go:130] > # 	"containers_oom_total",
	I0916 10:34:53.959088   17646 command_runner.go:130] > # 	"containers_oom",
	I0916 10:34:53.959094   17646 command_runner.go:130] > # 	"processes_defunct",
	I0916 10:34:53.959097   17646 command_runner.go:130] > # 	"operations_total",
	I0916 10:34:53.959102   17646 command_runner.go:130] > # 	"operations_latency_seconds",
	I0916 10:34:53.959108   17646 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0916 10:34:53.959113   17646 command_runner.go:130] > # 	"operations_errors_total",
	I0916 10:34:53.959119   17646 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0916 10:34:53.959124   17646 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0916 10:34:53.959130   17646 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0916 10:34:53.959134   17646 command_runner.go:130] > # 	"image_pulls_success_total",
	I0916 10:34:53.959140   17646 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0916 10:34:53.959145   17646 command_runner.go:130] > # 	"containers_oom_count_total",
	I0916 10:34:53.959151   17646 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0916 10:34:53.959156   17646 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0916 10:34:53.959160   17646 command_runner.go:130] > # ]
	I0916 10:34:53.959165   17646 command_runner.go:130] > # The port on which the metrics server will listen.
	I0916 10:34:53.959171   17646 command_runner.go:130] > # metrics_port = 9090
	I0916 10:34:53.959175   17646 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0916 10:34:53.959181   17646 command_runner.go:130] > # metrics_socket = ""
	I0916 10:34:53.959186   17646 command_runner.go:130] > # The certificate for the secure metrics server.
	I0916 10:34:53.959194   17646 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0916 10:34:53.959202   17646 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0916 10:34:53.959209   17646 command_runner.go:130] > # certificate on any modification event.
	I0916 10:34:53.959214   17646 command_runner.go:130] > # metrics_cert = ""
	I0916 10:34:53.959221   17646 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0916 10:34:53.959228   17646 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0916 10:34:53.959232   17646 command_runner.go:130] > # metrics_key = ""
	I0916 10:34:53.959240   17646 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0916 10:34:53.959243   17646 command_runner.go:130] > [crio.tracing]
	I0916 10:34:53.959250   17646 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0916 10:34:53.959256   17646 command_runner.go:130] > # enable_tracing = false
	I0916 10:34:53.959261   17646 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0916 10:34:53.959268   17646 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0916 10:34:53.959274   17646 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0916 10:34:53.959282   17646 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0916 10:34:53.959287   17646 command_runner.go:130] > # CRI-O NRI configuration.
	I0916 10:34:53.959290   17646 command_runner.go:130] > [crio.nri]
	I0916 10:34:53.959294   17646 command_runner.go:130] > # Globally enable or disable NRI.
	I0916 10:34:53.959300   17646 command_runner.go:130] > # enable_nri = false
	I0916 10:34:53.959304   17646 command_runner.go:130] > # NRI socket to listen on.
	I0916 10:34:53.959311   17646 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0916 10:34:53.959315   17646 command_runner.go:130] > # NRI plugin directory to use.
	I0916 10:34:53.959322   17646 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0916 10:34:53.959327   17646 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0916 10:34:53.959334   17646 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0916 10:34:53.959339   17646 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0916 10:34:53.959345   17646 command_runner.go:130] > # nri_disable_connections = false
	I0916 10:34:53.959350   17646 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0916 10:34:53.959357   17646 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0916 10:34:53.959362   17646 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0916 10:34:53.959368   17646 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0916 10:34:53.959373   17646 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0916 10:34:53.959380   17646 command_runner.go:130] > [crio.stats]
	I0916 10:34:53.959385   17646 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0916 10:34:53.959392   17646 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0916 10:34:53.959397   17646 command_runner.go:130] > # stats_collection_period = 0
	I0916 10:34:53.959484   17646 cni.go:84] Creating CNI manager for ""
	I0916 10:34:53.959498   17646 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0916 10:34:53.959505   17646 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 10:34:53.959524   17646 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.230 APIServerPort:8441 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-553844 NodeName:functional-553844 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.230"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.230 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 10:34:53.959634   17646 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.230
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-553844"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.230
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.230"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 10:34:53.959689   17646 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:34:53.969814   17646 command_runner.go:130] > kubeadm
	I0916 10:34:53.969837   17646 command_runner.go:130] > kubectl
	I0916 10:34:53.969841   17646 command_runner.go:130] > kubelet
	I0916 10:34:53.969861   17646 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:34:53.969900   17646 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 10:34:53.979269   17646 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0916 10:34:53.995958   17646 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:34:54.012835   17646 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
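
The rendered config above is written to /var/tmp/minikube/kubeadm.yaml.new as a multi-document YAML stream ("---" separated). As a point of reference only, here is a minimal Go sketch, assuming the third-party gopkg.in/yaml.v3 package (not part of this test harness), of reading that file back and cross-checking the ClusterConfiguration podSubnet against the 10.244.0.0/16 pod CIDR chosen earlier:

	package main

	import (
		"fmt"
		"io"
		"log"
		"os"

		"gopkg.in/yaml.v3" // assumed dependency, used only for this sketch
	)

	// kubeadmDoc captures only the fields needed for the check; the real
	// kubeadm API types are not imported here.
	type kubeadmDoc struct {
		Kind       string `yaml:"kind"`
		Networking struct {
			PodSubnet string `yaml:"podSubnet"`
		} `yaml:"networking"`
	}

	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // path from the scp step above
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f) // iterates the multi-document stream
		for {
			var d kubeadmDoc
			if err := dec.Decode(&d); err == io.EOF {
				break
			} else if err != nil {
				log.Fatal(err)
			}
			if d.Kind == "ClusterConfiguration" {
				fmt.Println("podSubnet:", d.Networking.PodSubnet) // expect 10.244.0.0/16
			}
		}
	}

This is purely illustrative; the file lives on the node, so the sketch would have to run there to be meaningful.
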
	I0916 10:34:54.028998   17646 ssh_runner.go:195] Run: grep 192.168.39.230	control-plane.minikube.internal$ /etc/hosts
	I0916 10:34:54.032749   17646 command_runner.go:130] > 192.168.39.230	control-plane.minikube.internal
	I0916 10:34:54.032827   17646 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:34:54.161068   17646 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:34:54.176070   17646 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844 for IP: 192.168.39.230
	I0916 10:34:54.176090   17646 certs.go:194] generating shared ca certs ...
	I0916 10:34:54.176110   17646 certs.go:226] acquiring lock for ca certs: {Name:mkc5eb18b90c7d501da61b378999fd65b785fd5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:34:54.176254   17646 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key
	I0916 10:34:54.176317   17646 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key
	I0916 10:34:54.176330   17646 certs.go:256] generating profile certs ...
	I0916 10:34:54.176420   17646 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/client.key
	I0916 10:34:54.176512   17646 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/apiserver.key.7b9f73b3
	I0916 10:34:54.176593   17646 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/proxy-client.key
	I0916 10:34:54.176607   17646 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 10:34:54.176628   17646 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 10:34:54.176648   17646 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:34:54.176667   17646 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:34:54.176685   17646 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 10:34:54.176705   17646 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 10:34:54.176723   17646 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 10:34:54.176741   17646 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 10:34:54.176801   17646 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem (1338 bytes)
	W0916 10:34:54.176839   17646 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203_empty.pem, impossibly tiny 0 bytes
	I0916 10:34:54.176854   17646 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:34:54.176889   17646 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem (1082 bytes)
	I0916 10:34:54.176922   17646 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:34:54.176954   17646 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem (1679 bytes)
	I0916 10:34:54.177008   17646 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem (1708 bytes)
	I0916 10:34:54.177047   17646 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem -> /usr/share/ca-certificates/11203.pem
	I0916 10:34:54.177066   17646 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem -> /usr/share/ca-certificates/112032.pem
	I0916 10:34:54.177084   17646 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:34:54.177619   17646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:34:54.201622   17646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:34:54.224717   17646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:34:54.248747   17646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:34:54.272001   17646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0916 10:34:54.295257   17646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 10:34:54.318394   17646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:34:54.341470   17646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 10:34:54.364947   17646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem --> /usr/share/ca-certificates/11203.pem (1338 bytes)
	I0916 10:34:54.388405   17646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem --> /usr/share/ca-certificates/112032.pem (1708 bytes)
	I0916 10:34:54.411730   17646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:34:54.434855   17646 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 10:34:54.451644   17646 ssh_runner.go:195] Run: openssl version
	I0916 10:34:54.457529   17646 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0916 10:34:54.457603   17646 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112032.pem && ln -fs /usr/share/ca-certificates/112032.pem /etc/ssl/certs/112032.pem"
	I0916 10:34:54.468568   17646 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112032.pem
	I0916 10:34:54.473071   17646 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112032.pem
	I0916 10:34:54.473146   17646 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112032.pem
	I0916 10:34:54.473200   17646 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112032.pem
	I0916 10:34:54.478979   17646 command_runner.go:130] > 3ec20f2e
	I0916 10:34:54.479053   17646 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112032.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 10:34:54.489001   17646 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:34:54.500128   17646 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:34:54.504474   17646 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 16 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:34:54.504658   17646 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:34:54.504709   17646 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:34:54.510639   17646 command_runner.go:130] > b5213941
	I0916 10:34:54.510799   17646 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 10:34:54.520662   17646 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11203.pem && ln -fs /usr/share/ca-certificates/11203.pem /etc/ssl/certs/11203.pem"
	I0916 10:34:54.535566   17646 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11203.pem
	I0916 10:34:54.551885   17646 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11203.pem
	I0916 10:34:54.551929   17646 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11203.pem
	I0916 10:34:54.551989   17646 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11203.pem
	I0916 10:34:54.614475   17646 command_runner.go:130] > 51391683
	I0916 10:34:54.614580   17646 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11203.pem /etc/ssl/certs/51391683.0"
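
The three link steps above install each CA under /etc/ssl/certs/<subject-hash>.0, the layout OpenSSL uses to look up trust anchors; the hash values (3ec20f2e, b5213941, 51391683) were computed by `openssl x509 -hash -noout` a few lines earlier. A small, illustrative Go sketch (not part of the test, and only meaningful when run on the node) of confirming those links resolve to the intended PEM files:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	func main() {
		// Link-to-PEM pairs taken from the log lines above.
		checks := map[string]string{
			"/etc/ssl/certs/3ec20f2e.0": "/usr/share/ca-certificates/112032.pem",
			"/etc/ssl/certs/b5213941.0": "/usr/share/ca-certificates/minikubeCA.pem",
			"/etc/ssl/certs/51391683.0": "/usr/share/ca-certificates/11203.pem",
		}
		for link, want := range checks {
			target, err := os.Readlink(link)
			if err != nil {
				fmt.Println(link, "readlink failed:", err)
				continue
			}
			if !filepath.IsAbs(target) {
				target = filepath.Join(filepath.Dir(link), target) // links may be relative
			}
			fmt.Printf("%s -> %s (expected %s)\n", link, target, want)
		}
	}
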
	I0916 10:34:54.712068   17646 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:34:54.725729   17646 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:34:54.725769   17646 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0916 10:34:54.725780   17646 command_runner.go:130] > Device: 253,1	Inode: 7337000     Links: 1
	I0916 10:34:54.725790   17646 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 10:34:54.725801   17646 command_runner.go:130] > Access: 2024-09-16 10:34:13.705744477 +0000
	I0916 10:34:54.725811   17646 command_runner.go:130] > Modify: 2024-09-16 10:34:13.705744477 +0000
	I0916 10:34:54.725822   17646 command_runner.go:130] > Change: 2024-09-16 10:34:13.705744477 +0000
	I0916 10:34:54.725835   17646 command_runner.go:130] >  Birth: 2024-09-16 10:34:13.705744477 +0000
	I0916 10:34:54.730463   17646 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0916 10:34:54.781875   17646 command_runner.go:130] > Certificate will not expire
	I0916 10:34:54.782236   17646 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0916 10:34:54.799129   17646 command_runner.go:130] > Certificate will not expire
	I0916 10:34:54.799393   17646 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0916 10:34:54.828763   17646 command_runner.go:130] > Certificate will not expire
	I0916 10:34:54.828862   17646 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0916 10:34:54.888492   17646 command_runner.go:130] > Certificate will not expire
	I0916 10:34:54.888578   17646 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0916 10:34:54.915347   17646 command_runner.go:130] > Certificate will not expire
	I0916 10:34:54.915973   17646 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0916 10:34:54.930753   17646 command_runner.go:130] > Certificate will not expire
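
Each `-checkend 86400` invocation above asks OpenSSL whether the certificate expires within the next 24 hours. For reference, a small standalone Go sketch of the equivalent check with crypto/x509, shown for one of the certificates (hypothetical helper, not how the test harness performs the check):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		// Same condition as `openssl x509 -checkend 86400`.
		if time.Until(cert.NotAfter) < 24*time.Hour {
			log.Fatalf("certificate expires within 24h (NotAfter %s)", cert.NotAfter)
		}
		fmt.Println("Certificate will not expire")
	}
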
	I0916 10:34:54.930839   17646 kubeadm.go:392] StartCluster: {Name:functional-553844 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.1 ClusterName:functional-553844 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountI
P: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:34:54.930964   17646 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 10:34:54.931040   17646 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 10:34:55.206723   17646 command_runner.go:130] > 29f56fdf2e13c8c028dc80b03b1db0ee8da7289244b3368f9a6e8716db213d1e
	I0916 10:34:55.206750   17646 command_runner.go:130] > 0718da2983026c5d88757cc81f8d9db82763d8eaec64c8089b706fcdc15d2866
	I0916 10:34:55.206762   17646 command_runner.go:130] > e2067f72690f60f2753b6b33ceaae6c8647431c4c0a6055c0943dffa6a611621
	I0916 10:34:55.206768   17646 command_runner.go:130] > 665e5ce6ab7a5a79da4635071094125712865b62fcd581daf2db7fff5bafce8a
	I0916 10:34:55.206774   17646 command_runner.go:130] > 84f3fbe9bc0e50f69d1a350e13463be07e27d165bbc881a004c0f0f48f00d581
	I0916 10:34:55.206779   17646 command_runner.go:130] > 5449e3e53c664617d9083167551c07e0692164390fe890faa6c2acf448711d41
	I0916 10:34:55.206784   17646 command_runner.go:130] > baf4cdc69419d6532efbce0cbe3f72712e6252baabc945ce9b974815304046ba
	I0916 10:34:55.206792   17646 command_runner.go:130] > 84edb04959b2d1ddce7d4879036071b77ee39eb9f0e5e90edc6fbb843efd2515
	I0916 10:34:55.210089   17646 cri.go:89] found id: "29f56fdf2e13c8c028dc80b03b1db0ee8da7289244b3368f9a6e8716db213d1e"
	I0916 10:34:55.210113   17646 cri.go:89] found id: "0718da2983026c5d88757cc81f8d9db82763d8eaec64c8089b706fcdc15d2866"
	I0916 10:34:55.210119   17646 cri.go:89] found id: "e2067f72690f60f2753b6b33ceaae6c8647431c4c0a6055c0943dffa6a611621"
	I0916 10:34:55.210124   17646 cri.go:89] found id: "665e5ce6ab7a5a79da4635071094125712865b62fcd581daf2db7fff5bafce8a"
	I0916 10:34:55.210128   17646 cri.go:89] found id: "84f3fbe9bc0e50f69d1a350e13463be07e27d165bbc881a004c0f0f48f00d581"
	I0916 10:34:55.210134   17646 cri.go:89] found id: "5449e3e53c664617d9083167551c07e0692164390fe890faa6c2acf448711d41"
	I0916 10:34:55.210138   17646 cri.go:89] found id: "baf4cdc69419d6532efbce0cbe3f72712e6252baabc945ce9b974815304046ba"
	I0916 10:34:55.210141   17646 cri.go:89] found id: "84edb04959b2d1ddce7d4879036071b77ee39eb9f0e5e90edc6fbb843efd2515"
	I0916 10:34:55.210145   17646 cri.go:89] found id: ""
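
The eight IDs above come from a single `crictl ps -a --quiet` call filtered to the kube-system namespace. A minimal Go sketch of that listing step, shelling out to crictl and keeping only non-empty lines (an illustration under the same flags shown in the log, not minikube's own cri.go implementation):

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			log.Fatal(err)
		}
		var ids []string
		for _, line := range strings.Split(string(out), "\n") {
			if id := strings.TrimSpace(line); id != "" {
				ids = append(ids, id)
			}
		}
		fmt.Printf("found %d kube-system containers\n", len(ids))
	}
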
	I0916 10:34:55.210194   17646 ssh_runner.go:195] Run: sudo runc list -f json
	I0916 10:34:55.279740   17646 command_runner.go:130] ! load container 11c7df787d684e9cc5ca8f6cf633ac9055d7ef72c0a76d54b25fd4a3d62f7b02: container does not exist
	I0916 10:34:55.311240   17646 command_runner.go:130] ! load container 5ef8ee89662fcae36d3705fd7eef6e5e8e8ed2765a0c899ea2692bc5e55770bb: container does not exist
	I0916 10:34:55.354559   17646 command_runner.go:130] ! load container dda8bc32e425e06c63a2ccb84bdd071dc515520e554e6e3a5a8b376e9a65c15a: container does not exist
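
The three "container does not exist" warnings above mean crictl still reports IDs whose runc state has already been removed. As an illustration only, a small Go sketch that decodes `sudo runc list -f json` (the same command the log runs) to see which containers runc itself still tracks; the id/status field names are assumed from runc's state JSON, not taken from this report:

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// runcContainer models only the fields used here (assumed schema).
	type runcContainer struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}

	func main() {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			log.Fatal(err)
		}
		var containers []runcContainer
		if err := json.Unmarshal(out, &containers); err != nil {
			log.Fatal(err)
		}
		for _, c := range containers {
			fmt.Println(c.ID, c.Status)
		}
	}
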
	
	
	==> CRI-O <==
	Sep 16 10:35:32 functional-553844 crio[2242]: time="2024-09-16 10:35:32.166450214Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482932166425986,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fdffa919-c9cb-4e25-878a-42fade9ac77c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:35:32 functional-553844 crio[2242]: time="2024-09-16 10:35:32.166860950Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=078c0e58-63b6-4d0a-aa5f-928699bd7b46 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:35:32 functional-553844 crio[2242]: time="2024-09-16 10:35:32.166939157Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=078c0e58-63b6-4d0a-aa5f-928699bd7b46 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:35:32 functional-553844 crio[2242]: time="2024-09-16 10:35:32.167306793Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c9566037419faa5d15b1a25d437bab469eb973d38022593dd1fcea875b372030,PodSandboxId:224c8313d2a4b81f7902acab9a95c4674bb885601e7d6c432c194d4704f68448,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726482907909898403,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e9406d783b81f1f83bb9b03dd50757a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b4648b5566f070359fd2b71b0a6be370ac0bf268f558065e56f6c081c9da147,PodSandboxId:786e02c9f268ffa4631ec0765f6e369ee8084dbb9f6f0ceb206328ad1483a95b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726482907861478054,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ba1ce2146f556353256cee766fb22aa,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8a2455326fe0510166f5919a7ba808e002c9e7e6f6bac0073ebcdf2617d2c12,PodSandboxId:f630bd7b31a9986a64781d12465cbfb94677fffd47ee676d73cb45631f7bb0b1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726482907858255598,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cf351cdb4e05fb19a16881fc8f9a8bc,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ef8ee89662fcae36d3705fd7eef6e5e8e8ed2765a0c899ea2692bc5e55770bb,PodSandboxId:795a8e1b509b3194f41662edcf7b963b352e10b9d3a0b2b7bc09bb8c879e6c3c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726482895162463099,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8d5zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7709f753-5ea7-43c8-9573-107c8507e92b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /d
ev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c7df787d684e9cc5ca8f6cf633ac9055d7ef72c0a76d54b25fd4a3d62f7b02,PodSandboxId:f234b24619f341b5993ad8541868405403fc798ed93fed40f3108616a4659944,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726482895179741498,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41228d6-b7ff-4315-b9c5-05b5cc4d0acd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8addedc5b3b7251bda1891ec03e7b558acf7e48a679719cc2d0eb9af89051324,PodSandboxId:5de6db3341a3523f45534d3d25f38a36a50d8a1f094c79ff3c7afffbde2686bb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726482895774138461,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ntnpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0dcfd13-b1bc-45ef-9800-c98d2063bd43,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":
53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dda8bc32e425e06c63a2ccb84bdd071dc515520e554e6e3a5a8b376e9a65c15a,PodSandboxId:b212b903ed97c57b27e7215769435327df006237ec57bc4233e178bfb5d746f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726482895106585225,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b392e106920b290edb060cbb3942770e,},Annotations:map[str
ing]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e06948fb7d78a484090a08d9f88a0d72c4998279675de0ea7e60d51401d789c,PodSandboxId:786e02c9f268ffa4631ec0765f6e369ee8084dbb9f6f0ceb206328ad1483a95b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726482894975335593,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ba1ce2146f556353256cee766fb22aa,},Annota
tions:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3fe318aca7e7a437de0e19d52ce02314908224735a91ac782c8f4ee933b9539,PodSandboxId:f630bd7b31a9986a64781d12465cbfb94677fffd47ee676d73cb45631f7bb0b1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726482894944421475,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cf351cdb4e05fb19a16881fc8f9a8bc,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29f56fdf2e13c8c028dc80b03b1db0ee8da7289244b3368f9a6e8716db213d1e,PodSandboxId:224c8313d2a4b81f7902acab9a95c4674bb885601e7d6c432c194d4704f68448,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726482894870543821,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e9406d783b81f1f83bb9b03dd50757a,},Annotations:map[string]string{io.k
ubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0718da2983026c5d88757cc81f8d9db82763d8eaec64c8089b706fcdc15d2866,PodSandboxId:b2f2f51ddb95b3e9dbe57ebb21f9bf4c21eb43272b2604370d591f616375026b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726482869566873626,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41228d6-b7ff-4315-b9c5-05b5cc4d0acd,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2067f72690f60f2753b6b33ceaae6c8647431c4c0a6055c0943dffa6a611621,PodSandboxId:1d959480e71233b44443c2da5a38dc6f17f715531f622ace35f4a230f333de17,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726482869390985781,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ntnpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0dcfd13-b1bc-45ef-9800-c98d2063bd43,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kuberne
tes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:665e5ce6ab7a5a79da4635071094125712865b62fcd581daf2db7fff5bafce8a,PodSandboxId:53f5b7dda836048946df712ae9b391241a8de9d30959a188c8aee4c8ba71382e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726482869036696040,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-pr
oxy-8d5zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7709f753-5ea7-43c8-9573-107c8507e92b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84edb04959b2d1ddce7d4879036071b77ee39eb9f0e5e90edc6fbb843efd2515,PodSandboxId:f4b841c3fa1896c534356912d55f4f0f87af6b9539af5b549eb238f45b8ff959,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726482857879938064,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553844,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: b392e106920b290edb060cbb3942770e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=078c0e58-63b6-4d0a-aa5f-928699bd7b46 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:35:32 functional-553844 crio[2242]: time="2024-09-16 10:35:32.208848211Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d493ae9a-5323-4b59-8f3a-686c6077746c name=/runtime.v1.RuntimeService/Version
	Sep 16 10:35:32 functional-553844 crio[2242]: time="2024-09-16 10:35:32.208940911Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d493ae9a-5323-4b59-8f3a-686c6077746c name=/runtime.v1.RuntimeService/Version
	Sep 16 10:35:32 functional-553844 crio[2242]: time="2024-09-16 10:35:32.210672760Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7efc9db8-f4fd-4a54-8d85-f75846d62da7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:35:32 functional-553844 crio[2242]: time="2024-09-16 10:35:32.211127339Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482932211100000,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7efc9db8-f4fd-4a54-8d85-f75846d62da7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:35:32 functional-553844 crio[2242]: time="2024-09-16 10:35:32.211921895Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=135b34dc-df47-4ae7-a182-240b4fe7b8a3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:35:32 functional-553844 crio[2242]: time="2024-09-16 10:35:32.211990769Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=135b34dc-df47-4ae7-a182-240b4fe7b8a3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:35:32 functional-553844 crio[2242]: time="2024-09-16 10:35:32.212387850Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c9566037419faa5d15b1a25d437bab469eb973d38022593dd1fcea875b372030,PodSandboxId:224c8313d2a4b81f7902acab9a95c4674bb885601e7d6c432c194d4704f68448,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726482907909898403,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e9406d783b81f1f83bb9b03dd50757a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b4648b5566f070359fd2b71b0a6be370ac0bf268f558065e56f6c081c9da147,PodSandboxId:786e02c9f268ffa4631ec0765f6e369ee8084dbb9f6f0ceb206328ad1483a95b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726482907861478054,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ba1ce2146f556353256cee766fb22aa,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8a2455326fe0510166f5919a7ba808e002c9e7e6f6bac0073ebcdf2617d2c12,PodSandboxId:f630bd7b31a9986a64781d12465cbfb94677fffd47ee676d73cb45631f7bb0b1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726482907858255598,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cf351cdb4e05fb19a16881fc8f9a8bc,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ef8ee89662fcae36d3705fd7eef6e5e8e8ed2765a0c899ea2692bc5e55770bb,PodSandboxId:795a8e1b509b3194f41662edcf7b963b352e10b9d3a0b2b7bc09bb8c879e6c3c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726482895162463099,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8d5zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7709f753-5ea7-43c8-9573-107c8507e92b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /d
ev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c7df787d684e9cc5ca8f6cf633ac9055d7ef72c0a76d54b25fd4a3d62f7b02,PodSandboxId:f234b24619f341b5993ad8541868405403fc798ed93fed40f3108616a4659944,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726482895179741498,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41228d6-b7ff-4315-b9c5-05b5cc4d0acd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8addedc5b3b7251bda1891ec03e7b558acf7e48a679719cc2d0eb9af89051324,PodSandboxId:5de6db3341a3523f45534d3d25f38a36a50d8a1f094c79ff3c7afffbde2686bb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726482895774138461,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ntnpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0dcfd13-b1bc-45ef-9800-c98d2063bd43,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":
53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dda8bc32e425e06c63a2ccb84bdd071dc515520e554e6e3a5a8b376e9a65c15a,PodSandboxId:b212b903ed97c57b27e7215769435327df006237ec57bc4233e178bfb5d746f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726482895106585225,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b392e106920b290edb060cbb3942770e,},Annotations:map[str
ing]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e06948fb7d78a484090a08d9f88a0d72c4998279675de0ea7e60d51401d789c,PodSandboxId:786e02c9f268ffa4631ec0765f6e369ee8084dbb9f6f0ceb206328ad1483a95b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726482894975335593,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ba1ce2146f556353256cee766fb22aa,},Annota
tions:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3fe318aca7e7a437de0e19d52ce02314908224735a91ac782c8f4ee933b9539,PodSandboxId:f630bd7b31a9986a64781d12465cbfb94677fffd47ee676d73cb45631f7bb0b1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726482894944421475,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cf351cdb4e05fb19a16881fc8f9a8bc,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29f56fdf2e13c8c028dc80b03b1db0ee8da7289244b3368f9a6e8716db213d1e,PodSandboxId:224c8313d2a4b81f7902acab9a95c4674bb885601e7d6c432c194d4704f68448,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726482894870543821,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e9406d783b81f1f83bb9b03dd50757a,},Annotations:map[string]string{io.k
ubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0718da2983026c5d88757cc81f8d9db82763d8eaec64c8089b706fcdc15d2866,PodSandboxId:b2f2f51ddb95b3e9dbe57ebb21f9bf4c21eb43272b2604370d591f616375026b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726482869566873626,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41228d6-b7ff-4315-b9c5-05b5cc4d0acd,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2067f72690f60f2753b6b33ceaae6c8647431c4c0a6055c0943dffa6a611621,PodSandboxId:1d959480e71233b44443c2da5a38dc6f17f715531f622ace35f4a230f333de17,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726482869390985781,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ntnpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0dcfd13-b1bc-45ef-9800-c98d2063bd43,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kuberne
tes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:665e5ce6ab7a5a79da4635071094125712865b62fcd581daf2db7fff5bafce8a,PodSandboxId:53f5b7dda836048946df712ae9b391241a8de9d30959a188c8aee4c8ba71382e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726482869036696040,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-pr
oxy-8d5zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7709f753-5ea7-43c8-9573-107c8507e92b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84edb04959b2d1ddce7d4879036071b77ee39eb9f0e5e90edc6fbb843efd2515,PodSandboxId:f4b841c3fa1896c534356912d55f4f0f87af6b9539af5b549eb238f45b8ff959,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726482857879938064,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553844,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: b392e106920b290edb060cbb3942770e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=135b34dc-df47-4ae7-a182-240b4fe7b8a3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:35:32 functional-553844 crio[2242]: time="2024-09-16 10:35:32.255408995Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1d461526-190e-4d5e-8c98-f6b6c3976269 name=/runtime.v1.RuntimeService/Version
	Sep 16 10:35:32 functional-553844 crio[2242]: time="2024-09-16 10:35:32.255500492Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1d461526-190e-4d5e-8c98-f6b6c3976269 name=/runtime.v1.RuntimeService/Version
	Sep 16 10:35:32 functional-553844 crio[2242]: time="2024-09-16 10:35:32.256798490Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=51aa5e98-9e4f-4845-a1aa-6030a2d50a98 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:35:32 functional-553844 crio[2242]: time="2024-09-16 10:35:32.257247259Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482932257224263,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=51aa5e98-9e4f-4845-a1aa-6030a2d50a98 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:35:32 functional-553844 crio[2242]: time="2024-09-16 10:35:32.257692052Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d6b80ba3-e244-40ac-96bd-412ef803da4c name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:35:32 functional-553844 crio[2242]: time="2024-09-16 10:35:32.257764989Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d6b80ba3-e244-40ac-96bd-412ef803da4c name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:35:32 functional-553844 crio[2242]: time="2024-09-16 10:35:32.258130696Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c9566037419faa5d15b1a25d437bab469eb973d38022593dd1fcea875b372030,PodSandboxId:224c8313d2a4b81f7902acab9a95c4674bb885601e7d6c432c194d4704f68448,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726482907909898403,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e9406d783b81f1f83bb9b03dd50757a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b4648b5566f070359fd2b71b0a6be370ac0bf268f558065e56f6c081c9da147,PodSandboxId:786e02c9f268ffa4631ec0765f6e369ee8084dbb9f6f0ceb206328ad1483a95b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726482907861478054,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ba1ce2146f556353256cee766fb22aa,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8a2455326fe0510166f5919a7ba808e002c9e7e6f6bac0073ebcdf2617d2c12,PodSandboxId:f630bd7b31a9986a64781d12465cbfb94677fffd47ee676d73cb45631f7bb0b1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726482907858255598,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cf351cdb4e05fb19a16881fc8f9a8bc,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ef8ee89662fcae36d3705fd7eef6e5e8e8ed2765a0c899ea2692bc5e55770bb,PodSandboxId:795a8e1b509b3194f41662edcf7b963b352e10b9d3a0b2b7bc09bb8c879e6c3c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726482895162463099,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8d5zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7709f753-5ea7-43c8-9573-107c8507e92b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /d
ev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c7df787d684e9cc5ca8f6cf633ac9055d7ef72c0a76d54b25fd4a3d62f7b02,PodSandboxId:f234b24619f341b5993ad8541868405403fc798ed93fed40f3108616a4659944,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726482895179741498,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41228d6-b7ff-4315-b9c5-05b5cc4d0acd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8addedc5b3b7251bda1891ec03e7b558acf7e48a679719cc2d0eb9af89051324,PodSandboxId:5de6db3341a3523f45534d3d25f38a36a50d8a1f094c79ff3c7afffbde2686bb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726482895774138461,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ntnpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0dcfd13-b1bc-45ef-9800-c98d2063bd43,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":
53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dda8bc32e425e06c63a2ccb84bdd071dc515520e554e6e3a5a8b376e9a65c15a,PodSandboxId:b212b903ed97c57b27e7215769435327df006237ec57bc4233e178bfb5d746f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726482895106585225,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b392e106920b290edb060cbb3942770e,},Annotations:map[str
ing]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e06948fb7d78a484090a08d9f88a0d72c4998279675de0ea7e60d51401d789c,PodSandboxId:786e02c9f268ffa4631ec0765f6e369ee8084dbb9f6f0ceb206328ad1483a95b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726482894975335593,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ba1ce2146f556353256cee766fb22aa,},Annota
tions:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3fe318aca7e7a437de0e19d52ce02314908224735a91ac782c8f4ee933b9539,PodSandboxId:f630bd7b31a9986a64781d12465cbfb94677fffd47ee676d73cb45631f7bb0b1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726482894944421475,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cf351cdb4e05fb19a16881fc8f9a8bc,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29f56fdf2e13c8c028dc80b03b1db0ee8da7289244b3368f9a6e8716db213d1e,PodSandboxId:224c8313d2a4b81f7902acab9a95c4674bb885601e7d6c432c194d4704f68448,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726482894870543821,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e9406d783b81f1f83bb9b03dd50757a,},Annotations:map[string]string{io.k
ubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0718da2983026c5d88757cc81f8d9db82763d8eaec64c8089b706fcdc15d2866,PodSandboxId:b2f2f51ddb95b3e9dbe57ebb21f9bf4c21eb43272b2604370d591f616375026b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726482869566873626,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41228d6-b7ff-4315-b9c5-05b5cc4d0acd,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2067f72690f60f2753b6b33ceaae6c8647431c4c0a6055c0943dffa6a611621,PodSandboxId:1d959480e71233b44443c2da5a38dc6f17f715531f622ace35f4a230f333de17,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726482869390985781,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ntnpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0dcfd13-b1bc-45ef-9800-c98d2063bd43,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kuberne
tes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:665e5ce6ab7a5a79da4635071094125712865b62fcd581daf2db7fff5bafce8a,PodSandboxId:53f5b7dda836048946df712ae9b391241a8de9d30959a188c8aee4c8ba71382e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726482869036696040,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-pr
oxy-8d5zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7709f753-5ea7-43c8-9573-107c8507e92b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84edb04959b2d1ddce7d4879036071b77ee39eb9f0e5e90edc6fbb843efd2515,PodSandboxId:f4b841c3fa1896c534356912d55f4f0f87af6b9539af5b549eb238f45b8ff959,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726482857879938064,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553844,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: b392e106920b290edb060cbb3942770e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d6b80ba3-e244-40ac-96bd-412ef803da4c name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:35:32 functional-553844 crio[2242]: time="2024-09-16 10:35:32.290129240Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f51317c3-f8a5-46bd-8f24-39037b3d0277 name=/runtime.v1.RuntimeService/Version
	Sep 16 10:35:32 functional-553844 crio[2242]: time="2024-09-16 10:35:32.290200066Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f51317c3-f8a5-46bd-8f24-39037b3d0277 name=/runtime.v1.RuntimeService/Version
	Sep 16 10:35:32 functional-553844 crio[2242]: time="2024-09-16 10:35:32.295650007Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=28622947-e67c-4b66-9b6f-18d03cb517be name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:35:32 functional-553844 crio[2242]: time="2024-09-16 10:35:32.296019050Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482932295997585,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=28622947-e67c-4b66-9b6f-18d03cb517be name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:35:32 functional-553844 crio[2242]: time="2024-09-16 10:35:32.296684607Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d6e80d17-9a14-4050-86d5-2a6137659aae name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:35:32 functional-553844 crio[2242]: time="2024-09-16 10:35:32.296748454Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d6e80d17-9a14-4050-86d5-2a6137659aae name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:35:32 functional-553844 crio[2242]: time="2024-09-16 10:35:32.297096391Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c9566037419faa5d15b1a25d437bab469eb973d38022593dd1fcea875b372030,PodSandboxId:224c8313d2a4b81f7902acab9a95c4674bb885601e7d6c432c194d4704f68448,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726482907909898403,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e9406d783b81f1f83bb9b03dd50757a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b4648b5566f070359fd2b71b0a6be370ac0bf268f558065e56f6c081c9da147,PodSandboxId:786e02c9f268ffa4631ec0765f6e369ee8084dbb9f6f0ceb206328ad1483a95b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726482907861478054,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ba1ce2146f556353256cee766fb22aa,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8a2455326fe0510166f5919a7ba808e002c9e7e6f6bac0073ebcdf2617d2c12,PodSandboxId:f630bd7b31a9986a64781d12465cbfb94677fffd47ee676d73cb45631f7bb0b1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726482907858255598,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cf351cdb4e05fb19a16881fc8f9a8bc,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ef8ee89662fcae36d3705fd7eef6e5e8e8ed2765a0c899ea2692bc5e55770bb,PodSandboxId:795a8e1b509b3194f41662edcf7b963b352e10b9d3a0b2b7bc09bb8c879e6c3c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726482895162463099,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8d5zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7709f753-5ea7-43c8-9573-107c8507e92b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /d
ev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c7df787d684e9cc5ca8f6cf633ac9055d7ef72c0a76d54b25fd4a3d62f7b02,PodSandboxId:f234b24619f341b5993ad8541868405403fc798ed93fed40f3108616a4659944,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726482895179741498,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41228d6-b7ff-4315-b9c5-05b5cc4d0acd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8addedc5b3b7251bda1891ec03e7b558acf7e48a679719cc2d0eb9af89051324,PodSandboxId:5de6db3341a3523f45534d3d25f38a36a50d8a1f094c79ff3c7afffbde2686bb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726482895774138461,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ntnpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0dcfd13-b1bc-45ef-9800-c98d2063bd43,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":
53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dda8bc32e425e06c63a2ccb84bdd071dc515520e554e6e3a5a8b376e9a65c15a,PodSandboxId:b212b903ed97c57b27e7215769435327df006237ec57bc4233e178bfb5d746f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726482895106585225,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b392e106920b290edb060cbb3942770e,},Annotations:map[str
ing]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e06948fb7d78a484090a08d9f88a0d72c4998279675de0ea7e60d51401d789c,PodSandboxId:786e02c9f268ffa4631ec0765f6e369ee8084dbb9f6f0ceb206328ad1483a95b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726482894975335593,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ba1ce2146f556353256cee766fb22aa,},Annota
tions:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3fe318aca7e7a437de0e19d52ce02314908224735a91ac782c8f4ee933b9539,PodSandboxId:f630bd7b31a9986a64781d12465cbfb94677fffd47ee676d73cb45631f7bb0b1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726482894944421475,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cf351cdb4e05fb19a16881fc8f9a8bc,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29f56fdf2e13c8c028dc80b03b1db0ee8da7289244b3368f9a6e8716db213d1e,PodSandboxId:224c8313d2a4b81f7902acab9a95c4674bb885601e7d6c432c194d4704f68448,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726482894870543821,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e9406d783b81f1f83bb9b03dd50757a,},Annotations:map[string]string{io.k
ubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0718da2983026c5d88757cc81f8d9db82763d8eaec64c8089b706fcdc15d2866,PodSandboxId:b2f2f51ddb95b3e9dbe57ebb21f9bf4c21eb43272b2604370d591f616375026b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726482869566873626,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41228d6-b7ff-4315-b9c5-05b5cc4d0acd,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2067f72690f60f2753b6b33ceaae6c8647431c4c0a6055c0943dffa6a611621,PodSandboxId:1d959480e71233b44443c2da5a38dc6f17f715531f622ace35f4a230f333de17,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726482869390985781,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ntnpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0dcfd13-b1bc-45ef-9800-c98d2063bd43,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kuberne
tes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:665e5ce6ab7a5a79da4635071094125712865b62fcd581daf2db7fff5bafce8a,PodSandboxId:53f5b7dda836048946df712ae9b391241a8de9d30959a188c8aee4c8ba71382e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726482869036696040,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-pr
oxy-8d5zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7709f753-5ea7-43c8-9573-107c8507e92b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84edb04959b2d1ddce7d4879036071b77ee39eb9f0e5e90edc6fbb843efd2515,PodSandboxId:f4b841c3fa1896c534356912d55f4f0f87af6b9539af5b549eb238f45b8ff959,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726482857879938064,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553844,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: b392e106920b290edb060cbb3942770e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d6e80d17-9a14-4050-86d5-2a6137659aae name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	c9566037419fa       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   24 seconds ago       Running             kube-scheduler            2                   224c8313d2a4b       kube-scheduler-functional-553844
	7b4648b5566f0       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   24 seconds ago       Running             kube-controller-manager   2                   786e02c9f268f       kube-controller-manager-functional-553844
	a8a2455326fe0       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   24 seconds ago       Running             kube-apiserver            2                   f630bd7b31a99       kube-apiserver-functional-553844
	8addedc5b3b72       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   36 seconds ago       Running             coredns                   1                   5de6db3341a35       coredns-7c65d6cfc9-ntnpc
	11c7df787d684       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   37 seconds ago       Running             storage-provisioner       1                   f234b24619f34       storage-provisioner
	5ef8ee89662fc       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   37 seconds ago       Running             kube-proxy                1                   795a8e1b509b3       kube-proxy-8d5zp
	dda8bc32e425e       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   37 seconds ago       Running             etcd                      1                   b212b903ed97c       etcd-functional-553844
	3e06948fb7d78       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   37 seconds ago       Exited              kube-controller-manager   1                   786e02c9f268f       kube-controller-manager-functional-553844
	a3fe318aca7e7       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   37 seconds ago       Exited              kube-apiserver            1                   f630bd7b31a99       kube-apiserver-functional-553844
	29f56fdf2e13c       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   37 seconds ago       Exited              kube-scheduler            1                   224c8313d2a4b       kube-scheduler-functional-553844
	0718da2983026       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   About a minute ago   Exited              storage-provisioner       0                   b2f2f51ddb95b       storage-provisioner
	e2067f72690f6       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   About a minute ago   Exited              coredns                   0                   1d959480e7123       coredns-7c65d6cfc9-ntnpc
	665e5ce6ab7a5       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   About a minute ago   Exited              kube-proxy                0                   53f5b7dda8360       kube-proxy-8d5zp
	84edb04959b2d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   About a minute ago   Exited              etcd                      0                   f4b841c3fa189       etcd-functional-553844
	
	
	==> coredns [8addedc5b3b7251bda1891ec03e7b558acf7e48a679719cc2d0eb9af89051324] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:49303 - 36766 "HINFO IN 7792431763943854020.5109512536554140100. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.028767023s
	
	
	==> coredns [e2067f72690f60f2753b6b33ceaae6c8647431c4c0a6055c0943dffa6a611621] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:44555 - 37636 "HINFO IN 1428552004750772321.6386749862655392797. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.155382227s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-553844
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-553844
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=functional-553844
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_34_24_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:34:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-553844
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:35:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:35:10 +0000   Mon, 16 Sep 2024 10:34:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:35:10 +0000   Mon, 16 Sep 2024 10:34:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:35:10 +0000   Mon, 16 Sep 2024 10:34:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:35:10 +0000   Mon, 16 Sep 2024 10:34:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.230
	  Hostname:    functional-553844
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 e02954b5bf404845959584edf15b4c70
	  System UUID:                e02954b5-bf40-4845-9595-84edf15b4c70
	  Boot ID:                    f32c4525-4b20-48f0-8997-63a4d85e0a22
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-ntnpc                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     64s
	  kube-system                 etcd-functional-553844                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         69s
	  kube-system                 kube-apiserver-functional-553844             250m (12%)    0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 kube-controller-manager-functional-553844    200m (10%)    0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 kube-proxy-8d5zp                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 kube-scheduler-functional-553844             100m (5%)     0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 63s                kube-proxy       
	  Normal  Starting                 34s                kube-proxy       
	  Normal  NodeAllocatableEnforced  69s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  69s                kubelet          Node functional-553844 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    69s                kubelet          Node functional-553844 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     69s                kubelet          Node functional-553844 status is now: NodeHasSufficientPID
	  Normal  Starting                 69s                kubelet          Starting kubelet.
	  Normal  NodeReady                68s                kubelet          Node functional-553844 status is now: NodeReady
	  Normal  RegisteredNode           65s                node-controller  Node functional-553844 event: Registered Node functional-553844 in Controller
	  Normal  Starting                 25s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  25s (x8 over 25s)  kubelet          Node functional-553844 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25s (x8 over 25s)  kubelet          Node functional-553844 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     25s (x7 over 25s)  kubelet          Node functional-553844 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  25s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           18s                node-controller  Node functional-553844 event: Registered Node functional-553844 in Controller
	
	
	==> dmesg <==
	[Sep16 10:34] systemd-fstab-generator[590]: Ignoring "noauto" option for root device
	[  +0.060120] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061281] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.192979] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.124868] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.273205] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[  +3.973731] systemd-fstab-generator[746]: Ignoring "noauto" option for root device
	[  +4.437848] systemd-fstab-generator[882]: Ignoring "noauto" option for root device
	[  +0.066860] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.492744] systemd-fstab-generator[1218]: Ignoring "noauto" option for root device
	[  +0.076580] kauditd_printk_skb: 69 callbacks suppressed
	[  +4.721276] systemd-fstab-generator[1333]: Ignoring "noauto" option for root device
	[  +0.603762] kauditd_printk_skb: 46 callbacks suppressed
	[ +16.520372] systemd-fstab-generator[2167]: Ignoring "noauto" option for root device
	[  +0.078621] kauditd_printk_skb: 60 callbacks suppressed
	[  +0.049083] systemd-fstab-generator[2179]: Ignoring "noauto" option for root device
	[  +0.190042] systemd-fstab-generator[2193]: Ignoring "noauto" option for root device
	[  +0.140022] systemd-fstab-generator[2205]: Ignoring "noauto" option for root device
	[  +0.285394] systemd-fstab-generator[2233]: Ignoring "noauto" option for root device
	[  +8.132216] systemd-fstab-generator[2349]: Ignoring "noauto" option for root device
	[  +0.075744] kauditd_printk_skb: 100 callbacks suppressed
	[Sep16 10:35] systemd-fstab-generator[3196]: Ignoring "noauto" option for root device
	[  +0.082290] kauditd_printk_skb: 96 callbacks suppressed
	[  +9.215887] kauditd_printk_skb: 32 callbacks suppressed
	[ +11.912179] systemd-fstab-generator[3473]: Ignoring "noauto" option for root device
	
	
	==> etcd [84edb04959b2d1ddce7d4879036071b77ee39eb9f0e5e90edc6fbb843efd2515] <==
	{"level":"info","ts":"2024-09-16T10:34:19.704561Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 became leader at term 2"}
	{"level":"info","ts":"2024-09-16T10:34:19.704668Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f4acae94ef986412 elected leader f4acae94ef986412 at term 2"}
	{"level":"info","ts":"2024-09-16T10:34:19.706149Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:34:19.707329Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"f4acae94ef986412","local-member-attributes":"{Name:functional-553844 ClientURLs:[https://192.168.39.230:2379]}","request-path":"/0/members/f4acae94ef986412/attributes","cluster-id":"b0aea99135fe63d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:34:19.707491Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:34:19.707836Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:34:19.708720Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:34:19.709583Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:34:19.710343Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.230:2379"}
	{"level":"info","ts":"2024-09-16T10:34:19.707961Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b0aea99135fe63d","local-member-id":"f4acae94ef986412","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:34:19.710492Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:34:19.710531Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:34:19.710611Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T10:34:19.708779Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:34:19.710874Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:34:39.151449Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-16T10:34:39.151612Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-553844","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.230:2380"],"advertise-client-urls":["https://192.168.39.230:2379"]}
	{"level":"warn","ts":"2024-09-16T10:34:39.151734Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:34:39.151823Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:34:39.218449Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.230:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:34:39.218489Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.230:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-16T10:34:39.218574Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"f4acae94ef986412","current-leader-member-id":"f4acae94ef986412"}
	{"level":"info","ts":"2024-09-16T10:34:39.417180Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.230:2380"}
	{"level":"info","ts":"2024-09-16T10:34:39.417312Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.230:2380"}
	{"level":"info","ts":"2024-09-16T10:34:39.417337Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-553844","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.230:2380"],"advertise-client-urls":["https://192.168.39.230:2379"]}
	
	
	==> etcd [dda8bc32e425e06c63a2ccb84bdd071dc515520e554e6e3a5a8b376e9a65c15a] <==
	{"level":"info","ts":"2024-09-16T10:34:55.917354Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b0aea99135fe63d","local-member-id":"f4acae94ef986412","added-peer-id":"f4acae94ef986412","added-peer-peer-urls":["https://192.168.39.230:2380"]}
	{"level":"info","ts":"2024-09-16T10:34:55.917463Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b0aea99135fe63d","local-member-id":"f4acae94ef986412","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:34:55.917506Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:34:55.918704Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:34:55.932222Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T10:34:55.933659Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.230:2380"}
	{"level":"info","ts":"2024-09-16T10:34:55.933712Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.230:2380"}
	{"level":"info","ts":"2024-09-16T10:34:55.933947Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"f4acae94ef986412","initial-advertise-peer-urls":["https://192.168.39.230:2380"],"listen-peer-urls":["https://192.168.39.230:2380"],"advertise-client-urls":["https://192.168.39.230:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.230:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T10:34:55.936086Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T10:34:56.955096Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-16T10:34:56.955132Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-16T10:34:56.955163Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 received MsgPreVoteResp from f4acae94ef986412 at term 2"}
	{"level":"info","ts":"2024-09-16T10:34:56.955177Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 became candidate at term 3"}
	{"level":"info","ts":"2024-09-16T10:34:56.955183Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 received MsgVoteResp from f4acae94ef986412 at term 3"}
	{"level":"info","ts":"2024-09-16T10:34:56.955191Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 became leader at term 3"}
	{"level":"info","ts":"2024-09-16T10:34:56.955203Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f4acae94ef986412 elected leader f4acae94ef986412 at term 3"}
	{"level":"info","ts":"2024-09-16T10:34:56.959113Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"f4acae94ef986412","local-member-attributes":"{Name:functional-553844 ClientURLs:[https://192.168.39.230:2379]}","request-path":"/0/members/f4acae94ef986412/attributes","cluster-id":"b0aea99135fe63d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:34:56.959223Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:34:56.959352Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:34:56.959702Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:34:56.959718Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:34:56.960394Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:34:56.960508Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:34:56.961360Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T10:34:56.961615Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.230:2379"}
	
	
	==> kernel <==
	 10:35:32 up 1 min,  0 users,  load average: 0.25, 0.10, 0.04
	Linux functional-553844 5.10.207 #1 SMP Sun Sep 15 20:39:46 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [a3fe318aca7e7a437de0e19d52ce02314908224735a91ac782c8f4ee933b9539] <==
	I0916 10:34:58.362657       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0916 10:34:58.362698       1 secure_serving.go:258] Stopped listening on [::]:8441
	I0916 10:34:58.362728       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0916 10:34:58.363145       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0916 10:34:58.369146       1 dynamic_serving_content.go:149] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0916 10:34:58.389365       1 controller.go:157] Shutting down quota evaluator
	I0916 10:34:58.389399       1 controller.go:176] quota evaluator worker shutdown
	I0916 10:34:58.390157       1 controller.go:176] quota evaluator worker shutdown
	I0916 10:34:58.390251       1 controller.go:176] quota evaluator worker shutdown
	I0916 10:34:58.390276       1 controller.go:176] quota evaluator worker shutdown
	I0916 10:34:58.390282       1 controller.go:176] quota evaluator worker shutdown
	E0916 10:34:59.144838       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8441/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:34:59.144899       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8441/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8441: connect: connection refused. Retrying...
	W0916 10:35:00.144926       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8441/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8441: connect: connection refused. Retrying...
	E0916 10:35:00.145224       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8441/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:35:01.145011       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8441/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8441: connect: connection refused. Retrying...
	E0916 10:35:01.145158       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8441/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:35:02.145262       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8441/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8441: connect: connection refused. Retrying...
	E0916 10:35:02.145467       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8441/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:35:03.145393       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8441/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8441: connect: connection refused. Retrying...
	E0916 10:35:03.145608       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8441/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:35:04.145258       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8441/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8441: connect: connection refused. Retrying...
	E0916 10:35:04.145649       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8441/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:35:05.144740       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8441/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8441: connect: connection refused. Retrying...
	E0916 10:35:05.145003       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8441/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8441: connect: connection refused" logger="UnhandledError"
	
	
	==> kube-apiserver [a8a2455326fe0510166f5919a7ba808e002c9e7e6f6bac0073ebcdf2617d2c12] <==
	I0916 10:35:10.817642       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0916 10:35:10.821388       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0916 10:35:10.821418       1 policy_source.go:224] refreshing policies
	I0916 10:35:10.848027       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0916 10:35:10.848431       1 aggregator.go:171] initial CRD sync complete...
	I0916 10:35:10.848456       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 10:35:10.848514       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 10:35:10.848521       1 cache.go:39] Caches are synced for autoregister controller
	I0916 10:35:10.891021       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0916 10:35:10.891238       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0916 10:35:10.893720       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0916 10:35:10.894833       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0916 10:35:10.894861       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0916 10:35:10.895008       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0916 10:35:10.912774       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0916 10:35:10.913152       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 10:35:10.920344       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 10:35:11.693112       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0916 10:35:11.908543       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.230]
	I0916 10:35:11.914737       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 10:35:12.098488       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 10:35:12.108702       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 10:35:12.144954       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 10:35:12.176210       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 10:35:12.183000       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [3e06948fb7d78a484090a08d9f88a0d72c4998279675de0ea7e60d51401d789c] <==
	
	
	==> kube-controller-manager [7b4648b5566f070359fd2b71b0a6be370ac0bf268f558065e56f6c081c9da147] <==
	I0916 10:35:14.120935       1 shared_informer.go:320] Caches are synced for expand
	I0916 10:35:14.120843       1 shared_informer.go:320] Caches are synced for HPA
	I0916 10:35:14.121152       1 shared_informer.go:320] Caches are synced for TTL
	I0916 10:35:14.122526       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0916 10:35:14.122616       1 shared_informer.go:320] Caches are synced for ephemeral
	I0916 10:35:14.122690       1 shared_informer.go:320] Caches are synced for taint
	I0916 10:35:14.122803       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0916 10:35:14.123280       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-553844"
	I0916 10:35:14.124941       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0916 10:35:14.144150       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0916 10:35:14.146147       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0916 10:35:14.148698       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0916 10:35:14.153801       1 shared_informer.go:320] Caches are synced for PV protection
	I0916 10:35:14.209749       1 shared_informer.go:320] Caches are synced for persistent volume
	I0916 10:35:14.242927       1 shared_informer.go:320] Caches are synced for attach detach
	I0916 10:35:14.298281       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:35:14.321144       1 shared_informer.go:320] Caches are synced for stateful set
	I0916 10:35:14.321212       1 shared_informer.go:320] Caches are synced for daemon sets
	I0916 10:35:14.326094       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:35:14.534087       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="385.245988ms"
	I0916 10:35:14.534305       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="82.383µs"
	I0916 10:35:14.753631       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:35:14.816601       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:35:14.816647       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0916 10:35:17.621436       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="88.997µs"
	
	
	==> kube-proxy [5ef8ee89662fcae36d3705fd7eef6e5e8e8ed2765a0c899ea2692bc5e55770bb] <==
	W0916 10:34:58.431668       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:34:58.431713       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:34:58.431778       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:34:58.431838       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:34:59.284989       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:34:59.285188       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:34:59.332364       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-553844&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:34:59.332464       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-553844&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:34:59.470296       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:34:59.470425       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:35:01.798494       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-553844&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:35:01.798626       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-553844&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:35:01.949792       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:35:01.949869       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:35:02.221487       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:35:02.221565       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:35:06.652928       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:35:06.652990       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:35:07.272641       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-553844&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:35:07.272703       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-553844&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:35:07.363931       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:35:07.363993       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	I0916 10:35:14.930499       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:35:15.331242       1 shared_informer.go:320] Caches are synced for node config
	I0916 10:35:16.430835       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [665e5ce6ab7a5a79da4635071094125712865b62fcd581daf2db7fff5bafce8a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0916 10:34:29.358558       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0916 10:34:29.370671       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.230"]
	E0916 10:34:29.370729       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:34:29.497786       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0916 10:34:29.497892       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0916 10:34:29.497969       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:34:29.504350       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:34:29.504625       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:34:29.504656       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:34:29.512871       1 config.go:199] "Starting service config controller"
	I0916 10:34:29.512919       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:34:29.512969       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:34:29.512973       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:34:29.513465       1 config.go:328] "Starting node config controller"
	I0916 10:34:29.513501       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:34:29.613132       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:34:29.613189       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:34:29.615147       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [29f56fdf2e13c8c028dc80b03b1db0ee8da7289244b3368f9a6e8716db213d1e] <==
	I0916 10:34:56.127606       1 serving.go:386] Generated self-signed cert in-memory
	W0916 10:34:58.216123       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0916 10:34:58.216271       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0916 10:34:58.216330       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 10:34:58.216338       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 10:34:58.329214       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0916 10:34:58.329252       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:34:58.339781       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0916 10:34:58.339820       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0916 10:34:58.341161       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 10:34:58.339879       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0916 10:34:58.441945       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 10:35:05.904806       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0916 10:35:05.904973       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0916 10:35:05.905193       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [c9566037419faa5d15b1a25d437bab469eb973d38022593dd1fcea875b372030] <==
	I0916 10:35:09.773229       1 serving.go:386] Generated self-signed cert in-memory
	W0916 10:35:10.768440       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0916 10:35:10.768857       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0916 10:35:10.768917       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 10:35:10.768943       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 10:35:10.817479       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0916 10:35:10.817581       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:35:10.824338       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0916 10:35:10.824417       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 10:35:10.825100       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0916 10:35:10.825460       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0916 10:35:10.925324       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 10:35:07 functional-553844 kubelet[3203]: I0916 10:35:07.645293    3203 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0cf351cdb4e05fb19a16881fc8f9a8bc-usr-share-ca-certificates\") pod \"kube-apiserver-functional-553844\" (UID: \"0cf351cdb4e05fb19a16881fc8f9a8bc\") " pod="kube-system/kube-apiserver-functional-553844"
	Sep 16 10:35:07 functional-553844 kubelet[3203]: I0916 10:35:07.645315    3203 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0ba1ce2146f556353256cee766fb22aa-k8s-certs\") pod \"kube-controller-manager-functional-553844\" (UID: \"0ba1ce2146f556353256cee766fb22aa\") " pod="kube-system/kube-controller-manager-functional-553844"
	Sep 16 10:35:07 functional-553844 kubelet[3203]: I0916 10:35:07.645334    3203 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8e9406d783b81f1f83bb9b03dd50757a-kubeconfig\") pod \"kube-scheduler-functional-553844\" (UID: \"8e9406d783b81f1f83bb9b03dd50757a\") " pod="kube-system/kube-scheduler-functional-553844"
	Sep 16 10:35:07 functional-553844 kubelet[3203]: I0916 10:35:07.788377    3203 kubelet_node_status.go:72] "Attempting to register node" node="functional-553844"
	Sep 16 10:35:07 functional-553844 kubelet[3203]: E0916 10:35:07.789378    3203 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8441/api/v1/nodes\": dial tcp 192.168.39.230:8441: connect: connection refused" node="functional-553844"
	Sep 16 10:35:07 functional-553844 kubelet[3203]: I0916 10:35:07.840020    3203 scope.go:117] "RemoveContainer" containerID="a3fe318aca7e7a437de0e19d52ce02314908224735a91ac782c8f4ee933b9539"
	Sep 16 10:35:07 functional-553844 kubelet[3203]: I0916 10:35:07.840577    3203 scope.go:117] "RemoveContainer" containerID="3e06948fb7d78a484090a08d9f88a0d72c4998279675de0ea7e60d51401d789c"
	Sep 16 10:35:07 functional-553844 kubelet[3203]: I0916 10:35:07.842114    3203 scope.go:117] "RemoveContainer" containerID="29f56fdf2e13c8c028dc80b03b1db0ee8da7289244b3368f9a6e8716db213d1e"
	Sep 16 10:35:08 functional-553844 kubelet[3203]: E0916 10:35:08.005440    3203 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-553844?timeout=10s\": dial tcp 192.168.39.230:8441: connect: connection refused" interval="800ms"
	Sep 16 10:35:08 functional-553844 kubelet[3203]: I0916 10:35:08.191646    3203 kubelet_node_status.go:72] "Attempting to register node" node="functional-553844"
	Sep 16 10:35:10 functional-553844 kubelet[3203]: I0916 10:35:10.876113    3203 kubelet_node_status.go:111] "Node was previously registered" node="functional-553844"
	Sep 16 10:35:10 functional-553844 kubelet[3203]: I0916 10:35:10.876237    3203 kubelet_node_status.go:75] "Successfully registered node" node="functional-553844"
	Sep 16 10:35:10 functional-553844 kubelet[3203]: I0916 10:35:10.876264    3203 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 16 10:35:10 functional-553844 kubelet[3203]: I0916 10:35:10.877661    3203 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 16 10:35:10 functional-553844 kubelet[3203]: E0916 10:35:10.910901    3203 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"etcd-functional-553844\" already exists" pod="kube-system/etcd-functional-553844"
	Sep 16 10:35:11 functional-553844 kubelet[3203]: I0916 10:35:11.386721    3203 apiserver.go:52] "Watching apiserver"
	Sep 16 10:35:11 functional-553844 kubelet[3203]: I0916 10:35:11.413817    3203 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 16 10:35:11 functional-553844 kubelet[3203]: I0916 10:35:11.477402    3203 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7709f753-5ea7-43c8-9573-107c8507e92b-xtables-lock\") pod \"kube-proxy-8d5zp\" (UID: \"7709f753-5ea7-43c8-9573-107c8507e92b\") " pod="kube-system/kube-proxy-8d5zp"
	Sep 16 10:35:11 functional-553844 kubelet[3203]: I0916 10:35:11.477604    3203 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7709f753-5ea7-43c8-9573-107c8507e92b-lib-modules\") pod \"kube-proxy-8d5zp\" (UID: \"7709f753-5ea7-43c8-9573-107c8507e92b\") " pod="kube-system/kube-proxy-8d5zp"
	Sep 16 10:35:11 functional-553844 kubelet[3203]: I0916 10:35:11.477696    3203 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f41228d6-b7ff-4315-b9c5-05b5cc4d0acd-tmp\") pod \"storage-provisioner\" (UID: \"f41228d6-b7ff-4315-b9c5-05b5cc4d0acd\") " pod="kube-system/storage-provisioner"
	Sep 16 10:35:11 functional-553844 kubelet[3203]: E0916 10:35:11.564093    3203 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"etcd-functional-553844\" already exists" pod="kube-system/etcd-functional-553844"
	Sep 16 10:35:17 functional-553844 kubelet[3203]: E0916 10:35:17.490489    3203 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482917487632453,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:35:17 functional-553844 kubelet[3203]: E0916 10:35:17.490529    3203 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482917487632453,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:35:27 functional-553844 kubelet[3203]: E0916 10:35:27.494110    3203 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482927493621389,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:35:27 functional-553844 kubelet[3203]: E0916 10:35:27.494139    3203 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482927493621389,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [0718da2983026c5d88757cc81f8d9db82763d8eaec64c8089b706fcdc15d2866] <==
	I0916 10:34:29.683223       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	
	
	==> storage-provisioner [11c7df787d684e9cc5ca8f6cf633ac9055d7ef72c0a76d54b25fd4a3d62f7b02] <==
	I0916 10:34:56.077531       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 10:34:58.308783       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 10:34:58.325776       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	E0916 10:34:59.385726       1 leaderelection.go:361] Failed to update lock: Put "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0916 10:35:02.837859       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0916 10:35:07.096688       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	I0916 10:35:10.935925       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 10:35:10.936824       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-553844_6476f869-e006-4732-b59f-a625eeed2789!
	I0916 10:35:10.936273       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e694f165-5b09-46c9-81ea-a2730989eaff", APIVersion:"v1", ResourceVersion:"401", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-553844_6476f869-e006-4732-b59f-a625eeed2789 became leader
	I0916 10:35:11.037327       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-553844_6476f869-e006-4732-b59f-a625eeed2789!
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0916 10:35:31.842224   17920 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19651-3851/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
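
The "bufio.Scanner: token too long" error in the stderr block above comes from reading lastStart.txt line by line with Go's bufio.Scanner, whose per-line limit defaults to 64 KiB (bufio.MaxScanTokenSize); a single log line longer than that aborts the scan. A minimal sketch of the idea, not minikube's actual logs.go, is to give the scanner a larger buffer before scanning:

package main

import (
	"bufio"
	"fmt"
	"os"
)

// Minimal sketch (not minikube's implementation): scan a log file whose
// lines may exceed bufio.Scanner's default 64 KiB token limit.
func main() {
	path := "lastStart.txt" // stand-in for the file named in the error above
	if len(os.Args) > 1 {
		path = os.Args[1]
	}

	f, err := os.Open(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// Default max token size is bufio.MaxScanTokenSize (64 KiB); allow up to 10 MiB per line.
	sc.Buffer(make([]byte, 1024*1024), 10*1024*1024)

	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	if err := sc.Err(); err != nil {
		// Without the Buffer call above, an over-long line surfaces here as
		// "bufio.Scanner: token too long".
		fmt.Fprintln(os.Stderr, "scan error:", err)
	}
}
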
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-553844 -n functional-553844
helpers_test.go:261: (dbg) Run:  kubectl --context functional-553844 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context functional-553844 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (508.43µs)
helpers_test.go:263: kubectl --context functional-553844 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestFunctional/serial/KubeContext (2.02s)
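
Every kubectl invocation in this run fails before it ever reaches the cluster: "fork/exec /usr/local/bin/kubectl: exec format error" means the kernel refused to execute the binary itself, which usually points at a kubectl built for a different architecture (or a truncated/corrupt file) installed on the agent, not at a cluster problem. A small diagnostic sketch, not part of the test harness, that compares a binary's ELF machine type with the host architecture:

package main

import (
	"debug/elf"
	"fmt"
	"os"
	"runtime"
)

// Hypothetical helper: report a binary's ELF machine type and whether it
// matches the architecture of the running host.
func main() {
	path := "/usr/local/bin/kubectl" // the binary the failing tests invoke
	if len(os.Args) > 1 {
		path = os.Args[1]
	}

	f, err := elf.Open(path)
	if err != nil {
		// Non-ELF content (a darwin/windows build, an HTML error page saved
		// as "kubectl", a zero-byte file) also yields "exec format error".
		fmt.Fprintf(os.Stderr, "%s is not a readable ELF binary: %v\n", path, err)
		os.Exit(1)
	}
	defer f.Close()

	fmt.Printf("binary: %s, ELF machine: %v, host GOARCH: %s\n", path, f.Machine, runtime.GOARCH)

	want := map[string]elf.Machine{"amd64": elf.EM_X86_64, "arm64": elf.EM_AARCH64, "386": elf.EM_386}
	if m, ok := want[runtime.GOARCH]; ok && f.Machine != m {
		fmt.Println("architecture mismatch: executing this binary would fail with 'exec format error'")
	}
}

On a linux/amd64 agent such as ubuntu-20-agent-15, anything other than an EM_X86_64 ELF binary at that path would reproduce the error seen here and in the following tests.
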

                                                
                                    
TestFunctional/serial/KubectlGetPods (1.94s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-553844 get po -A
functional_test.go:696: (dbg) Non-zero exit: kubectl --context functional-553844 get po -A: fork/exec /usr/local/bin/kubectl: exec format error (357.353µs)
functional_test.go:698: failed to get kubectl pods: args "kubectl --context functional-553844 get po -A" : fork/exec /usr/local/bin/kubectl: exec format error
functional_test.go:705: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-553844 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-553844 -n functional-553844
helpers_test.go:244: <<< TestFunctional/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-553844 logs -n 25: (1.37889674s)
helpers_test.go:252: TestFunctional/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| addons  | addons-001438 addons disable   | addons-001438     | jenkins | v1.34.0 | 16 Sep 24 10:26 UTC | 16 Sep 24 10:27 UTC |
	|         | helm-tiller --alsologtostderr  |                   |         |         |                     |                     |
	|         | -v=1                           |                   |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p    | addons-001438     | jenkins | v1.34.0 | 16 Sep 24 10:27 UTC | 16 Sep 24 10:27 UTC |
	|         | addons-001438                  |                   |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-001438     | jenkins | v1.34.0 | 16 Sep 24 10:27 UTC | 16 Sep 24 10:27 UTC |
	|         | addons-001438                  |                   |         |         |                     |                     |
	| addons  | addons-001438 addons           | addons-001438     | jenkins | v1.34.0 | 16 Sep 24 10:31 UTC | 16 Sep 24 10:31 UTC |
	|         | disable metrics-server         |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                   |         |         |                     |                     |
	| stop    | -p addons-001438               | addons-001438     | jenkins | v1.34.0 | 16 Sep 24 10:31 UTC | 16 Sep 24 10:32 UTC |
	| addons  | enable dashboard -p            | addons-001438     | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:32 UTC |
	|         | addons-001438                  |                   |         |         |                     |                     |
	| addons  | disable dashboard -p           | addons-001438     | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:32 UTC |
	|         | addons-001438                  |                   |         |         |                     |                     |
	| addons  | disable gvisor -p              | addons-001438     | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:32 UTC |
	|         | addons-001438                  |                   |         |         |                     |                     |
	| delete  | -p addons-001438               | addons-001438     | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:32 UTC |
	| start   | -p nospam-263701 -n=1          | nospam-263701     | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:33 UTC |
	|         | --memory=2250 --wait=false     |                   |         |         |                     |                     |
	|         | --log_dir=/tmp/nospam-263701   |                   |         |         |                     |                     |
	|         | --driver=kvm2                  |                   |         |         |                     |                     |
	|         | --container-runtime=crio       |                   |         |         |                     |                     |
	| start   | nospam-263701 --log_dir        | nospam-263701     | jenkins | v1.34.0 | 16 Sep 24 10:33 UTC |                     |
	|         | /tmp/nospam-263701 start       |                   |         |         |                     |                     |
	|         | --dry-run                      |                   |         |         |                     |                     |
	| start   | nospam-263701 --log_dir        | nospam-263701     | jenkins | v1.34.0 | 16 Sep 24 10:33 UTC |                     |
	|         | /tmp/nospam-263701 start       |                   |         |         |                     |                     |
	|         | --dry-run                      |                   |         |         |                     |                     |
	| start   | nospam-263701 --log_dir        | nospam-263701     | jenkins | v1.34.0 | 16 Sep 24 10:33 UTC |                     |
	|         | /tmp/nospam-263701 start       |                   |         |         |                     |                     |
	|         | --dry-run                      |                   |         |         |                     |                     |
	| pause   | nospam-263701 --log_dir        | nospam-263701     | jenkins | v1.34.0 | 16 Sep 24 10:33 UTC | 16 Sep 24 10:33 UTC |
	|         | /tmp/nospam-263701 pause       |                   |         |         |                     |                     |
	| pause   | nospam-263701 --log_dir        | nospam-263701     | jenkins | v1.34.0 | 16 Sep 24 10:33 UTC | 16 Sep 24 10:33 UTC |
	|         | /tmp/nospam-263701 pause       |                   |         |         |                     |                     |
	| pause   | nospam-263701 --log_dir        | nospam-263701     | jenkins | v1.34.0 | 16 Sep 24 10:33 UTC | 16 Sep 24 10:33 UTC |
	|         | /tmp/nospam-263701 pause       |                   |         |         |                     |                     |
	| unpause | nospam-263701 --log_dir        | nospam-263701     | jenkins | v1.34.0 | 16 Sep 24 10:33 UTC | 16 Sep 24 10:33 UTC |
	|         | /tmp/nospam-263701 unpause     |                   |         |         |                     |                     |
	| unpause | nospam-263701 --log_dir        | nospam-263701     | jenkins | v1.34.0 | 16 Sep 24 10:33 UTC | 16 Sep 24 10:33 UTC |
	|         | /tmp/nospam-263701 unpause     |                   |         |         |                     |                     |
	| unpause | nospam-263701 --log_dir        | nospam-263701     | jenkins | v1.34.0 | 16 Sep 24 10:33 UTC | 16 Sep 24 10:33 UTC |
	|         | /tmp/nospam-263701 unpause     |                   |         |         |                     |                     |
	| stop    | nospam-263701 --log_dir        | nospam-263701     | jenkins | v1.34.0 | 16 Sep 24 10:33 UTC | 16 Sep 24 10:33 UTC |
	|         | /tmp/nospam-263701 stop        |                   |         |         |                     |                     |
	| stop    | nospam-263701 --log_dir        | nospam-263701     | jenkins | v1.34.0 | 16 Sep 24 10:33 UTC | 16 Sep 24 10:33 UTC |
	|         | /tmp/nospam-263701 stop        |                   |         |         |                     |                     |
	| stop    | nospam-263701 --log_dir        | nospam-263701     | jenkins | v1.34.0 | 16 Sep 24 10:33 UTC | 16 Sep 24 10:33 UTC |
	|         | /tmp/nospam-263701 stop        |                   |         |         |                     |                     |
	| delete  | -p nospam-263701               | nospam-263701     | jenkins | v1.34.0 | 16 Sep 24 10:33 UTC | 16 Sep 24 10:33 UTC |
	| start   | -p functional-553844           | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:33 UTC | 16 Sep 24 10:34 UTC |
	|         | --memory=4000                  |                   |         |         |                     |                     |
	|         | --apiserver-port=8441          |                   |         |         |                     |                     |
	|         | --wait=all --driver=kvm2       |                   |         |         |                     |                     |
	|         | --container-runtime=crio       |                   |         |         |                     |                     |
	| start   | -p functional-553844           | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:34 UTC | 16 Sep 24 10:35 UTC |
	|         | --alsologtostderr -v=8         |                   |         |         |                     |                     |
	|---------|--------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:34:38
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:34:38.077439   17646 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:34:38.077542   17646 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:34:38.077549   17646 out.go:358] Setting ErrFile to fd 2...
	I0916 10:34:38.077553   17646 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:34:38.077744   17646 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3851/.minikube/bin
	I0916 10:34:38.078240   17646 out.go:352] Setting JSON to false
	I0916 10:34:38.079125   17646 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1028,"bootTime":1726481850,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:34:38.079218   17646 start.go:139] virtualization: kvm guest
	I0916 10:34:38.081269   17646 out.go:177] * [functional-553844] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:34:38.082653   17646 notify.go:220] Checking for updates...
	I0916 10:34:38.082693   17646 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:34:38.084064   17646 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:34:38.085453   17646 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3851/kubeconfig
	I0916 10:34:38.086964   17646 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 10:34:38.088245   17646 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:34:38.089480   17646 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:34:38.091189   17646 config.go:182] Loaded profile config "functional-553844": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:34:38.091271   17646 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:34:38.091718   17646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:34:38.091758   17646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:34:38.106583   17646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35599
	I0916 10:34:38.107005   17646 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:34:38.107759   17646 main.go:141] libmachine: Using API Version  1
	I0916 10:34:38.107779   17646 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:34:38.108182   17646 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:34:38.108417   17646 main.go:141] libmachine: (functional-553844) Calling .DriverName
	I0916 10:34:38.143506   17646 out.go:177] * Using the kvm2 driver based on existing profile
	I0916 10:34:38.144858   17646 start.go:297] selected driver: kvm2
	I0916 10:34:38.144879   17646 start.go:901] validating driver "kvm2" against &{Name:functional-553844 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-553844 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:34:38.144991   17646 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:34:38.145360   17646 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:34:38.145438   17646 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19651-3851/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0916 10:34:38.160331   17646 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0916 10:34:38.160977   17646 cni.go:84] Creating CNI manager for ""
	I0916 10:34:38.161032   17646 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0916 10:34:38.161088   17646 start.go:340] cluster config:
	{Name:functional-553844 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-553844 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:34:38.161230   17646 iso.go:125] acquiring lock: {Name:mk8165b793e44378487d96b0de120a258f46e187 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:34:38.163098   17646 out.go:177] * Starting "functional-553844" primary control-plane node in "functional-553844" cluster
	I0916 10:34:38.164351   17646 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:34:38.164388   17646 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0916 10:34:38.164398   17646 cache.go:56] Caching tarball of preloaded images
	I0916 10:34:38.164466   17646 preload.go:172] Found /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 10:34:38.164475   17646 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 10:34:38.164556   17646 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/config.json ...
	I0916 10:34:38.164739   17646 start.go:360] acquireMachinesLock for functional-553844: {Name:mk413037138532b69f77e412c3960796c09316f1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:34:38.164779   17646 start.go:364] duration metric: took 23.583µs to acquireMachinesLock for "functional-553844"
	I0916 10:34:38.164792   17646 start.go:96] Skipping create...Using existing machine configuration
	I0916 10:34:38.164799   17646 fix.go:54] fixHost starting: 
	I0916 10:34:38.165073   17646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:34:38.165103   17646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:34:38.179236   17646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44957
	I0916 10:34:38.179758   17646 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:34:38.180227   17646 main.go:141] libmachine: Using API Version  1
	I0916 10:34:38.180247   17646 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:34:38.180560   17646 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:34:38.180709   17646 main.go:141] libmachine: (functional-553844) Calling .DriverName
	I0916 10:34:38.180847   17646 main.go:141] libmachine: (functional-553844) Calling .GetState
	I0916 10:34:38.182307   17646 fix.go:112] recreateIfNeeded on functional-553844: state=Running err=<nil>
	W0916 10:34:38.182334   17646 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 10:34:38.184116   17646 out.go:177] * Updating the running kvm2 "functional-553844" VM ...
	I0916 10:34:38.185307   17646 machine.go:93] provisionDockerMachine start ...
	I0916 10:34:38.185326   17646 main.go:141] libmachine: (functional-553844) Calling .DriverName
	I0916 10:34:38.185506   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHHostname
	I0916 10:34:38.187626   17646 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:34:38.187927   17646 main.go:141] libmachine: (functional-553844) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3a:6f", ip: ""} in network mk-functional-553844: {Iface:virbr1 ExpiryTime:2024-09-16 11:33:58 +0000 UTC Type:0 Mac:52:54:00:9d:3a:6f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-553844 Clientid:01:52:54:00:9d:3a:6f}
	I0916 10:34:38.187950   17646 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined IP address 192.168.39.230 and MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:34:38.188086   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHPort
	I0916 10:34:38.188251   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
	I0916 10:34:38.188405   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
	I0916 10:34:38.188519   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHUsername
	I0916 10:34:38.188671   17646 main.go:141] libmachine: Using SSH client type: native
	I0916 10:34:38.188843   17646 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0916 10:34:38.188857   17646 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 10:34:38.297498   17646 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-553844
	
	I0916 10:34:38.297530   17646 main.go:141] libmachine: (functional-553844) Calling .GetMachineName
	I0916 10:34:38.297794   17646 buildroot.go:166] provisioning hostname "functional-553844"
	I0916 10:34:38.297825   17646 main.go:141] libmachine: (functional-553844) Calling .GetMachineName
	I0916 10:34:38.298016   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHHostname
	I0916 10:34:38.300725   17646 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:34:38.301057   17646 main.go:141] libmachine: (functional-553844) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3a:6f", ip: ""} in network mk-functional-553844: {Iface:virbr1 ExpiryTime:2024-09-16 11:33:58 +0000 UTC Type:0 Mac:52:54:00:9d:3a:6f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-553844 Clientid:01:52:54:00:9d:3a:6f}
	I0916 10:34:38.301088   17646 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined IP address 192.168.39.230 and MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:34:38.301225   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHPort
	I0916 10:34:38.301390   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
	I0916 10:34:38.301552   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
	I0916 10:34:38.301675   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHUsername
	I0916 10:34:38.301825   17646 main.go:141] libmachine: Using SSH client type: native
	I0916 10:34:38.301989   17646 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0916 10:34:38.302001   17646 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-553844 && echo "functional-553844" | sudo tee /etc/hostname
	I0916 10:34:38.424960   17646 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-553844
	
	I0916 10:34:38.424988   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHHostname
	I0916 10:34:38.427581   17646 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:34:38.427896   17646 main.go:141] libmachine: (functional-553844) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3a:6f", ip: ""} in network mk-functional-553844: {Iface:virbr1 ExpiryTime:2024-09-16 11:33:58 +0000 UTC Type:0 Mac:52:54:00:9d:3a:6f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-553844 Clientid:01:52:54:00:9d:3a:6f}
	I0916 10:34:38.427924   17646 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined IP address 192.168.39.230 and MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:34:38.428065   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHPort
	I0916 10:34:38.428258   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
	I0916 10:34:38.428366   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
	I0916 10:34:38.428491   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHUsername
	I0916 10:34:38.428669   17646 main.go:141] libmachine: Using SSH client type: native
	I0916 10:34:38.428884   17646 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0916 10:34:38.428907   17646 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-553844' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-553844/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-553844' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:34:38.538121   17646 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 10:34:38.538155   17646 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3851/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3851/.minikube}
	I0916 10:34:38.538194   17646 buildroot.go:174] setting up certificates
	I0916 10:34:38.538205   17646 provision.go:84] configureAuth start
	I0916 10:34:38.538215   17646 main.go:141] libmachine: (functional-553844) Calling .GetMachineName
	I0916 10:34:38.538466   17646 main.go:141] libmachine: (functional-553844) Calling .GetIP
	I0916 10:34:38.540938   17646 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:34:38.541247   17646 main.go:141] libmachine: (functional-553844) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3a:6f", ip: ""} in network mk-functional-553844: {Iface:virbr1 ExpiryTime:2024-09-16 11:33:58 +0000 UTC Type:0 Mac:52:54:00:9d:3a:6f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-553844 Clientid:01:52:54:00:9d:3a:6f}
	I0916 10:34:38.541278   17646 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined IP address 192.168.39.230 and MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:34:38.541369   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHHostname
	I0916 10:34:38.543545   17646 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:34:38.543884   17646 main.go:141] libmachine: (functional-553844) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3a:6f", ip: ""} in network mk-functional-553844: {Iface:virbr1 ExpiryTime:2024-09-16 11:33:58 +0000 UTC Type:0 Mac:52:54:00:9d:3a:6f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-553844 Clientid:01:52:54:00:9d:3a:6f}
	I0916 10:34:38.543925   17646 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined IP address 192.168.39.230 and MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:34:38.544016   17646 provision.go:143] copyHostCerts
	I0916 10:34:38.544046   17646 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem
	I0916 10:34:38.544079   17646 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem, removing ...
	I0916 10:34:38.544093   17646 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem
	I0916 10:34:38.544168   17646 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem (1082 bytes)
	I0916 10:34:38.544277   17646 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem
	I0916 10:34:38.544302   17646 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem, removing ...
	I0916 10:34:38.544310   17646 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem
	I0916 10:34:38.544335   17646 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem (1123 bytes)
	I0916 10:34:38.544406   17646 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem
	I0916 10:34:38.544429   17646 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem, removing ...
	I0916 10:34:38.544438   17646 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem
	I0916 10:34:38.544470   17646 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem (1679 bytes)
	I0916 10:34:38.544547   17646 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem org=jenkins.functional-553844 san=[127.0.0.1 192.168.39.230 functional-553844 localhost minikube]
	I0916 10:34:38.847217   17646 provision.go:177] copyRemoteCerts
	I0916 10:34:38.847294   17646 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:34:38.847346   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHHostname
	I0916 10:34:38.849820   17646 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:34:38.850114   17646 main.go:141] libmachine: (functional-553844) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3a:6f", ip: ""} in network mk-functional-553844: {Iface:virbr1 ExpiryTime:2024-09-16 11:33:58 +0000 UTC Type:0 Mac:52:54:00:9d:3a:6f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-553844 Clientid:01:52:54:00:9d:3a:6f}
	I0916 10:34:38.850141   17646 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined IP address 192.168.39.230 and MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:34:38.850337   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHPort
	I0916 10:34:38.850521   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
	I0916 10:34:38.850686   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHUsername
	I0916 10:34:38.850821   17646 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/functional-553844/id_rsa Username:docker}
	I0916 10:34:38.936570   17646 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 10:34:38.936641   17646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 10:34:38.965490   17646 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 10:34:38.965558   17646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0916 10:34:38.994515   17646 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 10:34:38.994585   17646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 10:34:39.023350   17646 provision.go:87] duration metric: took 485.133127ms to configureAuth
	I0916 10:34:39.023373   17646 buildroot.go:189] setting minikube options for container-runtime
	I0916 10:34:39.023521   17646 config.go:182] Loaded profile config "functional-553844": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:34:39.023586   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHHostname
	I0916 10:34:39.026305   17646 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:34:39.026605   17646 main.go:141] libmachine: (functional-553844) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3a:6f", ip: ""} in network mk-functional-553844: {Iface:virbr1 ExpiryTime:2024-09-16 11:33:58 +0000 UTC Type:0 Mac:52:54:00:9d:3a:6f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-553844 Clientid:01:52:54:00:9d:3a:6f}
	I0916 10:34:39.026634   17646 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined IP address 192.168.39.230 and MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:34:39.026800   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHPort
	I0916 10:34:39.026979   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
	I0916 10:34:39.027126   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
	I0916 10:34:39.027207   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHUsername
	I0916 10:34:39.027331   17646 main.go:141] libmachine: Using SSH client type: native
	I0916 10:34:39.027485   17646 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0916 10:34:39.027502   17646 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 10:34:44.559214   17646 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 10:34:44.559244   17646 machine.go:96] duration metric: took 6.373924238s to provisionDockerMachine
	I0916 10:34:44.559258   17646 start.go:293] postStartSetup for "functional-553844" (driver="kvm2")
	I0916 10:34:44.559271   17646 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:34:44.559293   17646 main.go:141] libmachine: (functional-553844) Calling .DriverName
	I0916 10:34:44.559630   17646 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:34:44.559656   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHHostname
	I0916 10:34:44.562588   17646 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:34:44.562954   17646 main.go:141] libmachine: (functional-553844) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3a:6f", ip: ""} in network mk-functional-553844: {Iface:virbr1 ExpiryTime:2024-09-16 11:33:58 +0000 UTC Type:0 Mac:52:54:00:9d:3a:6f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-553844 Clientid:01:52:54:00:9d:3a:6f}
	I0916 10:34:44.562985   17646 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined IP address 192.168.39.230 and MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:34:44.563239   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHPort
	I0916 10:34:44.563424   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
	I0916 10:34:44.563606   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHUsername
	I0916 10:34:44.563780   17646 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/functional-553844/id_rsa Username:docker}
	I0916 10:34:44.648160   17646 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:34:44.652463   17646 command_runner.go:130] > NAME=Buildroot
	I0916 10:34:44.652481   17646 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0916 10:34:44.652485   17646 command_runner.go:130] > ID=buildroot
	I0916 10:34:44.652490   17646 command_runner.go:130] > VERSION_ID=2023.02.9
	I0916 10:34:44.652497   17646 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0916 10:34:44.652658   17646 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 10:34:44.652680   17646 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3851/.minikube/addons for local assets ...
	I0916 10:34:44.652777   17646 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3851/.minikube/files for local assets ...
	I0916 10:34:44.652876   17646 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem -> 112032.pem in /etc/ssl/certs
	I0916 10:34:44.652886   17646 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem -> /etc/ssl/certs/112032.pem
	I0916 10:34:44.652968   17646 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/test/nested/copy/11203/hosts -> hosts in /etc/test/nested/copy/11203
	I0916 10:34:44.652978   17646 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/test/nested/copy/11203/hosts -> /etc/test/nested/copy/11203/hosts
	I0916 10:34:44.653023   17646 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/11203
	I0916 10:34:44.662633   17646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem --> /etc/ssl/certs/112032.pem (1708 bytes)
	I0916 10:34:44.687556   17646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/test/nested/copy/11203/hosts --> /etc/test/nested/copy/11203/hosts (40 bytes)
	I0916 10:34:44.710968   17646 start.go:296] duration metric: took 151.696977ms for postStartSetup
	I0916 10:34:44.711001   17646 fix.go:56] duration metric: took 6.546202275s for fixHost
	I0916 10:34:44.711032   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHHostname
	I0916 10:34:44.713557   17646 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:34:44.713866   17646 main.go:141] libmachine: (functional-553844) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3a:6f", ip: ""} in network mk-functional-553844: {Iface:virbr1 ExpiryTime:2024-09-16 11:33:58 +0000 UTC Type:0 Mac:52:54:00:9d:3a:6f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-553844 Clientid:01:52:54:00:9d:3a:6f}
	I0916 10:34:44.713899   17646 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined IP address 192.168.39.230 and MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:34:44.714055   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHPort
	I0916 10:34:44.714240   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
	I0916 10:34:44.714371   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
	I0916 10:34:44.714476   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHUsername
	I0916 10:34:44.714621   17646 main.go:141] libmachine: Using SSH client type: native
	I0916 10:34:44.714829   17646 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0916 10:34:44.714840   17646 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 10:34:44.821900   17646 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726482884.813574839
	
	I0916 10:34:44.821921   17646 fix.go:216] guest clock: 1726482884.813574839
	I0916 10:34:44.821928   17646 fix.go:229] Guest: 2024-09-16 10:34:44.813574839 +0000 UTC Remote: 2024-09-16 10:34:44.711005113 +0000 UTC m=+6.670369347 (delta=102.569726ms)
	I0916 10:34:44.821964   17646 fix.go:200] guest clock delta is within tolerance: 102.569726ms
	I0916 10:34:44.821973   17646 start.go:83] releasing machines lock for "functional-553844", held for 6.657185342s
	I0916 10:34:44.821994   17646 main.go:141] libmachine: (functional-553844) Calling .DriverName
	I0916 10:34:44.822279   17646 main.go:141] libmachine: (functional-553844) Calling .GetIP
	I0916 10:34:44.825000   17646 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:34:44.825343   17646 main.go:141] libmachine: (functional-553844) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3a:6f", ip: ""} in network mk-functional-553844: {Iface:virbr1 ExpiryTime:2024-09-16 11:33:58 +0000 UTC Type:0 Mac:52:54:00:9d:3a:6f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-553844 Clientid:01:52:54:00:9d:3a:6f}
	I0916 10:34:44.825372   17646 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined IP address 192.168.39.230 and MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:34:44.825505   17646 main.go:141] libmachine: (functional-553844) Calling .DriverName
	I0916 10:34:44.825984   17646 main.go:141] libmachine: (functional-553844) Calling .DriverName
	I0916 10:34:44.826163   17646 main.go:141] libmachine: (functional-553844) Calling .DriverName
	I0916 10:34:44.826218   17646 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:34:44.826272   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHHostname
	I0916 10:34:44.826336   17646 ssh_runner.go:195] Run: cat /version.json
	I0916 10:34:44.826360   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHHostname
	I0916 10:34:44.828843   17646 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:34:44.828894   17646 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:34:44.829188   17646 main.go:141] libmachine: (functional-553844) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3a:6f", ip: ""} in network mk-functional-553844: {Iface:virbr1 ExpiryTime:2024-09-16 11:33:58 +0000 UTC Type:0 Mac:52:54:00:9d:3a:6f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-553844 Clientid:01:52:54:00:9d:3a:6f}
	I0916 10:34:44.829217   17646 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined IP address 192.168.39.230 and MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:34:44.829338   17646 main.go:141] libmachine: (functional-553844) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3a:6f", ip: ""} in network mk-functional-553844: {Iface:virbr1 ExpiryTime:2024-09-16 11:33:58 +0000 UTC Type:0 Mac:52:54:00:9d:3a:6f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-553844 Clientid:01:52:54:00:9d:3a:6f}
	I0916 10:34:44.829349   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHPort
	I0916 10:34:44.829364   17646 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined IP address 192.168.39.230 and MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:34:44.829517   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
	I0916 10:34:44.829527   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHPort
	I0916 10:34:44.829649   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHUsername
	I0916 10:34:44.829707   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
	I0916 10:34:44.829787   17646 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/functional-553844/id_rsa Username:docker}
	I0916 10:34:44.829810   17646 main.go:141] libmachine: (functional-553844) Calling .GetSSHUsername
	I0916 10:34:44.829933   17646 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/functional-553844/id_rsa Username:docker}
	I0916 10:34:44.905672   17646 command_runner.go:130] > {"iso_version": "v1.34.0-1726415472-19646", "kicbase_version": "v0.0.45-1726358845-19644", "minikube_version": "v1.34.0", "commit": "7dc55c0008a982396eb57879cd4eab23ab96531e"}
	I0916 10:34:44.905864   17646 ssh_runner.go:195] Run: systemctl --version
	I0916 10:34:44.930168   17646 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0916 10:34:44.930247   17646 command_runner.go:130] > systemd 252 (252)
	I0916 10:34:44.930279   17646 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0916 10:34:44.930332   17646 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 10:34:45.078495   17646 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 10:34:45.086261   17646 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0916 10:34:45.086307   17646 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 10:34:45.086372   17646 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:34:45.095896   17646 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0916 10:34:45.095914   17646 start.go:495] detecting cgroup driver to use...
	I0916 10:34:45.095972   17646 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 10:34:45.111929   17646 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 10:34:45.126331   17646 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:34:45.126393   17646 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:34:45.140856   17646 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:34:45.155306   17646 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:34:45.287963   17646 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:34:45.419203   17646 docker.go:233] disabling docker service ...
	I0916 10:34:45.419281   17646 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:34:45.436187   17646 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:34:45.450036   17646 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:34:45.606742   17646 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:34:45.749840   17646 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:34:45.764656   17646 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:34:45.783532   17646 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0916 10:34:45.783584   17646 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 10:34:45.783631   17646 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:34:45.794960   17646 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 10:34:45.795027   17646 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:34:45.806657   17646 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:34:45.817937   17646 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:34:45.828872   17646 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:34:45.839918   17646 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:34:45.851537   17646 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:34:45.862100   17646 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:34:45.873482   17646 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:34:45.883775   17646 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0916 10:34:45.883842   17646 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:34:45.893484   17646 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:34:46.025442   17646 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 10:34:53.718838   17646 ssh_runner.go:235] Completed: sudo systemctl restart crio: (7.69335782s)
	I0916 10:34:53.718869   17646 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 10:34:53.718910   17646 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 10:34:53.723871   17646 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0916 10:34:53.723895   17646 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0916 10:34:53.723904   17646 command_runner.go:130] > Device: 0,22	Inode: 1215        Links: 1
	I0916 10:34:53.723913   17646 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 10:34:53.723921   17646 command_runner.go:130] > Access: 2024-09-16 10:34:53.691572356 +0000
	I0916 10:34:53.723930   17646 command_runner.go:130] > Modify: 2024-09-16 10:34:53.596569598 +0000
	I0916 10:34:53.723940   17646 command_runner.go:130] > Change: 2024-09-16 10:34:53.596569598 +0000
	I0916 10:34:53.723948   17646 command_runner.go:130] >  Birth: -
	I0916 10:34:53.724041   17646 start.go:563] Will wait 60s for crictl version
	I0916 10:34:53.724100   17646 ssh_runner.go:195] Run: which crictl
	I0916 10:34:53.727843   17646 command_runner.go:130] > /usr/bin/crictl
	I0916 10:34:53.727908   17646 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:34:53.762394   17646 command_runner.go:130] > Version:  0.1.0
	I0916 10:34:53.762417   17646 command_runner.go:130] > RuntimeName:  cri-o
	I0916 10:34:53.762424   17646 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0916 10:34:53.762432   17646 command_runner.go:130] > RuntimeApiVersion:  v1
	I0916 10:34:53.763582   17646 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0916 10:34:53.763652   17646 ssh_runner.go:195] Run: crio --version
	I0916 10:34:53.791280   17646 command_runner.go:130] > crio version 1.29.1
	I0916 10:34:53.791299   17646 command_runner.go:130] > Version:        1.29.1
	I0916 10:34:53.791308   17646 command_runner.go:130] > GitCommit:      unknown
	I0916 10:34:53.791313   17646 command_runner.go:130] > GitCommitDate:  unknown
	I0916 10:34:53.791318   17646 command_runner.go:130] > GitTreeState:   clean
	I0916 10:34:53.791326   17646 command_runner.go:130] > BuildDate:      2024-09-15T21:21:56Z
	I0916 10:34:53.791332   17646 command_runner.go:130] > GoVersion:      go1.21.6
	I0916 10:34:53.791338   17646 command_runner.go:130] > Compiler:       gc
	I0916 10:34:53.791346   17646 command_runner.go:130] > Platform:       linux/amd64
	I0916 10:34:53.791353   17646 command_runner.go:130] > Linkmode:       dynamic
	I0916 10:34:53.791370   17646 command_runner.go:130] > BuildTags:      
	I0916 10:34:53.791380   17646 command_runner.go:130] >   containers_image_ostree_stub
	I0916 10:34:53.791388   17646 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0916 10:34:53.791394   17646 command_runner.go:130] >   btrfs_noversion
	I0916 10:34:53.791404   17646 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0916 10:34:53.791412   17646 command_runner.go:130] >   libdm_no_deferred_remove
	I0916 10:34:53.791420   17646 command_runner.go:130] >   seccomp
	I0916 10:34:53.791428   17646 command_runner.go:130] > LDFlags:          unknown
	I0916 10:34:53.791436   17646 command_runner.go:130] > SeccompEnabled:   true
	I0916 10:34:53.791443   17646 command_runner.go:130] > AppArmorEnabled:  false
	I0916 10:34:53.792548   17646 ssh_runner.go:195] Run: crio --version
	I0916 10:34:53.819305   17646 command_runner.go:130] > crio version 1.29.1
	I0916 10:34:53.819321   17646 command_runner.go:130] > Version:        1.29.1
	I0916 10:34:53.819329   17646 command_runner.go:130] > GitCommit:      unknown
	I0916 10:34:53.819335   17646 command_runner.go:130] > GitCommitDate:  unknown
	I0916 10:34:53.819341   17646 command_runner.go:130] > GitTreeState:   clean
	I0916 10:34:53.819348   17646 command_runner.go:130] > BuildDate:      2024-09-15T21:21:56Z
	I0916 10:34:53.819355   17646 command_runner.go:130] > GoVersion:      go1.21.6
	I0916 10:34:53.819362   17646 command_runner.go:130] > Compiler:       gc
	I0916 10:34:53.819371   17646 command_runner.go:130] > Platform:       linux/amd64
	I0916 10:34:53.819380   17646 command_runner.go:130] > Linkmode:       dynamic
	I0916 10:34:53.819390   17646 command_runner.go:130] > BuildTags:      
	I0916 10:34:53.819400   17646 command_runner.go:130] >   containers_image_ostree_stub
	I0916 10:34:53.819411   17646 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0916 10:34:53.819419   17646 command_runner.go:130] >   btrfs_noversion
	I0916 10:34:53.819430   17646 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0916 10:34:53.819440   17646 command_runner.go:130] >   libdm_no_deferred_remove
	I0916 10:34:53.819447   17646 command_runner.go:130] >   seccomp
	I0916 10:34:53.819456   17646 command_runner.go:130] > LDFlags:          unknown
	I0916 10:34:53.819464   17646 command_runner.go:130] > SeccompEnabled:   true
	I0916 10:34:53.819473   17646 command_runner.go:130] > AppArmorEnabled:  false
	I0916 10:34:53.822587   17646 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0916 10:34:53.823899   17646 main.go:141] libmachine: (functional-553844) Calling .GetIP
	I0916 10:34:53.826566   17646 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:34:53.826950   17646 main.go:141] libmachine: (functional-553844) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3a:6f", ip: ""} in network mk-functional-553844: {Iface:virbr1 ExpiryTime:2024-09-16 11:33:58 +0000 UTC Type:0 Mac:52:54:00:9d:3a:6f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-553844 Clientid:01:52:54:00:9d:3a:6f}
	I0916 10:34:53.826979   17646 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined IP address 192.168.39.230 and MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:34:53.827150   17646 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0916 10:34:53.831424   17646 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0916 10:34:53.831646   17646 kubeadm.go:883] updating cluster {Name:functional-553844 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-553844 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 10:34:53.831762   17646 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:34:53.831807   17646 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:34:53.873326   17646 command_runner.go:130] > {
	I0916 10:34:53.873355   17646 command_runner.go:130] >   "images": [
	I0916 10:34:53.873361   17646 command_runner.go:130] >     {
	I0916 10:34:53.873373   17646 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0916 10:34:53.873381   17646 command_runner.go:130] >       "repoTags": [
	I0916 10:34:53.873392   17646 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0916 10:34:53.873398   17646 command_runner.go:130] >       ],
	I0916 10:34:53.873405   17646 command_runner.go:130] >       "repoDigests": [
	I0916 10:34:53.873418   17646 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0916 10:34:53.873468   17646 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0916 10:34:53.873480   17646 command_runner.go:130] >       ],
	I0916 10:34:53.873486   17646 command_runner.go:130] >       "size": "87190579",
	I0916 10:34:53.873493   17646 command_runner.go:130] >       "uid": null,
	I0916 10:34:53.873503   17646 command_runner.go:130] >       "username": "",
	I0916 10:34:53.873514   17646 command_runner.go:130] >       "spec": null,
	I0916 10:34:53.873522   17646 command_runner.go:130] >       "pinned": false
	I0916 10:34:53.873530   17646 command_runner.go:130] >     },
	I0916 10:34:53.873535   17646 command_runner.go:130] >     {
	I0916 10:34:53.873547   17646 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0916 10:34:53.873557   17646 command_runner.go:130] >       "repoTags": [
	I0916 10:34:53.873567   17646 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0916 10:34:53.873574   17646 command_runner.go:130] >       ],
	I0916 10:34:53.873584   17646 command_runner.go:130] >       "repoDigests": [
	I0916 10:34:53.873600   17646 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0916 10:34:53.873624   17646 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0916 10:34:53.873634   17646 command_runner.go:130] >       ],
	I0916 10:34:53.873644   17646 command_runner.go:130] >       "size": "31470524",
	I0916 10:34:53.873653   17646 command_runner.go:130] >       "uid": null,
	I0916 10:34:53.873663   17646 command_runner.go:130] >       "username": "",
	I0916 10:34:53.873672   17646 command_runner.go:130] >       "spec": null,
	I0916 10:34:53.873683   17646 command_runner.go:130] >       "pinned": false
	I0916 10:34:53.873692   17646 command_runner.go:130] >     },
	I0916 10:34:53.873699   17646 command_runner.go:130] >     {
	I0916 10:34:53.873709   17646 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0916 10:34:53.873718   17646 command_runner.go:130] >       "repoTags": [
	I0916 10:34:53.873727   17646 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0916 10:34:53.873735   17646 command_runner.go:130] >       ],
	I0916 10:34:53.873741   17646 command_runner.go:130] >       "repoDigests": [
	I0916 10:34:53.873758   17646 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0916 10:34:53.873772   17646 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0916 10:34:53.873779   17646 command_runner.go:130] >       ],
	I0916 10:34:53.873788   17646 command_runner.go:130] >       "size": "63273227",
	I0916 10:34:53.873795   17646 command_runner.go:130] >       "uid": null,
	I0916 10:34:53.873804   17646 command_runner.go:130] >       "username": "nonroot",
	I0916 10:34:53.873812   17646 command_runner.go:130] >       "spec": null,
	I0916 10:34:53.873822   17646 command_runner.go:130] >       "pinned": false
	I0916 10:34:53.873830   17646 command_runner.go:130] >     },
	I0916 10:34:53.873835   17646 command_runner.go:130] >     {
	I0916 10:34:53.873846   17646 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0916 10:34:53.873855   17646 command_runner.go:130] >       "repoTags": [
	I0916 10:34:53.873865   17646 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0916 10:34:53.873873   17646 command_runner.go:130] >       ],
	I0916 10:34:53.873881   17646 command_runner.go:130] >       "repoDigests": [
	I0916 10:34:53.873891   17646 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0916 10:34:53.873907   17646 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0916 10:34:53.873915   17646 command_runner.go:130] >       ],
	I0916 10:34:53.873921   17646 command_runner.go:130] >       "size": "149009664",
	I0916 10:34:53.873930   17646 command_runner.go:130] >       "uid": {
	I0916 10:34:53.873939   17646 command_runner.go:130] >         "value": "0"
	I0916 10:34:53.873947   17646 command_runner.go:130] >       },
	I0916 10:34:53.873955   17646 command_runner.go:130] >       "username": "",
	I0916 10:34:53.873964   17646 command_runner.go:130] >       "spec": null,
	I0916 10:34:53.873974   17646 command_runner.go:130] >       "pinned": false
	I0916 10:34:53.873980   17646 command_runner.go:130] >     },
	I0916 10:34:53.873989   17646 command_runner.go:130] >     {
	I0916 10:34:53.874000   17646 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0916 10:34:53.874010   17646 command_runner.go:130] >       "repoTags": [
	I0916 10:34:53.874021   17646 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0916 10:34:53.874030   17646 command_runner.go:130] >       ],
	I0916 10:34:53.874039   17646 command_runner.go:130] >       "repoDigests": [
	I0916 10:34:53.874054   17646 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0916 10:34:53.874076   17646 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0916 10:34:53.874085   17646 command_runner.go:130] >       ],
	I0916 10:34:53.874093   17646 command_runner.go:130] >       "size": "95237600",
	I0916 10:34:53.874100   17646 command_runner.go:130] >       "uid": {
	I0916 10:34:53.874107   17646 command_runner.go:130] >         "value": "0"
	I0916 10:34:53.874115   17646 command_runner.go:130] >       },
	I0916 10:34:53.874121   17646 command_runner.go:130] >       "username": "",
	I0916 10:34:53.874130   17646 command_runner.go:130] >       "spec": null,
	I0916 10:34:53.874140   17646 command_runner.go:130] >       "pinned": false
	I0916 10:34:53.874149   17646 command_runner.go:130] >     },
	I0916 10:34:53.874157   17646 command_runner.go:130] >     {
	I0916 10:34:53.874166   17646 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0916 10:34:53.874174   17646 command_runner.go:130] >       "repoTags": [
	I0916 10:34:53.874184   17646 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0916 10:34:53.874192   17646 command_runner.go:130] >       ],
	I0916 10:34:53.874201   17646 command_runner.go:130] >       "repoDigests": [
	I0916 10:34:53.874217   17646 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0916 10:34:53.874233   17646 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0916 10:34:53.874242   17646 command_runner.go:130] >       ],
	I0916 10:34:53.874251   17646 command_runner.go:130] >       "size": "89437508",
	I0916 10:34:53.874258   17646 command_runner.go:130] >       "uid": {
	I0916 10:34:53.874265   17646 command_runner.go:130] >         "value": "0"
	I0916 10:34:53.874272   17646 command_runner.go:130] >       },
	I0916 10:34:53.874281   17646 command_runner.go:130] >       "username": "",
	I0916 10:34:53.874289   17646 command_runner.go:130] >       "spec": null,
	I0916 10:34:53.874299   17646 command_runner.go:130] >       "pinned": false
	I0916 10:34:53.874307   17646 command_runner.go:130] >     },
	I0916 10:34:53.874314   17646 command_runner.go:130] >     {
	I0916 10:34:53.874326   17646 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0916 10:34:53.874335   17646 command_runner.go:130] >       "repoTags": [
	I0916 10:34:53.874346   17646 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0916 10:34:53.874354   17646 command_runner.go:130] >       ],
	I0916 10:34:53.874362   17646 command_runner.go:130] >       "repoDigests": [
	I0916 10:34:53.874378   17646 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0916 10:34:53.874392   17646 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0916 10:34:53.874399   17646 command_runner.go:130] >       ],
	I0916 10:34:53.874408   17646 command_runner.go:130] >       "size": "92733849",
	I0916 10:34:53.874416   17646 command_runner.go:130] >       "uid": null,
	I0916 10:34:53.874422   17646 command_runner.go:130] >       "username": "",
	I0916 10:34:53.874430   17646 command_runner.go:130] >       "spec": null,
	I0916 10:34:53.874438   17646 command_runner.go:130] >       "pinned": false
	I0916 10:34:53.874446   17646 command_runner.go:130] >     },
	I0916 10:34:53.874454   17646 command_runner.go:130] >     {
	I0916 10:34:53.874467   17646 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0916 10:34:53.874476   17646 command_runner.go:130] >       "repoTags": [
	I0916 10:34:53.874486   17646 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0916 10:34:53.874495   17646 command_runner.go:130] >       ],
	I0916 10:34:53.874503   17646 command_runner.go:130] >       "repoDigests": [
	I0916 10:34:53.874541   17646 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0916 10:34:53.874557   17646 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0916 10:34:53.874564   17646 command_runner.go:130] >       ],
	I0916 10:34:53.874573   17646 command_runner.go:130] >       "size": "68420934",
	I0916 10:34:53.874579   17646 command_runner.go:130] >       "uid": {
	I0916 10:34:53.874588   17646 command_runner.go:130] >         "value": "0"
	I0916 10:34:53.874597   17646 command_runner.go:130] >       },
	I0916 10:34:53.874606   17646 command_runner.go:130] >       "username": "",
	I0916 10:34:53.874621   17646 command_runner.go:130] >       "spec": null,
	I0916 10:34:53.874629   17646 command_runner.go:130] >       "pinned": false
	I0916 10:34:53.874636   17646 command_runner.go:130] >     },
	I0916 10:34:53.874642   17646 command_runner.go:130] >     {
	I0916 10:34:53.874654   17646 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0916 10:34:53.874662   17646 command_runner.go:130] >       "repoTags": [
	I0916 10:34:53.874673   17646 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0916 10:34:53.874681   17646 command_runner.go:130] >       ],
	I0916 10:34:53.874691   17646 command_runner.go:130] >       "repoDigests": [
	I0916 10:34:53.874704   17646 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0916 10:34:53.874719   17646 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0916 10:34:53.874728   17646 command_runner.go:130] >       ],
	I0916 10:34:53.874738   17646 command_runner.go:130] >       "size": "742080",
	I0916 10:34:53.874747   17646 command_runner.go:130] >       "uid": {
	I0916 10:34:53.874756   17646 command_runner.go:130] >         "value": "65535"
	I0916 10:34:53.874763   17646 command_runner.go:130] >       },
	I0916 10:34:53.874769   17646 command_runner.go:130] >       "username": "",
	I0916 10:34:53.874789   17646 command_runner.go:130] >       "spec": null,
	I0916 10:34:53.874798   17646 command_runner.go:130] >       "pinned": true
	I0916 10:34:53.874806   17646 command_runner.go:130] >     }
	I0916 10:34:53.874814   17646 command_runner.go:130] >   ]
	I0916 10:34:53.874822   17646 command_runner.go:130] > }
	I0916 10:34:53.875251   17646 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 10:34:53.875273   17646 crio.go:433] Images already preloaded, skipping extraction
	I0916 10:34:53.875322   17646 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:34:53.908199   17646 command_runner.go:130] > {
	I0916 10:34:53.908224   17646 command_runner.go:130] >   "images": [
	I0916 10:34:53.908230   17646 command_runner.go:130] >     {
	I0916 10:34:53.908242   17646 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0916 10:34:53.908250   17646 command_runner.go:130] >       "repoTags": [
	I0916 10:34:53.908256   17646 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0916 10:34:53.908260   17646 command_runner.go:130] >       ],
	I0916 10:34:53.908264   17646 command_runner.go:130] >       "repoDigests": [
	I0916 10:34:53.908272   17646 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0916 10:34:53.908280   17646 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0916 10:34:53.908283   17646 command_runner.go:130] >       ],
	I0916 10:34:53.908288   17646 command_runner.go:130] >       "size": "87190579",
	I0916 10:34:53.908292   17646 command_runner.go:130] >       "uid": null,
	I0916 10:34:53.908296   17646 command_runner.go:130] >       "username": "",
	I0916 10:34:53.908306   17646 command_runner.go:130] >       "spec": null,
	I0916 10:34:53.908314   17646 command_runner.go:130] >       "pinned": false
	I0916 10:34:53.908320   17646 command_runner.go:130] >     },
	I0916 10:34:53.908329   17646 command_runner.go:130] >     {
	I0916 10:34:53.908339   17646 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0916 10:34:53.908345   17646 command_runner.go:130] >       "repoTags": [
	I0916 10:34:53.908353   17646 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0916 10:34:53.908356   17646 command_runner.go:130] >       ],
	I0916 10:34:53.908361   17646 command_runner.go:130] >       "repoDigests": [
	I0916 10:34:53.908369   17646 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0916 10:34:53.908378   17646 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0916 10:34:53.908385   17646 command_runner.go:130] >       ],
	I0916 10:34:53.908394   17646 command_runner.go:130] >       "size": "31470524",
	I0916 10:34:53.908403   17646 command_runner.go:130] >       "uid": null,
	I0916 10:34:53.908411   17646 command_runner.go:130] >       "username": "",
	I0916 10:34:53.908418   17646 command_runner.go:130] >       "spec": null,
	I0916 10:34:53.908429   17646 command_runner.go:130] >       "pinned": false
	I0916 10:34:53.908437   17646 command_runner.go:130] >     },
	I0916 10:34:53.908446   17646 command_runner.go:130] >     {
	I0916 10:34:53.908455   17646 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0916 10:34:53.908461   17646 command_runner.go:130] >       "repoTags": [
	I0916 10:34:53.908466   17646 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0916 10:34:53.908474   17646 command_runner.go:130] >       ],
	I0916 10:34:53.908483   17646 command_runner.go:130] >       "repoDigests": [
	I0916 10:34:53.908499   17646 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0916 10:34:53.908523   17646 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0916 10:34:53.908533   17646 command_runner.go:130] >       ],
	I0916 10:34:53.908539   17646 command_runner.go:130] >       "size": "63273227",
	I0916 10:34:53.908547   17646 command_runner.go:130] >       "uid": null,
	I0916 10:34:53.908551   17646 command_runner.go:130] >       "username": "nonroot",
	I0916 10:34:53.908560   17646 command_runner.go:130] >       "spec": null,
	I0916 10:34:53.908569   17646 command_runner.go:130] >       "pinned": false
	I0916 10:34:53.908578   17646 command_runner.go:130] >     },
	I0916 10:34:53.908584   17646 command_runner.go:130] >     {
	I0916 10:34:53.908594   17646 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0916 10:34:53.908603   17646 command_runner.go:130] >       "repoTags": [
	I0916 10:34:53.908623   17646 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0916 10:34:53.908631   17646 command_runner.go:130] >       ],
	I0916 10:34:53.908636   17646 command_runner.go:130] >       "repoDigests": [
	I0916 10:34:53.908646   17646 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0916 10:34:53.908666   17646 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0916 10:34:53.908675   17646 command_runner.go:130] >       ],
	I0916 10:34:53.908684   17646 command_runner.go:130] >       "size": "149009664",
	I0916 10:34:53.908692   17646 command_runner.go:130] >       "uid": {
	I0916 10:34:53.908703   17646 command_runner.go:130] >         "value": "0"
	I0916 10:34:53.908713   17646 command_runner.go:130] >       },
	I0916 10:34:53.908720   17646 command_runner.go:130] >       "username": "",
	I0916 10:34:53.908724   17646 command_runner.go:130] >       "spec": null,
	I0916 10:34:53.908733   17646 command_runner.go:130] >       "pinned": false
	I0916 10:34:53.908742   17646 command_runner.go:130] >     },
	I0916 10:34:53.908751   17646 command_runner.go:130] >     {
	I0916 10:34:53.908763   17646 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0916 10:34:53.908772   17646 command_runner.go:130] >       "repoTags": [
	I0916 10:34:53.908783   17646 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0916 10:34:53.908791   17646 command_runner.go:130] >       ],
	I0916 10:34:53.908803   17646 command_runner.go:130] >       "repoDigests": [
	I0916 10:34:53.908812   17646 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0916 10:34:53.908826   17646 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0916 10:34:53.908835   17646 command_runner.go:130] >       ],
	I0916 10:34:53.908844   17646 command_runner.go:130] >       "size": "95237600",
	I0916 10:34:53.908853   17646 command_runner.go:130] >       "uid": {
	I0916 10:34:53.908862   17646 command_runner.go:130] >         "value": "0"
	I0916 10:34:53.908871   17646 command_runner.go:130] >       },
	I0916 10:34:53.908879   17646 command_runner.go:130] >       "username": "",
	I0916 10:34:53.908886   17646 command_runner.go:130] >       "spec": null,
	I0916 10:34:53.908893   17646 command_runner.go:130] >       "pinned": false
	I0916 10:34:53.908896   17646 command_runner.go:130] >     },
	I0916 10:34:53.908904   17646 command_runner.go:130] >     {
	I0916 10:34:53.908915   17646 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0916 10:34:53.908924   17646 command_runner.go:130] >       "repoTags": [
	I0916 10:34:53.908935   17646 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0916 10:34:53.908947   17646 command_runner.go:130] >       ],
	I0916 10:34:53.908956   17646 command_runner.go:130] >       "repoDigests": [
	I0916 10:34:53.908971   17646 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0916 10:34:53.908981   17646 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0916 10:34:53.908986   17646 command_runner.go:130] >       ],
	I0916 10:34:53.908996   17646 command_runner.go:130] >       "size": "89437508",
	I0916 10:34:53.909005   17646 command_runner.go:130] >       "uid": {
	I0916 10:34:53.909014   17646 command_runner.go:130] >         "value": "0"
	I0916 10:34:53.909022   17646 command_runner.go:130] >       },
	I0916 10:34:53.909030   17646 command_runner.go:130] >       "username": "",
	I0916 10:34:53.909039   17646 command_runner.go:130] >       "spec": null,
	I0916 10:34:53.909050   17646 command_runner.go:130] >       "pinned": false
	I0916 10:34:53.909058   17646 command_runner.go:130] >     },
	I0916 10:34:53.909062   17646 command_runner.go:130] >     {
	I0916 10:34:53.909072   17646 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0916 10:34:53.909082   17646 command_runner.go:130] >       "repoTags": [
	I0916 10:34:53.909090   17646 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0916 10:34:53.909098   17646 command_runner.go:130] >       ],
	I0916 10:34:53.909105   17646 command_runner.go:130] >       "repoDigests": [
	I0916 10:34:53.909118   17646 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0916 10:34:53.909145   17646 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0916 10:34:53.909155   17646 command_runner.go:130] >       ],
	I0916 10:34:53.909162   17646 command_runner.go:130] >       "size": "92733849",
	I0916 10:34:53.909171   17646 command_runner.go:130] >       "uid": null,
	I0916 10:34:53.909180   17646 command_runner.go:130] >       "username": "",
	I0916 10:34:53.909189   17646 command_runner.go:130] >       "spec": null,
	I0916 10:34:53.909198   17646 command_runner.go:130] >       "pinned": false
	I0916 10:34:53.909204   17646 command_runner.go:130] >     },
	I0916 10:34:53.909208   17646 command_runner.go:130] >     {
	I0916 10:34:53.909220   17646 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0916 10:34:53.909230   17646 command_runner.go:130] >       "repoTags": [
	I0916 10:34:53.909242   17646 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0916 10:34:53.909251   17646 command_runner.go:130] >       ],
	I0916 10:34:53.909260   17646 command_runner.go:130] >       "repoDigests": [
	I0916 10:34:53.909283   17646 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0916 10:34:53.909293   17646 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0916 10:34:53.909298   17646 command_runner.go:130] >       ],
	I0916 10:34:53.909308   17646 command_runner.go:130] >       "size": "68420934",
	I0916 10:34:53.909314   17646 command_runner.go:130] >       "uid": {
	I0916 10:34:53.909324   17646 command_runner.go:130] >         "value": "0"
	I0916 10:34:53.909330   17646 command_runner.go:130] >       },
	I0916 10:34:53.909339   17646 command_runner.go:130] >       "username": "",
	I0916 10:34:53.909345   17646 command_runner.go:130] >       "spec": null,
	I0916 10:34:53.909354   17646 command_runner.go:130] >       "pinned": false
	I0916 10:34:53.909360   17646 command_runner.go:130] >     },
	I0916 10:34:53.909367   17646 command_runner.go:130] >     {
	I0916 10:34:53.909377   17646 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0916 10:34:53.909385   17646 command_runner.go:130] >       "repoTags": [
	I0916 10:34:53.909395   17646 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0916 10:34:53.909405   17646 command_runner.go:130] >       ],
	I0916 10:34:53.909414   17646 command_runner.go:130] >       "repoDigests": [
	I0916 10:34:53.909428   17646 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0916 10:34:53.909442   17646 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0916 10:34:53.909450   17646 command_runner.go:130] >       ],
	I0916 10:34:53.909456   17646 command_runner.go:130] >       "size": "742080",
	I0916 10:34:53.909460   17646 command_runner.go:130] >       "uid": {
	I0916 10:34:53.909464   17646 command_runner.go:130] >         "value": "65535"
	I0916 10:34:53.909472   17646 command_runner.go:130] >       },
	I0916 10:34:53.909478   17646 command_runner.go:130] >       "username": "",
	I0916 10:34:53.909487   17646 command_runner.go:130] >       "spec": null,
	I0916 10:34:53.909497   17646 command_runner.go:130] >       "pinned": true
	I0916 10:34:53.909505   17646 command_runner.go:130] >     }
	I0916 10:34:53.909510   17646 command_runner.go:130] >   ]
	I0916 10:34:53.909518   17646 command_runner.go:130] > }
	I0916 10:34:53.909703   17646 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 10:34:53.909725   17646 cache_images.go:84] Images are preloaded, skipping loading
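	For reference, the image list dumped above has the same JSON shape that "crictl images -o json" produces (an "images" array whose entries carry id, repoTags, repoDigests, size, pinned, and so on). A minimal Go sketch that decodes that shape and summarizes each entry is shown below; it is illustrative only, not minikube's own code, and it assumes crictl is available on the node:

	    package main

	    import (
	        "encoding/json"
	        "fmt"
	        "os/exec"
	    )

	    // criImage mirrors the fields visible in the dump above.
	    type criImage struct {
	        ID          string   `json:"id"`
	        RepoTags    []string `json:"repoTags"`
	        RepoDigests []string `json:"repoDigests"`
	        Size        string   `json:"size"` // reported as a string, in bytes
	        Pinned      bool     `json:"pinned"`
	    }

	    type imageList struct {
	        Images []criImage `json:"images"`
	    }

	    func main() {
	        // Inside the minikube VM this would typically be run as root.
	        out, err := exec.Command("sudo", "crictl", "images", "-o", "json").Output()
	        if err != nil {
	            panic(err)
	        }
	        var list imageList
	        if err := json.Unmarshal(out, &list); err != nil {
	            panic(err)
	        }
	        for _, img := range list.Images {
	            fmt.Printf("%v  %s bytes  pinned=%v\n", img.RepoTags, img.Size, img.Pinned)
	        }
	    }

	Run inside the VM (for example over "minikube ssh"), it should list the same preloaded images that let minikube skip image loading in the step above.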
	I0916 10:34:53.909733   17646 kubeadm.go:934] updating node { 192.168.39.230 8441 v1.31.1 crio true true} ...
	I0916 10:34:53.909824   17646 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-553844 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.230
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:functional-553844 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
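	The ExecStart line logged above is assembled from the cluster config that follows it (Kubernetes version, node name, node IP). A simplified, hypothetical sketch of that assembly is shown below; kubeletExecStart is an invented helper for illustration and does not reflect minikube's real bootstrapper template:

	    package main

	    import (
	        "fmt"
	        "strings"
	    )

	    // kubeletExecStart builds a kubelet command line from the per-node values,
	    // using the flag set visible in the log above.
	    func kubeletExecStart(version, nodeName, nodeIP string) string {
	        flags := []string{
	            "--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
	            "--config=/var/lib/kubelet/config.yaml",
	            "--hostname-override=" + nodeName,
	            "--kubeconfig=/etc/kubernetes/kubelet.conf",
	            "--node-ip=" + nodeIP,
	        }
	        return fmt.Sprintf("/var/lib/minikube/binaries/%s/kubelet %s",
	            version, strings.Join(flags, " "))
	    }

	    func main() {
	        fmt.Println(kubeletExecStart("v1.31.1", "functional-553844", "192.168.39.230"))
	    }

	With the values from this log (v1.31.1, functional-553844, 192.168.39.230) it reproduces the flag string shown above.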
	I0916 10:34:53.909888   17646 ssh_runner.go:195] Run: crio config
	I0916 10:34:53.943974   17646 command_runner.go:130] ! time="2024-09-16 10:34:53.935307763Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0916 10:34:53.949754   17646 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
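	In these command_runner entries, the lines prefixed with "!" appear to be the command's stderr (CRI-O's startup messages), while the ">" lines that follow are its stdout (the rendered configuration). Assuming that convention, a minimal Go sketch of running a command and tagging its output the same way is:

	    package main

	    import (
	        "bufio"
	        "bytes"
	        "fmt"
	        "os/exec"
	    )

	    // runAndLog runs a command and prints stdout lines with "> " and stderr
	    // lines with "! ", mirroring the prefixes seen in this log (an assumption
	    // about the convention, not minikube's actual implementation).
	    func runAndLog(name string, args ...string) error {
	        cmd := exec.Command(name, args...)
	        var stdout, stderr bytes.Buffer
	        cmd.Stdout = &stdout
	        cmd.Stderr = &stderr
	        runErr := cmd.Run()

	        for _, stream := range []struct {
	            prefix string
	            buf    *bytes.Buffer
	        }{{">", &stdout}, {"!", &stderr}} {
	            sc := bufio.NewScanner(stream.buf)
	            for sc.Scan() {
	                fmt.Printf("%s %s\n", stream.prefix, sc.Text())
	            }
	        }
	        return runErr
	    }

	    func main() {
	        _ = runAndLog("crio", "config") // the command whose output follows below
	    }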
	I0916 10:34:53.955753   17646 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0916 10:34:53.955775   17646 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0916 10:34:53.955782   17646 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0916 10:34:53.955786   17646 command_runner.go:130] > #
	I0916 10:34:53.955792   17646 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0916 10:34:53.955800   17646 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0916 10:34:53.955806   17646 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0916 10:34:53.955814   17646 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0916 10:34:53.955818   17646 command_runner.go:130] > # reload'.
	I0916 10:34:53.955829   17646 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0916 10:34:53.955835   17646 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0916 10:34:53.955841   17646 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0916 10:34:53.955847   17646 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0916 10:34:53.955859   17646 command_runner.go:130] > [crio]
	I0916 10:34:53.955869   17646 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0916 10:34:53.955877   17646 command_runner.go:130] > # containers images, in this directory.
	I0916 10:34:53.955887   17646 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0916 10:34:53.955899   17646 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0916 10:34:53.955909   17646 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0916 10:34:53.955917   17646 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0916 10:34:53.955924   17646 command_runner.go:130] > # imagestore = ""
	I0916 10:34:53.955929   17646 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0916 10:34:53.955935   17646 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0916 10:34:53.955940   17646 command_runner.go:130] > storage_driver = "overlay"
	I0916 10:34:53.955946   17646 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0916 10:34:53.955954   17646 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0916 10:34:53.955958   17646 command_runner.go:130] > storage_option = [
	I0916 10:34:53.955965   17646 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0916 10:34:53.955968   17646 command_runner.go:130] > ]
	I0916 10:34:53.955974   17646 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0916 10:34:53.955982   17646 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0916 10:34:53.955986   17646 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0916 10:34:53.955994   17646 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0916 10:34:53.956000   17646 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0916 10:34:53.956006   17646 command_runner.go:130] > # always happen on a node reboot
	I0916 10:34:53.956011   17646 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0916 10:34:53.956022   17646 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0916 10:34:53.956027   17646 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0916 10:34:53.956035   17646 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0916 10:34:53.956042   17646 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0916 10:34:53.956051   17646 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0916 10:34:53.956061   17646 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0916 10:34:53.956067   17646 command_runner.go:130] > # internal_wipe = true
	I0916 10:34:53.956075   17646 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0916 10:34:53.956083   17646 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0916 10:34:53.956094   17646 command_runner.go:130] > # internal_repair = false
	I0916 10:34:53.956101   17646 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0916 10:34:53.956110   17646 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0916 10:34:53.956117   17646 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0916 10:34:53.956122   17646 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0916 10:34:53.956130   17646 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0916 10:34:53.956137   17646 command_runner.go:130] > [crio.api]
	I0916 10:34:53.956143   17646 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0916 10:34:53.956149   17646 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0916 10:34:53.956155   17646 command_runner.go:130] > # IP address on which the stream server will listen.
	I0916 10:34:53.956161   17646 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0916 10:34:53.956168   17646 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0916 10:34:53.956174   17646 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0916 10:34:53.956179   17646 command_runner.go:130] > # stream_port = "0"
	I0916 10:34:53.956186   17646 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0916 10:34:53.956190   17646 command_runner.go:130] > # stream_enable_tls = false
	I0916 10:34:53.956198   17646 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0916 10:34:53.956203   17646 command_runner.go:130] > # stream_idle_timeout = ""
	I0916 10:34:53.956209   17646 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0916 10:34:53.956217   17646 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0916 10:34:53.956223   17646 command_runner.go:130] > # minutes.
	I0916 10:34:53.956227   17646 command_runner.go:130] > # stream_tls_cert = ""
	I0916 10:34:53.956235   17646 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0916 10:34:53.956243   17646 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0916 10:34:53.956248   17646 command_runner.go:130] > # stream_tls_key = ""
	I0916 10:34:53.956256   17646 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0916 10:34:53.956263   17646 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0916 10:34:53.956284   17646 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0916 10:34:53.956290   17646 command_runner.go:130] > # stream_tls_ca = ""
	I0916 10:34:53.956297   17646 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0916 10:34:53.956303   17646 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0916 10:34:53.956310   17646 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0916 10:34:53.956317   17646 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0916 10:34:53.956323   17646 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0916 10:34:53.956330   17646 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0916 10:34:53.956336   17646 command_runner.go:130] > [crio.runtime]
	I0916 10:34:53.956341   17646 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0916 10:34:53.956349   17646 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0916 10:34:53.956355   17646 command_runner.go:130] > # "nofile=1024:2048"
	I0916 10:34:53.956363   17646 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0916 10:34:53.956369   17646 command_runner.go:130] > # default_ulimits = [
	I0916 10:34:53.956372   17646 command_runner.go:130] > # ]
	I0916 10:34:53.956380   17646 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0916 10:34:53.956386   17646 command_runner.go:130] > # no_pivot = false
	I0916 10:34:53.956391   17646 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0916 10:34:53.956399   17646 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0916 10:34:53.956406   17646 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0916 10:34:53.956414   17646 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0916 10:34:53.956420   17646 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0916 10:34:53.956427   17646 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0916 10:34:53.956433   17646 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0916 10:34:53.956438   17646 command_runner.go:130] > # Cgroup setting for conmon
	I0916 10:34:53.956446   17646 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0916 10:34:53.956450   17646 command_runner.go:130] > conmon_cgroup = "pod"
	I0916 10:34:53.956458   17646 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0916 10:34:53.956466   17646 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0916 10:34:53.956472   17646 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0916 10:34:53.956478   17646 command_runner.go:130] > conmon_env = [
	I0916 10:34:53.956483   17646 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0916 10:34:53.956489   17646 command_runner.go:130] > ]
	I0916 10:34:53.956494   17646 command_runner.go:130] > # Additional environment variables to set for all the
	I0916 10:34:53.956501   17646 command_runner.go:130] > # containers. These are overridden if set in the
	I0916 10:34:53.956507   17646 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0916 10:34:53.956513   17646 command_runner.go:130] > # default_env = [
	I0916 10:34:53.956516   17646 command_runner.go:130] > # ]
	I0916 10:34:53.956524   17646 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0916 10:34:53.956530   17646 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0916 10:34:53.956535   17646 command_runner.go:130] > # selinux = false
	I0916 10:34:53.956540   17646 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0916 10:34:53.956548   17646 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0916 10:34:53.956554   17646 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0916 10:34:53.956560   17646 command_runner.go:130] > # seccomp_profile = ""
	I0916 10:34:53.956565   17646 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0916 10:34:53.956573   17646 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0916 10:34:53.956580   17646 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0916 10:34:53.956587   17646 command_runner.go:130] > # which might increase security.
	I0916 10:34:53.956591   17646 command_runner.go:130] > # This option is currently deprecated,
	I0916 10:34:53.956601   17646 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0916 10:34:53.956608   17646 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0916 10:34:53.956613   17646 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0916 10:34:53.956621   17646 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0916 10:34:53.956629   17646 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0916 10:34:53.956638   17646 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0916 10:34:53.956643   17646 command_runner.go:130] > # This option supports live configuration reload.
	I0916 10:34:53.956648   17646 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0916 10:34:53.956654   17646 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0916 10:34:53.956660   17646 command_runner.go:130] > # the cgroup blockio controller.
	I0916 10:34:53.956664   17646 command_runner.go:130] > # blockio_config_file = ""
	I0916 10:34:53.956673   17646 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0916 10:34:53.956679   17646 command_runner.go:130] > # blockio parameters.
	I0916 10:34:53.956683   17646 command_runner.go:130] > # blockio_reload = false
	I0916 10:34:53.956691   17646 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0916 10:34:53.956695   17646 command_runner.go:130] > # irqbalance daemon.
	I0916 10:34:53.956702   17646 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0916 10:34:53.956708   17646 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0916 10:34:53.956716   17646 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0916 10:34:53.956725   17646 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0916 10:34:53.956732   17646 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0916 10:34:53.956740   17646 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0916 10:34:53.956747   17646 command_runner.go:130] > # This option supports live configuration reload.
	I0916 10:34:53.956751   17646 command_runner.go:130] > # rdt_config_file = ""
	I0916 10:34:53.956759   17646 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0916 10:34:53.956764   17646 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0916 10:34:53.956804   17646 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0916 10:34:53.956816   17646 command_runner.go:130] > # separate_pull_cgroup = ""
	I0916 10:34:53.956822   17646 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0916 10:34:53.956828   17646 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0916 10:34:53.956834   17646 command_runner.go:130] > # will be added.
	I0916 10:34:53.956837   17646 command_runner.go:130] > # default_capabilities = [
	I0916 10:34:53.956843   17646 command_runner.go:130] > # 	"CHOWN",
	I0916 10:34:53.956847   17646 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0916 10:34:53.956853   17646 command_runner.go:130] > # 	"FSETID",
	I0916 10:34:53.956862   17646 command_runner.go:130] > # 	"FOWNER",
	I0916 10:34:53.956868   17646 command_runner.go:130] > # 	"SETGID",
	I0916 10:34:53.956872   17646 command_runner.go:130] > # 	"SETUID",
	I0916 10:34:53.956878   17646 command_runner.go:130] > # 	"SETPCAP",
	I0916 10:34:53.956882   17646 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0916 10:34:53.956890   17646 command_runner.go:130] > # 	"KILL",
	I0916 10:34:53.956896   17646 command_runner.go:130] > # ]
	I0916 10:34:53.956903   17646 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0916 10:34:53.956911   17646 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0916 10:34:53.956916   17646 command_runner.go:130] > # add_inheritable_capabilities = false
	I0916 10:34:53.956924   17646 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0916 10:34:53.956932   17646 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0916 10:34:53.956936   17646 command_runner.go:130] > default_sysctls = [
	I0916 10:34:53.956943   17646 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0916 10:34:53.956947   17646 command_runner.go:130] > ]
	I0916 10:34:53.956952   17646 command_runner.go:130] > # List of devices on the host that a
	I0916 10:34:53.956959   17646 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0916 10:34:53.956966   17646 command_runner.go:130] > # allowed_devices = [
	I0916 10:34:53.956971   17646 command_runner.go:130] > # 	"/dev/fuse",
	I0916 10:34:53.956976   17646 command_runner.go:130] > # ]
	I0916 10:34:53.956981   17646 command_runner.go:130] > # List of additional devices, specified as
	I0916 10:34:53.956990   17646 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0916 10:34:53.956997   17646 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0916 10:34:53.957003   17646 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0916 10:34:53.957009   17646 command_runner.go:130] > # additional_devices = [
	I0916 10:34:53.957013   17646 command_runner.go:130] > # ]
	I0916 10:34:53.957020   17646 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0916 10:34:53.957024   17646 command_runner.go:130] > # cdi_spec_dirs = [
	I0916 10:34:53.957030   17646 command_runner.go:130] > # 	"/etc/cdi",
	I0916 10:34:53.957034   17646 command_runner.go:130] > # 	"/var/run/cdi",
	I0916 10:34:53.957039   17646 command_runner.go:130] > # ]
	I0916 10:34:53.957045   17646 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0916 10:34:53.957052   17646 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0916 10:34:53.957057   17646 command_runner.go:130] > # Defaults to false.
	I0916 10:34:53.957062   17646 command_runner.go:130] > # device_ownership_from_security_context = false
	I0916 10:34:53.957070   17646 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0916 10:34:53.957078   17646 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0916 10:34:53.957082   17646 command_runner.go:130] > # hooks_dir = [
	I0916 10:34:53.957088   17646 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0916 10:34:53.957091   17646 command_runner.go:130] > # ]
	I0916 10:34:53.957097   17646 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0916 10:34:53.957105   17646 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0916 10:34:53.957111   17646 command_runner.go:130] > # its default mounts from the following two files:
	I0916 10:34:53.957116   17646 command_runner.go:130] > #
	I0916 10:34:53.957131   17646 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0916 10:34:53.957140   17646 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0916 10:34:53.957148   17646 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0916 10:34:53.957152   17646 command_runner.go:130] > #
	I0916 10:34:53.957158   17646 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0916 10:34:53.957166   17646 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0916 10:34:53.957174   17646 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0916 10:34:53.957180   17646 command_runner.go:130] > #      only add mounts it finds in this file.
	I0916 10:34:53.957185   17646 command_runner.go:130] > #
	I0916 10:34:53.957190   17646 command_runner.go:130] > # default_mounts_file = ""
	I0916 10:34:53.957197   17646 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0916 10:34:53.957203   17646 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0916 10:34:53.957210   17646 command_runner.go:130] > pids_limit = 1024
	I0916 10:34:53.957217   17646 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0916 10:34:53.957225   17646 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0916 10:34:53.957232   17646 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0916 10:34:53.957242   17646 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0916 10:34:53.957248   17646 command_runner.go:130] > # log_size_max = -1
	I0916 10:34:53.957254   17646 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0916 10:34:53.957260   17646 command_runner.go:130] > # log_to_journald = false
	I0916 10:34:53.957267   17646 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0916 10:34:53.957273   17646 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0916 10:34:53.957278   17646 command_runner.go:130] > # Path to directory for container attach sockets.
	I0916 10:34:53.957285   17646 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0916 10:34:53.957291   17646 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0916 10:34:53.957297   17646 command_runner.go:130] > # bind_mount_prefix = ""
	I0916 10:34:53.957303   17646 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0916 10:34:53.957308   17646 command_runner.go:130] > # read_only = false
	I0916 10:34:53.957314   17646 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0916 10:34:53.957322   17646 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0916 10:34:53.957328   17646 command_runner.go:130] > # live configuration reload.
	I0916 10:34:53.957333   17646 command_runner.go:130] > # log_level = "info"
	I0916 10:34:53.957340   17646 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0916 10:34:53.957344   17646 command_runner.go:130] > # This option supports live configuration reload.
	I0916 10:34:53.957350   17646 command_runner.go:130] > # log_filter = ""
	I0916 10:34:53.957357   17646 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0916 10:34:53.957366   17646 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0916 10:34:53.957373   17646 command_runner.go:130] > # separated by comma.
	I0916 10:34:53.957381   17646 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0916 10:34:53.957389   17646 command_runner.go:130] > # uid_mappings = ""
	I0916 10:34:53.957395   17646 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0916 10:34:53.957403   17646 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0916 10:34:53.957414   17646 command_runner.go:130] > # separated by comma.
	I0916 10:34:53.957423   17646 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0916 10:34:53.957429   17646 command_runner.go:130] > # gid_mappings = ""
	I0916 10:34:53.957435   17646 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0916 10:34:53.957443   17646 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0916 10:34:53.957449   17646 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0916 10:34:53.957459   17646 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0916 10:34:53.957465   17646 command_runner.go:130] > # minimum_mappable_uid = -1
	I0916 10:34:53.957471   17646 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0916 10:34:53.957479   17646 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0916 10:34:53.957485   17646 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0916 10:34:53.957494   17646 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0916 10:34:53.957500   17646 command_runner.go:130] > # minimum_mappable_gid = -1
	I0916 10:34:53.957506   17646 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0916 10:34:53.957513   17646 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0916 10:34:53.957521   17646 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0916 10:34:53.957525   17646 command_runner.go:130] > # ctr_stop_timeout = 30
	I0916 10:34:53.957532   17646 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0916 10:34:53.957538   17646 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0916 10:34:53.957542   17646 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0916 10:34:53.957546   17646 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0916 10:34:53.957552   17646 command_runner.go:130] > drop_infra_ctr = false
	I0916 10:34:53.957558   17646 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0916 10:34:53.957573   17646 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0916 10:34:53.957585   17646 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0916 10:34:53.957591   17646 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0916 10:34:53.957599   17646 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0916 10:34:53.957607   17646 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0916 10:34:53.957613   17646 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0916 10:34:53.957620   17646 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0916 10:34:53.957624   17646 command_runner.go:130] > # shared_cpuset = ""
	I0916 10:34:53.957632   17646 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0916 10:34:53.957643   17646 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0916 10:34:53.957650   17646 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0916 10:34:53.957656   17646 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0916 10:34:53.957662   17646 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0916 10:34:53.957668   17646 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0916 10:34:53.957676   17646 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0916 10:34:53.957683   17646 command_runner.go:130] > # enable_criu_support = false
	I0916 10:34:53.957688   17646 command_runner.go:130] > # Enable/disable the generation of the container,
	I0916 10:34:53.957696   17646 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0916 10:34:53.957702   17646 command_runner.go:130] > # enable_pod_events = false
	I0916 10:34:53.957708   17646 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0916 10:34:53.957724   17646 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0916 10:34:53.957728   17646 command_runner.go:130] > # default_runtime = "runc"
	I0916 10:34:53.957735   17646 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0916 10:34:53.957742   17646 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0916 10:34:53.957753   17646 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0916 10:34:53.957760   17646 command_runner.go:130] > # creation as a file is not desired either.
	I0916 10:34:53.957768   17646 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0916 10:34:53.957775   17646 command_runner.go:130] > # the hostname is being managed dynamically.
	I0916 10:34:53.957779   17646 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0916 10:34:53.957785   17646 command_runner.go:130] > # ]
	I0916 10:34:53.957791   17646 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0916 10:34:53.957800   17646 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0916 10:34:53.957807   17646 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0916 10:34:53.957812   17646 command_runner.go:130] > # Each entry in the table should follow the format:
	I0916 10:34:53.957817   17646 command_runner.go:130] > #
	I0916 10:34:53.957822   17646 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0916 10:34:53.957827   17646 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0916 10:34:53.957862   17646 command_runner.go:130] > # runtime_type = "oci"
	I0916 10:34:53.957870   17646 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0916 10:34:53.957875   17646 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0916 10:34:53.957879   17646 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0916 10:34:53.957883   17646 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0916 10:34:53.957886   17646 command_runner.go:130] > # monitor_env = []
	I0916 10:34:53.957891   17646 command_runner.go:130] > # privileged_without_host_devices = false
	I0916 10:34:53.957897   17646 command_runner.go:130] > # allowed_annotations = []
	I0916 10:34:53.957902   17646 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0916 10:34:53.957910   17646 command_runner.go:130] > # Where:
	I0916 10:34:53.957916   17646 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0916 10:34:53.957925   17646 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0916 10:34:53.957933   17646 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0916 10:34:53.957941   17646 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0916 10:34:53.957947   17646 command_runner.go:130] > #   in $PATH.
	I0916 10:34:53.957953   17646 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0916 10:34:53.957960   17646 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0916 10:34:53.957966   17646 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0916 10:34:53.957971   17646 command_runner.go:130] > #   state.
	I0916 10:34:53.957977   17646 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0916 10:34:53.957985   17646 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0916 10:34:53.957991   17646 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0916 10:34:53.957999   17646 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0916 10:34:53.958007   17646 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0916 10:34:53.958015   17646 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0916 10:34:53.958022   17646 command_runner.go:130] > #   The currently recognized values are:
	I0916 10:34:53.958028   17646 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0916 10:34:53.958038   17646 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0916 10:34:53.958046   17646 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0916 10:34:53.958053   17646 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0916 10:34:53.958062   17646 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0916 10:34:53.958071   17646 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0916 10:34:53.958078   17646 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0916 10:34:53.958086   17646 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0916 10:34:53.958092   17646 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0916 10:34:53.958099   17646 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0916 10:34:53.958104   17646 command_runner.go:130] > #   deprecated option "conmon".
	I0916 10:34:53.958112   17646 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0916 10:34:53.958118   17646 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0916 10:34:53.958124   17646 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0916 10:34:53.958131   17646 command_runner.go:130] > #   should be moved to the container's cgroup
	I0916 10:34:53.958138   17646 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0916 10:34:53.958146   17646 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0916 10:34:53.958155   17646 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0916 10:34:53.958160   17646 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0916 10:34:53.958165   17646 command_runner.go:130] > #
	I0916 10:34:53.958170   17646 command_runner.go:130] > # Using the seccomp notifier feature:
	I0916 10:34:53.958175   17646 command_runner.go:130] > #
	I0916 10:34:53.958181   17646 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0916 10:34:53.958189   17646 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0916 10:34:53.958195   17646 command_runner.go:130] > #
	I0916 10:34:53.958201   17646 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0916 10:34:53.958209   17646 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0916 10:34:53.958214   17646 command_runner.go:130] > #
	I0916 10:34:53.958220   17646 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0916 10:34:53.958225   17646 command_runner.go:130] > # feature.
	I0916 10:34:53.958228   17646 command_runner.go:130] > #
	I0916 10:34:53.958235   17646 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0916 10:34:53.958242   17646 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0916 10:34:53.958248   17646 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0916 10:34:53.958256   17646 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0916 10:34:53.958263   17646 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0916 10:34:53.958268   17646 command_runner.go:130] > #
	I0916 10:34:53.958274   17646 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0916 10:34:53.958282   17646 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0916 10:34:53.958287   17646 command_runner.go:130] > #
	I0916 10:34:53.958293   17646 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0916 10:34:53.958300   17646 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0916 10:34:53.958306   17646 command_runner.go:130] > #
	I0916 10:34:53.958311   17646 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0916 10:34:53.958320   17646 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0916 10:34:53.958323   17646 command_runner.go:130] > # limitation.
	I0916 10:34:53.958330   17646 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0916 10:34:53.958334   17646 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0916 10:34:53.958340   17646 command_runner.go:130] > runtime_type = "oci"
	I0916 10:34:53.958345   17646 command_runner.go:130] > runtime_root = "/run/runc"
	I0916 10:34:53.958350   17646 command_runner.go:130] > runtime_config_path = ""
	I0916 10:34:53.958355   17646 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0916 10:34:53.958361   17646 command_runner.go:130] > monitor_cgroup = "pod"
	I0916 10:34:53.958365   17646 command_runner.go:130] > monitor_exec_cgroup = ""
	I0916 10:34:53.958371   17646 command_runner.go:130] > monitor_env = [
	I0916 10:34:53.958377   17646 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0916 10:34:53.958382   17646 command_runner.go:130] > ]
	I0916 10:34:53.958386   17646 command_runner.go:130] > privileged_without_host_devices = false
	I0916 10:34:53.958397   17646 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0916 10:34:53.958405   17646 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0916 10:34:53.958413   17646 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0916 10:34:53.958421   17646 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0916 10:34:53.958430   17646 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0916 10:34:53.958437   17646 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0916 10:34:53.958446   17646 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0916 10:34:53.958455   17646 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0916 10:34:53.958463   17646 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0916 10:34:53.958472   17646 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0916 10:34:53.958478   17646 command_runner.go:130] > # Example:
	I0916 10:34:53.958482   17646 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0916 10:34:53.958489   17646 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0916 10:34:53.958496   17646 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0916 10:34:53.958503   17646 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0916 10:34:53.958507   17646 command_runner.go:130] > # cpuset = 0
	I0916 10:34:53.958513   17646 command_runner.go:130] > # cpushares = "0-1"
	I0916 10:34:53.958517   17646 command_runner.go:130] > # Where:
	I0916 10:34:53.958523   17646 command_runner.go:130] > # The workload name is workload-type.
	I0916 10:34:53.958530   17646 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0916 10:34:53.958537   17646 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0916 10:34:53.958542   17646 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0916 10:34:53.958549   17646 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0916 10:34:53.958558   17646 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0916 10:34:53.958562   17646 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0916 10:34:53.958569   17646 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0916 10:34:53.958573   17646 command_runner.go:130] > # Default value is set to true
	I0916 10:34:53.958577   17646 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0916 10:34:53.958582   17646 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0916 10:34:53.958586   17646 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0916 10:34:53.958590   17646 command_runner.go:130] > # Default value is set to 'false'
	I0916 10:34:53.958593   17646 command_runner.go:130] > # disable_hostport_mapping = false
	I0916 10:34:53.958599   17646 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0916 10:34:53.958602   17646 command_runner.go:130] > #
	I0916 10:34:53.958607   17646 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0916 10:34:53.958615   17646 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0916 10:34:53.958621   17646 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0916 10:34:53.958626   17646 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0916 10:34:53.958631   17646 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0916 10:34:53.958634   17646 command_runner.go:130] > [crio.image]
	I0916 10:34:53.958640   17646 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0916 10:34:53.958644   17646 command_runner.go:130] > # default_transport = "docker://"
	I0916 10:34:53.958649   17646 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0916 10:34:53.958655   17646 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0916 10:34:53.958659   17646 command_runner.go:130] > # global_auth_file = ""
	I0916 10:34:53.958664   17646 command_runner.go:130] > # The image used to instantiate infra containers.
	I0916 10:34:53.958670   17646 command_runner.go:130] > # This option supports live configuration reload.
	I0916 10:34:53.958676   17646 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0916 10:34:53.958682   17646 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0916 10:34:53.958690   17646 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0916 10:34:53.958695   17646 command_runner.go:130] > # This option supports live configuration reload.
	I0916 10:34:53.958701   17646 command_runner.go:130] > # pause_image_auth_file = ""
	I0916 10:34:53.958706   17646 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0916 10:34:53.958714   17646 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0916 10:34:53.958720   17646 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0916 10:34:53.958728   17646 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0916 10:34:53.958734   17646 command_runner.go:130] > # pause_command = "/pause"
	I0916 10:34:53.958740   17646 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0916 10:34:53.958748   17646 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0916 10:34:53.958753   17646 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0916 10:34:53.958759   17646 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0916 10:34:53.958767   17646 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0916 10:34:53.958776   17646 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0916 10:34:53.958780   17646 command_runner.go:130] > # pinned_images = [
	I0916 10:34:53.958784   17646 command_runner.go:130] > # ]
	I0916 10:34:53.958793   17646 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0916 10:34:53.958801   17646 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0916 10:34:53.958809   17646 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0916 10:34:53.958816   17646 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0916 10:34:53.958823   17646 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0916 10:34:53.958829   17646 command_runner.go:130] > # signature_policy = ""
	I0916 10:34:53.958836   17646 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0916 10:34:53.958843   17646 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0916 10:34:53.958851   17646 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0916 10:34:53.958861   17646 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0916 10:34:53.958867   17646 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0916 10:34:53.958871   17646 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0916 10:34:53.958877   17646 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0916 10:34:53.958883   17646 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0916 10:34:53.958887   17646 command_runner.go:130] > # changing them here.
	I0916 10:34:53.958891   17646 command_runner.go:130] > # insecure_registries = [
	I0916 10:34:53.958894   17646 command_runner.go:130] > # ]
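As the comments above suggest, skipping TLS verification is usually better expressed in containers-registries.conf(5) than in this file. A minimal, hypothetical v2-format entry for an in-house registry would look roughly like:

	# /etc/containers/registries.conf (hypothetical excerpt)
	[[registry]]
	location = "registry.internal.example:5000"
	insecure = true
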
	I0916 10:34:53.958901   17646 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0916 10:34:53.958905   17646 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0916 10:34:53.958909   17646 command_runner.go:130] > # image_volumes = "mkdir"
	I0916 10:34:53.958913   17646 command_runner.go:130] > # Temporary directory to use for storing big files
	I0916 10:34:53.958917   17646 command_runner.go:130] > # big_files_temporary_dir = ""
	I0916 10:34:53.958923   17646 command_runner.go:130] > # The crio.network table containers settings pertaining to the management of
	I0916 10:34:53.958926   17646 command_runner.go:130] > # CNI plugins.
	I0916 10:34:53.958930   17646 command_runner.go:130] > [crio.network]
	I0916 10:34:53.958935   17646 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0916 10:34:53.958940   17646 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0916 10:34:53.958944   17646 command_runner.go:130] > # cni_default_network = ""
	I0916 10:34:53.958949   17646 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0916 10:34:53.958953   17646 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0916 10:34:53.958958   17646 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0916 10:34:53.958961   17646 command_runner.go:130] > # plugin_dirs = [
	I0916 10:34:53.958964   17646 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0916 10:34:53.958968   17646 command_runner.go:130] > # ]
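Since this run ends up recommending the bridge CNI for the kvm2/crio combination with pod CIDR 10.244.0.0/16 (see the cni.go and kubeadm.go lines further down), a minimal, hypothetical conflist of the kind that would sit in network_dir looks roughly like:

	{
	  "cniVersion": "0.4.0",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    }
	  ]
	}
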
	I0916 10:34:53.958973   17646 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0916 10:34:53.958976   17646 command_runner.go:130] > [crio.metrics]
	I0916 10:34:53.958980   17646 command_runner.go:130] > # Globally enable or disable metrics support.
	I0916 10:34:53.958984   17646 command_runner.go:130] > enable_metrics = true
	I0916 10:34:53.958988   17646 command_runner.go:130] > # Specify enabled metrics collectors.
	I0916 10:34:53.958992   17646 command_runner.go:130] > # Per default all metrics are enabled.
	I0916 10:34:53.958998   17646 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0916 10:34:53.959004   17646 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0916 10:34:53.959009   17646 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0916 10:34:53.959013   17646 command_runner.go:130] > # metrics_collectors = [
	I0916 10:34:53.959016   17646 command_runner.go:130] > # 	"operations",
	I0916 10:34:53.959023   17646 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0916 10:34:53.959030   17646 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0916 10:34:53.959035   17646 command_runner.go:130] > # 	"operations_errors",
	I0916 10:34:53.959041   17646 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0916 10:34:53.959046   17646 command_runner.go:130] > # 	"image_pulls_by_name",
	I0916 10:34:53.959052   17646 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0916 10:34:53.959056   17646 command_runner.go:130] > # 	"image_pulls_failures",
	I0916 10:34:53.959062   17646 command_runner.go:130] > # 	"image_pulls_successes",
	I0916 10:34:53.959066   17646 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0916 10:34:53.959073   17646 command_runner.go:130] > # 	"image_layer_reuse",
	I0916 10:34:53.959078   17646 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0916 10:34:53.959084   17646 command_runner.go:130] > # 	"containers_oom_total",
	I0916 10:34:53.959088   17646 command_runner.go:130] > # 	"containers_oom",
	I0916 10:34:53.959094   17646 command_runner.go:130] > # 	"processes_defunct",
	I0916 10:34:53.959097   17646 command_runner.go:130] > # 	"operations_total",
	I0916 10:34:53.959102   17646 command_runner.go:130] > # 	"operations_latency_seconds",
	I0916 10:34:53.959108   17646 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0916 10:34:53.959113   17646 command_runner.go:130] > # 	"operations_errors_total",
	I0916 10:34:53.959119   17646 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0916 10:34:53.959124   17646 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0916 10:34:53.959130   17646 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0916 10:34:53.959134   17646 command_runner.go:130] > # 	"image_pulls_success_total",
	I0916 10:34:53.959140   17646 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0916 10:34:53.959145   17646 command_runner.go:130] > # 	"containers_oom_count_total",
	I0916 10:34:53.959151   17646 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0916 10:34:53.959156   17646 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0916 10:34:53.959160   17646 command_runner.go:130] > # ]
	I0916 10:34:53.959165   17646 command_runner.go:130] > # The port on which the metrics server will listen.
	I0916 10:34:53.959171   17646 command_runner.go:130] > # metrics_port = 9090
	I0916 10:34:53.959175   17646 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0916 10:34:53.959181   17646 command_runner.go:130] > # metrics_socket = ""
	I0916 10:34:53.959186   17646 command_runner.go:130] > # The certificate for the secure metrics server.
	I0916 10:34:53.959194   17646 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0916 10:34:53.959202   17646 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0916 10:34:53.959209   17646 command_runner.go:130] > # certificate on any modification event.
	I0916 10:34:53.959214   17646 command_runner.go:130] > # metrics_cert = ""
	I0916 10:34:53.959221   17646 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0916 10:34:53.959228   17646 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0916 10:34:53.959232   17646 command_runner.go:130] > # metrics_key = ""
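For reference (not captured in this run): with enable_metrics = true as set above, the listed collectors are exported in Prometheus format on metrics_port. Assuming the default port of 9090 and the conventional /metrics path, a local scrape could look like:

	curl -s http://127.0.0.1:9090/metrics | grep -E '^(crio_|container_runtime_)' | head
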
	I0916 10:34:53.959240   17646 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0916 10:34:53.959243   17646 command_runner.go:130] > [crio.tracing]
	I0916 10:34:53.959250   17646 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0916 10:34:53.959256   17646 command_runner.go:130] > # enable_tracing = false
	I0916 10:34:53.959261   17646 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0916 10:34:53.959268   17646 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0916 10:34:53.959274   17646 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0916 10:34:53.959282   17646 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0916 10:34:53.959287   17646 command_runner.go:130] > # CRI-O NRI configuration.
	I0916 10:34:53.959290   17646 command_runner.go:130] > [crio.nri]
	I0916 10:34:53.959294   17646 command_runner.go:130] > # Globally enable or disable NRI.
	I0916 10:34:53.959300   17646 command_runner.go:130] > # enable_nri = false
	I0916 10:34:53.959304   17646 command_runner.go:130] > # NRI socket to listen on.
	I0916 10:34:53.959311   17646 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0916 10:34:53.959315   17646 command_runner.go:130] > # NRI plugin directory to use.
	I0916 10:34:53.959322   17646 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0916 10:34:53.959327   17646 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0916 10:34:53.959334   17646 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0916 10:34:53.959339   17646 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0916 10:34:53.959345   17646 command_runner.go:130] > # nri_disable_connections = false
	I0916 10:34:53.959350   17646 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0916 10:34:53.959357   17646 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0916 10:34:53.959362   17646 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0916 10:34:53.959368   17646 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0916 10:34:53.959373   17646 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0916 10:34:53.959380   17646 command_runner.go:130] > [crio.stats]
	I0916 10:34:53.959385   17646 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0916 10:34:53.959392   17646 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0916 10:34:53.959397   17646 command_runner.go:130] > # stats_collection_period = 0
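Summarizing the dump above: within the sections shown, only pause_image and enable_metrics are set explicitly; everything else is left at its commented default. A minimal drop-in capturing just those overrides (file name hypothetical; /etc/crio/crio.conf.d/ is the usual drop-in directory) would be roughly:

	# /etc/crio/crio.conf.d/99-overrides.conf (hypothetical)
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"

	[crio.metrics]
	enable_metrics = true
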
	I0916 10:34:53.959484   17646 cni.go:84] Creating CNI manager for ""
	I0916 10:34:53.959498   17646 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0916 10:34:53.959505   17646 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 10:34:53.959524   17646 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.230 APIServerPort:8441 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-553844 NodeName:functional-553844 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.230"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.230 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 10:34:53.959634   17646 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.230
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-553844"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.230
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.230"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
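	The generated config above is written to /var/tmp/minikube/kubeadm.yaml.new a few lines further down. Outside of minikube, the same file could be sanity-checked without modifying the node using kubeadm's dry-run mode (sketch, not part of this run):

	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run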
	
	I0916 10:34:53.959689   17646 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:34:53.969814   17646 command_runner.go:130] > kubeadm
	I0916 10:34:53.969837   17646 command_runner.go:130] > kubectl
	I0916 10:34:53.969841   17646 command_runner.go:130] > kubelet
	I0916 10:34:53.969861   17646 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:34:53.969900   17646 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 10:34:53.979269   17646 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0916 10:34:53.995958   17646 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:34:54.012835   17646 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0916 10:34:54.028998   17646 ssh_runner.go:195] Run: grep 192.168.39.230	control-plane.minikube.internal$ /etc/hosts
	I0916 10:34:54.032749   17646 command_runner.go:130] > 192.168.39.230	control-plane.minikube.internal
	I0916 10:34:54.032827   17646 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:34:54.161068   17646 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:34:54.176070   17646 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844 for IP: 192.168.39.230
	I0916 10:34:54.176090   17646 certs.go:194] generating shared ca certs ...
	I0916 10:34:54.176110   17646 certs.go:226] acquiring lock for ca certs: {Name:mkc5eb18b90c7d501da61b378999fd65b785fd5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:34:54.176254   17646 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key
	I0916 10:34:54.176317   17646 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key
	I0916 10:34:54.176330   17646 certs.go:256] generating profile certs ...
	I0916 10:34:54.176420   17646 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/client.key
	I0916 10:34:54.176512   17646 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/apiserver.key.7b9f73b3
	I0916 10:34:54.176593   17646 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/proxy-client.key
	I0916 10:34:54.176607   17646 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 10:34:54.176628   17646 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 10:34:54.176648   17646 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:34:54.176667   17646 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:34:54.176685   17646 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 10:34:54.176705   17646 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 10:34:54.176723   17646 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 10:34:54.176741   17646 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 10:34:54.176801   17646 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem (1338 bytes)
	W0916 10:34:54.176839   17646 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203_empty.pem, impossibly tiny 0 bytes
	I0916 10:34:54.176854   17646 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:34:54.176889   17646 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem (1082 bytes)
	I0916 10:34:54.176922   17646 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:34:54.176954   17646 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem (1679 bytes)
	I0916 10:34:54.177008   17646 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem (1708 bytes)
	I0916 10:34:54.177047   17646 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem -> /usr/share/ca-certificates/11203.pem
	I0916 10:34:54.177066   17646 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem -> /usr/share/ca-certificates/112032.pem
	I0916 10:34:54.177084   17646 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:34:54.177619   17646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:34:54.201622   17646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:34:54.224717   17646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:34:54.248747   17646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:34:54.272001   17646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0916 10:34:54.295257   17646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 10:34:54.318394   17646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:34:54.341470   17646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 10:34:54.364947   17646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem --> /usr/share/ca-certificates/11203.pem (1338 bytes)
	I0916 10:34:54.388405   17646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem --> /usr/share/ca-certificates/112032.pem (1708 bytes)
	I0916 10:34:54.411730   17646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:34:54.434855   17646 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 10:34:54.451644   17646 ssh_runner.go:195] Run: openssl version
	I0916 10:34:54.457529   17646 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0916 10:34:54.457603   17646 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112032.pem && ln -fs /usr/share/ca-certificates/112032.pem /etc/ssl/certs/112032.pem"
	I0916 10:34:54.468568   17646 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112032.pem
	I0916 10:34:54.473071   17646 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112032.pem
	I0916 10:34:54.473146   17646 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112032.pem
	I0916 10:34:54.473200   17646 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112032.pem
	I0916 10:34:54.478979   17646 command_runner.go:130] > 3ec20f2e
	I0916 10:34:54.479053   17646 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112032.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 10:34:54.489001   17646 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:34:54.500128   17646 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:34:54.504474   17646 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 16 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:34:54.504658   17646 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:34:54.504709   17646 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:34:54.510639   17646 command_runner.go:130] > b5213941
	I0916 10:34:54.510799   17646 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 10:34:54.520662   17646 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11203.pem && ln -fs /usr/share/ca-certificates/11203.pem /etc/ssl/certs/11203.pem"
	I0916 10:34:54.535566   17646 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11203.pem
	I0916 10:34:54.551885   17646 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11203.pem
	I0916 10:34:54.551929   17646 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11203.pem
	I0916 10:34:54.551989   17646 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11203.pem
	I0916 10:34:54.614475   17646 command_runner.go:130] > 51391683
	I0916 10:34:54.614580   17646 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11203.pem /etc/ssl/certs/51391683.0"
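The openssl/ln pairs above follow the standard OpenSSL CA directory convention: each CA certificate placed in /etc/ssl/certs is also linked under its subject-name hash with a .0 suffix so that verification can locate it. A stand-alone sketch of the same steps for the minikube CA (hash b5213941, as printed above):

	src=/usr/share/ca-certificates/minikubeCA.pem
	sudo ln -fs "$src" /etc/ssl/certs/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$src")   # prints b5213941 for this CA
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
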
	I0916 10:34:54.712068   17646 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:34:54.725729   17646 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:34:54.725769   17646 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0916 10:34:54.725780   17646 command_runner.go:130] > Device: 253,1	Inode: 7337000     Links: 1
	I0916 10:34:54.725790   17646 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 10:34:54.725801   17646 command_runner.go:130] > Access: 2024-09-16 10:34:13.705744477 +0000
	I0916 10:34:54.725811   17646 command_runner.go:130] > Modify: 2024-09-16 10:34:13.705744477 +0000
	I0916 10:34:54.725822   17646 command_runner.go:130] > Change: 2024-09-16 10:34:13.705744477 +0000
	I0916 10:34:54.725835   17646 command_runner.go:130] >  Birth: 2024-09-16 10:34:13.705744477 +0000
	I0916 10:34:54.730463   17646 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0916 10:34:54.781875   17646 command_runner.go:130] > Certificate will not expire
	I0916 10:34:54.782236   17646 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0916 10:34:54.799129   17646 command_runner.go:130] > Certificate will not expire
	I0916 10:34:54.799393   17646 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0916 10:34:54.828763   17646 command_runner.go:130] > Certificate will not expire
	I0916 10:34:54.828862   17646 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0916 10:34:54.888492   17646 command_runner.go:130] > Certificate will not expire
	I0916 10:34:54.888578   17646 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0916 10:34:54.915347   17646 command_runner.go:130] > Certificate will not expire
	I0916 10:34:54.915973   17646 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0916 10:34:54.930753   17646 command_runner.go:130] > Certificate will not expire
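Each -checkend 86400 invocation above asks whether the certificate expires within the next 86400 seconds (24 hours); all six report "Certificate will not expire" here. An equivalent hand-run check over the same files might look like this (sketch):

	for c in apiserver-etcd-client apiserver-kubelet-client etcd/server \
	         etcd/healthcheck-client etcd/peer front-proxy-client; do
	  openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/$c.crt" \
	    && echo "$c: ok" || echo "$c: expires within 24h"
	done
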
	I0916 10:34:54.930839   17646 kubeadm.go:392] StartCluster: {Name:functional-553844 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.1 ClusterName:functional-553844 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountI
P: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:34:54.930964   17646 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 10:34:54.931040   17646 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 10:34:55.206723   17646 command_runner.go:130] > 29f56fdf2e13c8c028dc80b03b1db0ee8da7289244b3368f9a6e8716db213d1e
	I0916 10:34:55.206750   17646 command_runner.go:130] > 0718da2983026c5d88757cc81f8d9db82763d8eaec64c8089b706fcdc15d2866
	I0916 10:34:55.206762   17646 command_runner.go:130] > e2067f72690f60f2753b6b33ceaae6c8647431c4c0a6055c0943dffa6a611621
	I0916 10:34:55.206768   17646 command_runner.go:130] > 665e5ce6ab7a5a79da4635071094125712865b62fcd581daf2db7fff5bafce8a
	I0916 10:34:55.206774   17646 command_runner.go:130] > 84f3fbe9bc0e50f69d1a350e13463be07e27d165bbc881a004c0f0f48f00d581
	I0916 10:34:55.206779   17646 command_runner.go:130] > 5449e3e53c664617d9083167551c07e0692164390fe890faa6c2acf448711d41
	I0916 10:34:55.206784   17646 command_runner.go:130] > baf4cdc69419d6532efbce0cbe3f72712e6252baabc945ce9b974815304046ba
	I0916 10:34:55.206792   17646 command_runner.go:130] > 84edb04959b2d1ddce7d4879036071b77ee39eb9f0e5e90edc6fbb843efd2515
	I0916 10:34:55.210089   17646 cri.go:89] found id: "29f56fdf2e13c8c028dc80b03b1db0ee8da7289244b3368f9a6e8716db213d1e"
	I0916 10:34:55.210113   17646 cri.go:89] found id: "0718da2983026c5d88757cc81f8d9db82763d8eaec64c8089b706fcdc15d2866"
	I0916 10:34:55.210119   17646 cri.go:89] found id: "e2067f72690f60f2753b6b33ceaae6c8647431c4c0a6055c0943dffa6a611621"
	I0916 10:34:55.210124   17646 cri.go:89] found id: "665e5ce6ab7a5a79da4635071094125712865b62fcd581daf2db7fff5bafce8a"
	I0916 10:34:55.210128   17646 cri.go:89] found id: "84f3fbe9bc0e50f69d1a350e13463be07e27d165bbc881a004c0f0f48f00d581"
	I0916 10:34:55.210134   17646 cri.go:89] found id: "5449e3e53c664617d9083167551c07e0692164390fe890faa6c2acf448711d41"
	I0916 10:34:55.210138   17646 cri.go:89] found id: "baf4cdc69419d6532efbce0cbe3f72712e6252baabc945ce9b974815304046ba"
	I0916 10:34:55.210141   17646 cri.go:89] found id: "84edb04959b2d1ddce7d4879036071b77ee39eb9f0e5e90edc6fbb843efd2515"
	I0916 10:34:55.210145   17646 cri.go:89] found id: ""
	I0916 10:34:55.210194   17646 ssh_runner.go:195] Run: sudo runc list -f json
	I0916 10:34:55.279740   17646 command_runner.go:130] ! load container 11c7df787d684e9cc5ca8f6cf633ac9055d7ef72c0a76d54b25fd4a3d62f7b02: container does not exist
	I0916 10:34:55.311240   17646 command_runner.go:130] ! load container 5ef8ee89662fcae36d3705fd7eef6e5e8e8ed2765a0c899ea2692bc5e55770bb: container does not exist
	I0916 10:34:55.354559   17646 command_runner.go:130] ! load container dda8bc32e425e06c63a2ccb84bdd071dc515520e554e6e3a5a8b376e9a65c15a: container does not exist
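The eight IDs above come from the "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system" call a few lines earlier; the runc list warnings that follow apparently refer to containers CRI-O still records but which no longer exist at the OCI runtime level. Any of the returned IDs could be examined in more detail with, for example (not run here):

	sudo crictl inspect 29f56fdf2e13c8c028dc80b03b1db0ee8da7289244b3368f9a6e8716db213d1e | head -n 40
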
	
	
	==> CRI-O <==
	Sep 16 10:35:34 functional-553844 crio[2242]: time="2024-09-16 10:35:34.171810568Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482934171779762,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f739381b-4f87-4bf2-a5a2-01404205d9ff name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:35:34 functional-553844 crio[2242]: time="2024-09-16 10:35:34.172509554Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=80feb1c0-396c-47ef-acb7-fe67e7379b0c name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:35:34 functional-553844 crio[2242]: time="2024-09-16 10:35:34.172585030Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=80feb1c0-396c-47ef-acb7-fe67e7379b0c name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:35:34 functional-553844 crio[2242]: time="2024-09-16 10:35:34.173088532Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c9566037419faa5d15b1a25d437bab469eb973d38022593dd1fcea875b372030,PodSandboxId:224c8313d2a4b81f7902acab9a95c4674bb885601e7d6c432c194d4704f68448,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726482907909898403,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e9406d783b81f1f83bb9b03dd50757a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b4648b5566f070359fd2b71b0a6be370ac0bf268f558065e56f6c081c9da147,PodSandboxId:786e02c9f268ffa4631ec0765f6e369ee8084dbb9f6f0ceb206328ad1483a95b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726482907861478054,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ba1ce2146f556353256cee766fb22aa,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8a2455326fe0510166f5919a7ba808e002c9e7e6f6bac0073ebcdf2617d2c12,PodSandboxId:f630bd7b31a9986a64781d12465cbfb94677fffd47ee676d73cb45631f7bb0b1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726482907858255598,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cf351cdb4e05fb19a16881fc8f9a8bc,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ef8ee89662fcae36d3705fd7eef6e5e8e8ed2765a0c899ea2692bc5e55770bb,PodSandboxId:795a8e1b509b3194f41662edcf7b963b352e10b9d3a0b2b7bc09bb8c879e6c3c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726482895162463099,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8d5zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7709f753-5ea7-43c8-9573-107c8507e92b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /d
ev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c7df787d684e9cc5ca8f6cf633ac9055d7ef72c0a76d54b25fd4a3d62f7b02,PodSandboxId:f234b24619f341b5993ad8541868405403fc798ed93fed40f3108616a4659944,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726482895179741498,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41228d6-b7ff-4315-b9c5-05b5cc4d0acd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8addedc5b3b7251bda1891ec03e7b558acf7e48a679719cc2d0eb9af89051324,PodSandboxId:5de6db3341a3523f45534d3d25f38a36a50d8a1f094c79ff3c7afffbde2686bb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726482895774138461,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ntnpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0dcfd13-b1bc-45ef-9800-c98d2063bd43,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":
53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dda8bc32e425e06c63a2ccb84bdd071dc515520e554e6e3a5a8b376e9a65c15a,PodSandboxId:b212b903ed97c57b27e7215769435327df006237ec57bc4233e178bfb5d746f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726482895106585225,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b392e106920b290edb060cbb3942770e,},Annotations:map[str
ing]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e06948fb7d78a484090a08d9f88a0d72c4998279675de0ea7e60d51401d789c,PodSandboxId:786e02c9f268ffa4631ec0765f6e369ee8084dbb9f6f0ceb206328ad1483a95b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726482894975335593,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ba1ce2146f556353256cee766fb22aa,},Annota
tions:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3fe318aca7e7a437de0e19d52ce02314908224735a91ac782c8f4ee933b9539,PodSandboxId:f630bd7b31a9986a64781d12465cbfb94677fffd47ee676d73cb45631f7bb0b1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726482894944421475,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cf351cdb4e05fb19a16881fc8f9a8bc,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29f56fdf2e13c8c028dc80b03b1db0ee8da7289244b3368f9a6e8716db213d1e,PodSandboxId:224c8313d2a4b81f7902acab9a95c4674bb885601e7d6c432c194d4704f68448,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726482894870543821,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e9406d783b81f1f83bb9b03dd50757a,},Annotations:map[string]string{io.k
ubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0718da2983026c5d88757cc81f8d9db82763d8eaec64c8089b706fcdc15d2866,PodSandboxId:b2f2f51ddb95b3e9dbe57ebb21f9bf4c21eb43272b2604370d591f616375026b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726482869566873626,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41228d6-b7ff-4315-b9c5-05b5cc4d0acd,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2067f72690f60f2753b6b33ceaae6c8647431c4c0a6055c0943dffa6a611621,PodSandboxId:1d959480e71233b44443c2da5a38dc6f17f715531f622ace35f4a230f333de17,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726482869390985781,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ntnpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0dcfd13-b1bc-45ef-9800-c98d2063bd43,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kuberne
tes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:665e5ce6ab7a5a79da4635071094125712865b62fcd581daf2db7fff5bafce8a,PodSandboxId:53f5b7dda836048946df712ae9b391241a8de9d30959a188c8aee4c8ba71382e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726482869036696040,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-pr
oxy-8d5zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7709f753-5ea7-43c8-9573-107c8507e92b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84edb04959b2d1ddce7d4879036071b77ee39eb9f0e5e90edc6fbb843efd2515,PodSandboxId:f4b841c3fa1896c534356912d55f4f0f87af6b9539af5b549eb238f45b8ff959,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726482857879938064,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553844,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: b392e106920b290edb060cbb3942770e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=80feb1c0-396c-47ef-acb7-fe67e7379b0c name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:35:34 functional-553844 crio[2242]: time="2024-09-16 10:35:34.214334798Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=191ca476-6445-468c-bffd-ca82bcf55b66 name=/runtime.v1.RuntimeService/Version
	Sep 16 10:35:34 functional-553844 crio[2242]: time="2024-09-16 10:35:34.214411900Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=191ca476-6445-468c-bffd-ca82bcf55b66 name=/runtime.v1.RuntimeService/Version
	Sep 16 10:35:34 functional-553844 crio[2242]: time="2024-09-16 10:35:34.215819791Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=14094ce1-e14a-447e-9168-e3184fade3b3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:35:34 functional-553844 crio[2242]: time="2024-09-16 10:35:34.216268092Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482934216243241,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=14094ce1-e14a-447e-9168-e3184fade3b3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:35:34 functional-553844 crio[2242]: time="2024-09-16 10:35:34.217094363Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=10a5749d-a545-4e6f-9e71-04b472a42f9d name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:35:34 functional-553844 crio[2242]: time="2024-09-16 10:35:34.217152294Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=10a5749d-a545-4e6f-9e71-04b472a42f9d name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:35:34 functional-553844 crio[2242]: time="2024-09-16 10:35:34.218325955Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c9566037419faa5d15b1a25d437bab469eb973d38022593dd1fcea875b372030,PodSandboxId:224c8313d2a4b81f7902acab9a95c4674bb885601e7d6c432c194d4704f68448,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726482907909898403,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e9406d783b81f1f83bb9b03dd50757a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b4648b5566f070359fd2b71b0a6be370ac0bf268f558065e56f6c081c9da147,PodSandboxId:786e02c9f268ffa4631ec0765f6e369ee8084dbb9f6f0ceb206328ad1483a95b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726482907861478054,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ba1ce2146f556353256cee766fb22aa,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8a2455326fe0510166f5919a7ba808e002c9e7e6f6bac0073ebcdf2617d2c12,PodSandboxId:f630bd7b31a9986a64781d12465cbfb94677fffd47ee676d73cb45631f7bb0b1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726482907858255598,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cf351cdb4e05fb19a16881fc8f9a8bc,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ef8ee89662fcae36d3705fd7eef6e5e8e8ed2765a0c899ea2692bc5e55770bb,PodSandboxId:795a8e1b509b3194f41662edcf7b963b352e10b9d3a0b2b7bc09bb8c879e6c3c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726482895162463099,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8d5zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7709f753-5ea7-43c8-9573-107c8507e92b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /d
ev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c7df787d684e9cc5ca8f6cf633ac9055d7ef72c0a76d54b25fd4a3d62f7b02,PodSandboxId:f234b24619f341b5993ad8541868405403fc798ed93fed40f3108616a4659944,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726482895179741498,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41228d6-b7ff-4315-b9c5-05b5cc4d0acd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8addedc5b3b7251bda1891ec03e7b558acf7e48a679719cc2d0eb9af89051324,PodSandboxId:5de6db3341a3523f45534d3d25f38a36a50d8a1f094c79ff3c7afffbde2686bb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726482895774138461,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ntnpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0dcfd13-b1bc-45ef-9800-c98d2063bd43,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":
53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dda8bc32e425e06c63a2ccb84bdd071dc515520e554e6e3a5a8b376e9a65c15a,PodSandboxId:b212b903ed97c57b27e7215769435327df006237ec57bc4233e178bfb5d746f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726482895106585225,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b392e106920b290edb060cbb3942770e,},Annotations:map[str
ing]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e06948fb7d78a484090a08d9f88a0d72c4998279675de0ea7e60d51401d789c,PodSandboxId:786e02c9f268ffa4631ec0765f6e369ee8084dbb9f6f0ceb206328ad1483a95b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726482894975335593,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ba1ce2146f556353256cee766fb22aa,},Annota
tions:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3fe318aca7e7a437de0e19d52ce02314908224735a91ac782c8f4ee933b9539,PodSandboxId:f630bd7b31a9986a64781d12465cbfb94677fffd47ee676d73cb45631f7bb0b1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726482894944421475,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cf351cdb4e05fb19a16881fc8f9a8bc,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29f56fdf2e13c8c028dc80b03b1db0ee8da7289244b3368f9a6e8716db213d1e,PodSandboxId:224c8313d2a4b81f7902acab9a95c4674bb885601e7d6c432c194d4704f68448,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726482894870543821,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e9406d783b81f1f83bb9b03dd50757a,},Annotations:map[string]string{io.k
ubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0718da2983026c5d88757cc81f8d9db82763d8eaec64c8089b706fcdc15d2866,PodSandboxId:b2f2f51ddb95b3e9dbe57ebb21f9bf4c21eb43272b2604370d591f616375026b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726482869566873626,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41228d6-b7ff-4315-b9c5-05b5cc4d0acd,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2067f72690f60f2753b6b33ceaae6c8647431c4c0a6055c0943dffa6a611621,PodSandboxId:1d959480e71233b44443c2da5a38dc6f17f715531f622ace35f4a230f333de17,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726482869390985781,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ntnpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0dcfd13-b1bc-45ef-9800-c98d2063bd43,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kuberne
tes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:665e5ce6ab7a5a79da4635071094125712865b62fcd581daf2db7fff5bafce8a,PodSandboxId:53f5b7dda836048946df712ae9b391241a8de9d30959a188c8aee4c8ba71382e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726482869036696040,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-pr
oxy-8d5zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7709f753-5ea7-43c8-9573-107c8507e92b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84edb04959b2d1ddce7d4879036071b77ee39eb9f0e5e90edc6fbb843efd2515,PodSandboxId:f4b841c3fa1896c534356912d55f4f0f87af6b9539af5b549eb238f45b8ff959,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726482857879938064,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553844,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: b392e106920b290edb060cbb3942770e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=10a5749d-a545-4e6f-9e71-04b472a42f9d name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:35:34 functional-553844 crio[2242]: time="2024-09-16 10:35:34.261741272Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=96f30e63-1543-4c87-9e70-1c9c7a86ce91 name=/runtime.v1.RuntimeService/Version
	Sep 16 10:35:34 functional-553844 crio[2242]: time="2024-09-16 10:35:34.261816637Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=96f30e63-1543-4c87-9e70-1c9c7a86ce91 name=/runtime.v1.RuntimeService/Version
	Sep 16 10:35:34 functional-553844 crio[2242]: time="2024-09-16 10:35:34.263217268Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2e5e03d8-9c53-49e1-91de-28c92cc2792f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:35:34 functional-553844 crio[2242]: time="2024-09-16 10:35:34.263571156Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482934263548267,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2e5e03d8-9c53-49e1-91de-28c92cc2792f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:35:34 functional-553844 crio[2242]: time="2024-09-16 10:35:34.264157909Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=11be1de7-4d93-47c0-a812-fb962b0d64bd name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:35:34 functional-553844 crio[2242]: time="2024-09-16 10:35:34.264214081Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=11be1de7-4d93-47c0-a812-fb962b0d64bd name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:35:34 functional-553844 crio[2242]: time="2024-09-16 10:35:34.264519478Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c9566037419faa5d15b1a25d437bab469eb973d38022593dd1fcea875b372030,PodSandboxId:224c8313d2a4b81f7902acab9a95c4674bb885601e7d6c432c194d4704f68448,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726482907909898403,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e9406d783b81f1f83bb9b03dd50757a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b4648b5566f070359fd2b71b0a6be370ac0bf268f558065e56f6c081c9da147,PodSandboxId:786e02c9f268ffa4631ec0765f6e369ee8084dbb9f6f0ceb206328ad1483a95b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726482907861478054,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ba1ce2146f556353256cee766fb22aa,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8a2455326fe0510166f5919a7ba808e002c9e7e6f6bac0073ebcdf2617d2c12,PodSandboxId:f630bd7b31a9986a64781d12465cbfb94677fffd47ee676d73cb45631f7bb0b1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726482907858255598,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cf351cdb4e05fb19a16881fc8f9a8bc,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ef8ee89662fcae36d3705fd7eef6e5e8e8ed2765a0c899ea2692bc5e55770bb,PodSandboxId:795a8e1b509b3194f41662edcf7b963b352e10b9d3a0b2b7bc09bb8c879e6c3c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726482895162463099,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8d5zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7709f753-5ea7-43c8-9573-107c8507e92b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /d
ev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c7df787d684e9cc5ca8f6cf633ac9055d7ef72c0a76d54b25fd4a3d62f7b02,PodSandboxId:f234b24619f341b5993ad8541868405403fc798ed93fed40f3108616a4659944,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726482895179741498,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41228d6-b7ff-4315-b9c5-05b5cc4d0acd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8addedc5b3b7251bda1891ec03e7b558acf7e48a679719cc2d0eb9af89051324,PodSandboxId:5de6db3341a3523f45534d3d25f38a36a50d8a1f094c79ff3c7afffbde2686bb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726482895774138461,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ntnpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0dcfd13-b1bc-45ef-9800-c98d2063bd43,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":
53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dda8bc32e425e06c63a2ccb84bdd071dc515520e554e6e3a5a8b376e9a65c15a,PodSandboxId:b212b903ed97c57b27e7215769435327df006237ec57bc4233e178bfb5d746f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726482895106585225,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b392e106920b290edb060cbb3942770e,},Annotations:map[str
ing]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e06948fb7d78a484090a08d9f88a0d72c4998279675de0ea7e60d51401d789c,PodSandboxId:786e02c9f268ffa4631ec0765f6e369ee8084dbb9f6f0ceb206328ad1483a95b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726482894975335593,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ba1ce2146f556353256cee766fb22aa,},Annota
tions:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3fe318aca7e7a437de0e19d52ce02314908224735a91ac782c8f4ee933b9539,PodSandboxId:f630bd7b31a9986a64781d12465cbfb94677fffd47ee676d73cb45631f7bb0b1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726482894944421475,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cf351cdb4e05fb19a16881fc8f9a8bc,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29f56fdf2e13c8c028dc80b03b1db0ee8da7289244b3368f9a6e8716db213d1e,PodSandboxId:224c8313d2a4b81f7902acab9a95c4674bb885601e7d6c432c194d4704f68448,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726482894870543821,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e9406d783b81f1f83bb9b03dd50757a,},Annotations:map[string]string{io.k
ubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0718da2983026c5d88757cc81f8d9db82763d8eaec64c8089b706fcdc15d2866,PodSandboxId:b2f2f51ddb95b3e9dbe57ebb21f9bf4c21eb43272b2604370d591f616375026b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726482869566873626,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41228d6-b7ff-4315-b9c5-05b5cc4d0acd,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2067f72690f60f2753b6b33ceaae6c8647431c4c0a6055c0943dffa6a611621,PodSandboxId:1d959480e71233b44443c2da5a38dc6f17f715531f622ace35f4a230f333de17,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726482869390985781,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ntnpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0dcfd13-b1bc-45ef-9800-c98d2063bd43,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kuberne
tes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:665e5ce6ab7a5a79da4635071094125712865b62fcd581daf2db7fff5bafce8a,PodSandboxId:53f5b7dda836048946df712ae9b391241a8de9d30959a188c8aee4c8ba71382e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726482869036696040,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-pr
oxy-8d5zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7709f753-5ea7-43c8-9573-107c8507e92b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84edb04959b2d1ddce7d4879036071b77ee39eb9f0e5e90edc6fbb843efd2515,PodSandboxId:f4b841c3fa1896c534356912d55f4f0f87af6b9539af5b549eb238f45b8ff959,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726482857879938064,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553844,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: b392e106920b290edb060cbb3942770e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=11be1de7-4d93-47c0-a812-fb962b0d64bd name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:35:34 functional-553844 crio[2242]: time="2024-09-16 10:35:34.296639537Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ddb028da-7ad2-495a-8100-c0b370e322e9 name=/runtime.v1.RuntimeService/Version
	Sep 16 10:35:34 functional-553844 crio[2242]: time="2024-09-16 10:35:34.296715193Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ddb028da-7ad2-495a-8100-c0b370e322e9 name=/runtime.v1.RuntimeService/Version
	Sep 16 10:35:34 functional-553844 crio[2242]: time="2024-09-16 10:35:34.299108238Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8864f74a-b234-4f37-bd5e-a1d6c3bab153 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:35:34 functional-553844 crio[2242]: time="2024-09-16 10:35:34.299501034Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482934299476060,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8864f74a-b234-4f37-bd5e-a1d6c3bab153 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:35:34 functional-553844 crio[2242]: time="2024-09-16 10:35:34.300106586Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9f69e277-d67d-48c5-a778-d505132a9eaf name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:35:34 functional-553844 crio[2242]: time="2024-09-16 10:35:34.300162451Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9f69e277-d67d-48c5-a778-d505132a9eaf name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:35:34 functional-553844 crio[2242]: time="2024-09-16 10:35:34.300441147Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c9566037419faa5d15b1a25d437bab469eb973d38022593dd1fcea875b372030,PodSandboxId:224c8313d2a4b81f7902acab9a95c4674bb885601e7d6c432c194d4704f68448,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726482907909898403,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e9406d783b81f1f83bb9b03dd50757a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b4648b5566f070359fd2b71b0a6be370ac0bf268f558065e56f6c081c9da147,PodSandboxId:786e02c9f268ffa4631ec0765f6e369ee8084dbb9f6f0ceb206328ad1483a95b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726482907861478054,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ba1ce2146f556353256cee766fb22aa,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8a2455326fe0510166f5919a7ba808e002c9e7e6f6bac0073ebcdf2617d2c12,PodSandboxId:f630bd7b31a9986a64781d12465cbfb94677fffd47ee676d73cb45631f7bb0b1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726482907858255598,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cf351cdb4e05fb19a16881fc8f9a8bc,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ef8ee89662fcae36d3705fd7eef6e5e8e8ed2765a0c899ea2692bc5e55770bb,PodSandboxId:795a8e1b509b3194f41662edcf7b963b352e10b9d3a0b2b7bc09bb8c879e6c3c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726482895162463099,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8d5zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7709f753-5ea7-43c8-9573-107c8507e92b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /d
ev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c7df787d684e9cc5ca8f6cf633ac9055d7ef72c0a76d54b25fd4a3d62f7b02,PodSandboxId:f234b24619f341b5993ad8541868405403fc798ed93fed40f3108616a4659944,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726482895179741498,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41228d6-b7ff-4315-b9c5-05b5cc4d0acd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8addedc5b3b7251bda1891ec03e7b558acf7e48a679719cc2d0eb9af89051324,PodSandboxId:5de6db3341a3523f45534d3d25f38a36a50d8a1f094c79ff3c7afffbde2686bb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726482895774138461,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ntnpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0dcfd13-b1bc-45ef-9800-c98d2063bd43,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":
53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dda8bc32e425e06c63a2ccb84bdd071dc515520e554e6e3a5a8b376e9a65c15a,PodSandboxId:b212b903ed97c57b27e7215769435327df006237ec57bc4233e178bfb5d746f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726482895106585225,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b392e106920b290edb060cbb3942770e,},Annotations:map[str
ing]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e06948fb7d78a484090a08d9f88a0d72c4998279675de0ea7e60d51401d789c,PodSandboxId:786e02c9f268ffa4631ec0765f6e369ee8084dbb9f6f0ceb206328ad1483a95b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726482894975335593,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ba1ce2146f556353256cee766fb22aa,},Annota
tions:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3fe318aca7e7a437de0e19d52ce02314908224735a91ac782c8f4ee933b9539,PodSandboxId:f630bd7b31a9986a64781d12465cbfb94677fffd47ee676d73cb45631f7bb0b1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726482894944421475,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cf351cdb4e05fb19a16881fc8f9a8bc,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29f56fdf2e13c8c028dc80b03b1db0ee8da7289244b3368f9a6e8716db213d1e,PodSandboxId:224c8313d2a4b81f7902acab9a95c4674bb885601e7d6c432c194d4704f68448,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726482894870543821,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e9406d783b81f1f83bb9b03dd50757a,},Annotations:map[string]string{io.k
ubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0718da2983026c5d88757cc81f8d9db82763d8eaec64c8089b706fcdc15d2866,PodSandboxId:b2f2f51ddb95b3e9dbe57ebb21f9bf4c21eb43272b2604370d591f616375026b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726482869566873626,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41228d6-b7ff-4315-b9c5-05b5cc4d0acd,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2067f72690f60f2753b6b33ceaae6c8647431c4c0a6055c0943dffa6a611621,PodSandboxId:1d959480e71233b44443c2da5a38dc6f17f715531f622ace35f4a230f333de17,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726482869390985781,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ntnpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0dcfd13-b1bc-45ef-9800-c98d2063bd43,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kuberne
tes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:665e5ce6ab7a5a79da4635071094125712865b62fcd581daf2db7fff5bafce8a,PodSandboxId:53f5b7dda836048946df712ae9b391241a8de9d30959a188c8aee4c8ba71382e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726482869036696040,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-pr
oxy-8d5zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7709f753-5ea7-43c8-9573-107c8507e92b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84edb04959b2d1ddce7d4879036071b77ee39eb9f0e5e90edc6fbb843efd2515,PodSandboxId:f4b841c3fa1896c534356912d55f4f0f87af6b9539af5b549eb238f45b8ff959,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726482857879938064,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553844,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: b392e106920b290edb060cbb3942770e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9f69e277-d67d-48c5-a778-d505132a9eaf name=/runtime.v1.RuntimeService/ListContainers
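	The debug-level entries above are CRI-O's own journal output from the node (note the crio[2242] prefix). A minimal sketch of pulling the same stream when reproducing this run locally, assuming the profile name matches the hostname above and that CRI-O runs as the systemd unit crio; these commands are illustrative and not part of the captured log:

	    # open a shell on the minikube node for this profile
	    minikube -p functional-553844 ssh
	    # follow CRI-O's journal around the time window of the failure
	    sudo journalctl -u crio --no-pager --since "2024-09-16 10:35:00"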
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	c9566037419fa       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   26 seconds ago       Running             kube-scheduler            2                   224c8313d2a4b       kube-scheduler-functional-553844
	7b4648b5566f0       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   26 seconds ago       Running             kube-controller-manager   2                   786e02c9f268f       kube-controller-manager-functional-553844
	a8a2455326fe0       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   26 seconds ago       Running             kube-apiserver            2                   f630bd7b31a99       kube-apiserver-functional-553844
	8addedc5b3b72       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   38 seconds ago       Running             coredns                   1                   5de6db3341a35       coredns-7c65d6cfc9-ntnpc
	11c7df787d684       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   39 seconds ago       Running             storage-provisioner       1                   f234b24619f34       storage-provisioner
	5ef8ee89662fc       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   39 seconds ago       Running             kube-proxy                1                   795a8e1b509b3       kube-proxy-8d5zp
	dda8bc32e425e       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   39 seconds ago       Running             etcd                      1                   b212b903ed97c       etcd-functional-553844
	3e06948fb7d78       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   39 seconds ago       Exited              kube-controller-manager   1                   786e02c9f268f       kube-controller-manager-functional-553844
	a3fe318aca7e7       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   39 seconds ago       Exited              kube-apiserver            1                   f630bd7b31a99       kube-apiserver-functional-553844
	29f56fdf2e13c       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   39 seconds ago       Exited              kube-scheduler            1                   224c8313d2a4b       kube-scheduler-functional-553844
	0718da2983026       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   About a minute ago   Exited              storage-provisioner       0                   b2f2f51ddb95b       storage-provisioner
	e2067f72690f6       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   About a minute ago   Exited              coredns                   0                   1d959480e7123       coredns-7c65d6cfc9-ntnpc
	665e5ce6ab7a5       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   About a minute ago   Exited              kube-proxy                0                   53f5b7dda8360       kube-proxy-8d5zp
	84edb04959b2d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   About a minute ago   Exited              etcd                      0                   f4b841c3fa189       etcd-functional-553844
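	The table above mirrors what crictl reports when pointed at the same CRI-O socket the kubelet uses (the socket path appears in the node annotations further down). A sketch, assuming crictl is available on the node:

	    # list all containers, running and exited, straight from CRI-O
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a
	    # list the pod sandboxes that the POD ID column refers to
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock pods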
	
	
	==> coredns [8addedc5b3b7251bda1891ec03e7b558acf7e48a679719cc2d0eb9af89051324] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:49303 - 36766 "HINFO IN 7792431763943854020.5109512536554140100. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.028767023s
	
	
	==> coredns [e2067f72690f60f2753b6b33ceaae6c8647431c4c0a6055c0943dffa6a611621] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:44555 - 37636 "HINFO IN 1428552004750772321.6386749862655392797. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.155382227s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-553844
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-553844
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=functional-553844
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_34_24_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:34:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-553844
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:35:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:35:10 +0000   Mon, 16 Sep 2024 10:34:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:35:10 +0000   Mon, 16 Sep 2024 10:34:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:35:10 +0000   Mon, 16 Sep 2024 10:34:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:35:10 +0000   Mon, 16 Sep 2024 10:34:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.230
	  Hostname:    functional-553844
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 e02954b5bf404845959584edf15b4c70
	  System UUID:                e02954b5-bf40-4845-9595-84edf15b4c70
	  Boot ID:                    f32c4525-4b20-48f0-8997-63a4d85e0a22
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-ntnpc                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     66s
	  kube-system                 etcd-functional-553844                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         71s
	  kube-system                 kube-apiserver-functional-553844             250m (12%)    0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 kube-controller-manager-functional-553844    200m (10%)    0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 kube-proxy-8d5zp                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 kube-scheduler-functional-553844             100m (5%)     0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 65s                kube-proxy       
	  Normal  Starting                 36s                kube-proxy       
	  Normal  NodeAllocatableEnforced  71s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  71s                kubelet          Node functional-553844 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    71s                kubelet          Node functional-553844 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     71s                kubelet          Node functional-553844 status is now: NodeHasSufficientPID
	  Normal  Starting                 71s                kubelet          Starting kubelet.
	  Normal  NodeReady                70s                kubelet          Node functional-553844 status is now: NodeReady
	  Normal  RegisteredNode           67s                node-controller  Node functional-553844 event: Registered Node functional-553844 in Controller
	  Normal  Starting                 27s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  27s (x8 over 27s)  kubelet          Node functional-553844 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27s (x8 over 27s)  kubelet          Node functional-553844 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27s (x7 over 27s)  kubelet          Node functional-553844 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           20s                node-controller  Node functional-553844 event: Registered Node functional-553844 in Controller
	
	
	==> dmesg <==
	[Sep16 10:34] systemd-fstab-generator[590]: Ignoring "noauto" option for root device
	[  +0.060120] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061281] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.192979] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.124868] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.273205] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[  +3.973731] systemd-fstab-generator[746]: Ignoring "noauto" option for root device
	[  +4.437848] systemd-fstab-generator[882]: Ignoring "noauto" option for root device
	[  +0.066860] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.492744] systemd-fstab-generator[1218]: Ignoring "noauto" option for root device
	[  +0.076580] kauditd_printk_skb: 69 callbacks suppressed
	[  +4.721276] systemd-fstab-generator[1333]: Ignoring "noauto" option for root device
	[  +0.603762] kauditd_printk_skb: 46 callbacks suppressed
	[ +16.520372] systemd-fstab-generator[2167]: Ignoring "noauto" option for root device
	[  +0.078621] kauditd_printk_skb: 60 callbacks suppressed
	[  +0.049083] systemd-fstab-generator[2179]: Ignoring "noauto" option for root device
	[  +0.190042] systemd-fstab-generator[2193]: Ignoring "noauto" option for root device
	[  +0.140022] systemd-fstab-generator[2205]: Ignoring "noauto" option for root device
	[  +0.285394] systemd-fstab-generator[2233]: Ignoring "noauto" option for root device
	[  +8.132216] systemd-fstab-generator[2349]: Ignoring "noauto" option for root device
	[  +0.075744] kauditd_printk_skb: 100 callbacks suppressed
	[Sep16 10:35] systemd-fstab-generator[3196]: Ignoring "noauto" option for root device
	[  +0.082290] kauditd_printk_skb: 96 callbacks suppressed
	[  +9.215887] kauditd_printk_skb: 32 callbacks suppressed
	[ +11.912179] systemd-fstab-generator[3473]: Ignoring "noauto" option for root device
	
	
	==> etcd [84edb04959b2d1ddce7d4879036071b77ee39eb9f0e5e90edc6fbb843efd2515] <==
	{"level":"info","ts":"2024-09-16T10:34:19.704561Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 became leader at term 2"}
	{"level":"info","ts":"2024-09-16T10:34:19.704668Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f4acae94ef986412 elected leader f4acae94ef986412 at term 2"}
	{"level":"info","ts":"2024-09-16T10:34:19.706149Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:34:19.707329Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"f4acae94ef986412","local-member-attributes":"{Name:functional-553844 ClientURLs:[https://192.168.39.230:2379]}","request-path":"/0/members/f4acae94ef986412/attributes","cluster-id":"b0aea99135fe63d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:34:19.707491Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:34:19.707836Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:34:19.708720Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:34:19.709583Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:34:19.710343Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.230:2379"}
	{"level":"info","ts":"2024-09-16T10:34:19.707961Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b0aea99135fe63d","local-member-id":"f4acae94ef986412","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:34:19.710492Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:34:19.710531Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:34:19.710611Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T10:34:19.708779Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:34:19.710874Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:34:39.151449Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-16T10:34:39.151612Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-553844","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.230:2380"],"advertise-client-urls":["https://192.168.39.230:2379"]}
	{"level":"warn","ts":"2024-09-16T10:34:39.151734Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:34:39.151823Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:34:39.218449Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.230:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:34:39.218489Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.230:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-16T10:34:39.218574Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"f4acae94ef986412","current-leader-member-id":"f4acae94ef986412"}
	{"level":"info","ts":"2024-09-16T10:34:39.417180Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.230:2380"}
	{"level":"info","ts":"2024-09-16T10:34:39.417312Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.230:2380"}
	{"level":"info","ts":"2024-09-16T10:34:39.417337Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-553844","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.230:2380"],"advertise-client-urls":["https://192.168.39.230:2379"]}
	
	
	==> etcd [dda8bc32e425e06c63a2ccb84bdd071dc515520e554e6e3a5a8b376e9a65c15a] <==
	{"level":"info","ts":"2024-09-16T10:34:55.917354Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b0aea99135fe63d","local-member-id":"f4acae94ef986412","added-peer-id":"f4acae94ef986412","added-peer-peer-urls":["https://192.168.39.230:2380"]}
	{"level":"info","ts":"2024-09-16T10:34:55.917463Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b0aea99135fe63d","local-member-id":"f4acae94ef986412","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:34:55.917506Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:34:55.918704Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:34:55.932222Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T10:34:55.933659Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.230:2380"}
	{"level":"info","ts":"2024-09-16T10:34:55.933712Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.230:2380"}
	{"level":"info","ts":"2024-09-16T10:34:55.933947Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"f4acae94ef986412","initial-advertise-peer-urls":["https://192.168.39.230:2380"],"listen-peer-urls":["https://192.168.39.230:2380"],"advertise-client-urls":["https://192.168.39.230:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.230:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T10:34:55.936086Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T10:34:56.955096Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-16T10:34:56.955132Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-16T10:34:56.955163Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 received MsgPreVoteResp from f4acae94ef986412 at term 2"}
	{"level":"info","ts":"2024-09-16T10:34:56.955177Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 became candidate at term 3"}
	{"level":"info","ts":"2024-09-16T10:34:56.955183Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 received MsgVoteResp from f4acae94ef986412 at term 3"}
	{"level":"info","ts":"2024-09-16T10:34:56.955191Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 became leader at term 3"}
	{"level":"info","ts":"2024-09-16T10:34:56.955203Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f4acae94ef986412 elected leader f4acae94ef986412 at term 3"}
	{"level":"info","ts":"2024-09-16T10:34:56.959113Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"f4acae94ef986412","local-member-attributes":"{Name:functional-553844 ClientURLs:[https://192.168.39.230:2379]}","request-path":"/0/members/f4acae94ef986412/attributes","cluster-id":"b0aea99135fe63d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:34:56.959223Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:34:56.959352Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:34:56.959702Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:34:56.959718Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:34:56.960394Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:34:56.960508Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:34:56.961360Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T10:34:56.961615Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.230:2379"}
	
	
	==> kernel <==
	 10:35:34 up 1 min,  0 users,  load average: 0.25, 0.10, 0.04
	Linux functional-553844 5.10.207 #1 SMP Sun Sep 15 20:39:46 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [a3fe318aca7e7a437de0e19d52ce02314908224735a91ac782c8f4ee933b9539] <==
	I0916 10:34:58.362657       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0916 10:34:58.362698       1 secure_serving.go:258] Stopped listening on [::]:8441
	I0916 10:34:58.362728       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0916 10:34:58.363145       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0916 10:34:58.369146       1 dynamic_serving_content.go:149] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0916 10:34:58.389365       1 controller.go:157] Shutting down quota evaluator
	I0916 10:34:58.389399       1 controller.go:176] quota evaluator worker shutdown
	I0916 10:34:58.390157       1 controller.go:176] quota evaluator worker shutdown
	I0916 10:34:58.390251       1 controller.go:176] quota evaluator worker shutdown
	I0916 10:34:58.390276       1 controller.go:176] quota evaluator worker shutdown
	I0916 10:34:58.390282       1 controller.go:176] quota evaluator worker shutdown
	E0916 10:34:59.144838       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8441/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:34:59.144899       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8441/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8441: connect: connection refused. Retrying...
	W0916 10:35:00.144926       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8441/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8441: connect: connection refused. Retrying...
	E0916 10:35:00.145224       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8441/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:35:01.145011       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8441/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8441: connect: connection refused. Retrying...
	E0916 10:35:01.145158       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8441/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:35:02.145262       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8441/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8441: connect: connection refused. Retrying...
	E0916 10:35:02.145467       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8441/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:35:03.145393       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8441/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8441: connect: connection refused. Retrying...
	E0916 10:35:03.145608       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8441/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:35:04.145258       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8441/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8441: connect: connection refused. Retrying...
	E0916 10:35:04.145649       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8441/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:35:05.144740       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8441/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8441: connect: connection refused. Retrying...
	E0916 10:35:05.145003       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8441/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8441: connect: connection refused" logger="UnhandledError"
	
	
	==> kube-apiserver [a8a2455326fe0510166f5919a7ba808e002c9e7e6f6bac0073ebcdf2617d2c12] <==
	I0916 10:35:10.817642       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0916 10:35:10.821388       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0916 10:35:10.821418       1 policy_source.go:224] refreshing policies
	I0916 10:35:10.848027       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0916 10:35:10.848431       1 aggregator.go:171] initial CRD sync complete...
	I0916 10:35:10.848456       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 10:35:10.848514       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 10:35:10.848521       1 cache.go:39] Caches are synced for autoregister controller
	I0916 10:35:10.891021       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0916 10:35:10.891238       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0916 10:35:10.893720       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0916 10:35:10.894833       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0916 10:35:10.894861       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0916 10:35:10.895008       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0916 10:35:10.912774       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0916 10:35:10.913152       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 10:35:10.920344       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 10:35:11.693112       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0916 10:35:11.908543       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.230]
	I0916 10:35:11.914737       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 10:35:12.098488       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 10:35:12.108702       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 10:35:12.144954       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 10:35:12.176210       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 10:35:12.183000       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [3e06948fb7d78a484090a08d9f88a0d72c4998279675de0ea7e60d51401d789c] <==
	
	
	==> kube-controller-manager [7b4648b5566f070359fd2b71b0a6be370ac0bf268f558065e56f6c081c9da147] <==
	I0916 10:35:14.120935       1 shared_informer.go:320] Caches are synced for expand
	I0916 10:35:14.120843       1 shared_informer.go:320] Caches are synced for HPA
	I0916 10:35:14.121152       1 shared_informer.go:320] Caches are synced for TTL
	I0916 10:35:14.122526       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0916 10:35:14.122616       1 shared_informer.go:320] Caches are synced for ephemeral
	I0916 10:35:14.122690       1 shared_informer.go:320] Caches are synced for taint
	I0916 10:35:14.122803       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0916 10:35:14.123280       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-553844"
	I0916 10:35:14.124941       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0916 10:35:14.144150       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0916 10:35:14.146147       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0916 10:35:14.148698       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0916 10:35:14.153801       1 shared_informer.go:320] Caches are synced for PV protection
	I0916 10:35:14.209749       1 shared_informer.go:320] Caches are synced for persistent volume
	I0916 10:35:14.242927       1 shared_informer.go:320] Caches are synced for attach detach
	I0916 10:35:14.298281       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:35:14.321144       1 shared_informer.go:320] Caches are synced for stateful set
	I0916 10:35:14.321212       1 shared_informer.go:320] Caches are synced for daemon sets
	I0916 10:35:14.326094       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:35:14.534087       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="385.245988ms"
	I0916 10:35:14.534305       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="82.383µs"
	I0916 10:35:14.753631       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:35:14.816601       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:35:14.816647       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0916 10:35:17.621436       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="88.997µs"
	
	
	==> kube-proxy [5ef8ee89662fcae36d3705fd7eef6e5e8e8ed2765a0c899ea2692bc5e55770bb] <==
	W0916 10:34:58.431668       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:34:58.431713       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:34:58.431778       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:34:58.431838       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:34:59.284989       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:34:59.285188       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:34:59.332364       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-553844&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:34:59.332464       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-553844&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:34:59.470296       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:34:59.470425       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:35:01.798494       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-553844&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:35:01.798626       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-553844&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:35:01.949792       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:35:01.949869       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:35:02.221487       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:35:02.221565       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:35:06.652928       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:35:06.652990       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:35:07.272641       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-553844&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:35:07.272703       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-553844&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:35:07.363931       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:35:07.363993       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	I0916 10:35:14.930499       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:35:15.331242       1 shared_informer.go:320] Caches are synced for node config
	I0916 10:35:16.430835       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [665e5ce6ab7a5a79da4635071094125712865b62fcd581daf2db7fff5bafce8a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0916 10:34:29.358558       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0916 10:34:29.370671       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.230"]
	E0916 10:34:29.370729       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:34:29.497786       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0916 10:34:29.497892       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0916 10:34:29.497969       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:34:29.504350       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:34:29.504625       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:34:29.504656       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:34:29.512871       1 config.go:199] "Starting service config controller"
	I0916 10:34:29.512919       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:34:29.512969       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:34:29.512973       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:34:29.513465       1 config.go:328] "Starting node config controller"
	I0916 10:34:29.513501       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:34:29.613132       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:34:29.613189       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:34:29.615147       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [29f56fdf2e13c8c028dc80b03b1db0ee8da7289244b3368f9a6e8716db213d1e] <==
	I0916 10:34:56.127606       1 serving.go:386] Generated self-signed cert in-memory
	W0916 10:34:58.216123       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0916 10:34:58.216271       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0916 10:34:58.216330       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 10:34:58.216338       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 10:34:58.329214       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0916 10:34:58.329252       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:34:58.339781       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0916 10:34:58.339820       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0916 10:34:58.341161       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 10:34:58.339879       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0916 10:34:58.441945       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 10:35:05.904806       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0916 10:35:05.904973       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0916 10:35:05.905193       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [c9566037419faa5d15b1a25d437bab469eb973d38022593dd1fcea875b372030] <==
	I0916 10:35:09.773229       1 serving.go:386] Generated self-signed cert in-memory
	W0916 10:35:10.768440       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0916 10:35:10.768857       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0916 10:35:10.768917       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 10:35:10.768943       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 10:35:10.817479       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0916 10:35:10.817581       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:35:10.824338       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0916 10:35:10.824417       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 10:35:10.825100       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0916 10:35:10.825460       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0916 10:35:10.925324       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 10:35:07 functional-553844 kubelet[3203]: I0916 10:35:07.645293    3203 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0cf351cdb4e05fb19a16881fc8f9a8bc-usr-share-ca-certificates\") pod \"kube-apiserver-functional-553844\" (UID: \"0cf351cdb4e05fb19a16881fc8f9a8bc\") " pod="kube-system/kube-apiserver-functional-553844"
	Sep 16 10:35:07 functional-553844 kubelet[3203]: I0916 10:35:07.645315    3203 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0ba1ce2146f556353256cee766fb22aa-k8s-certs\") pod \"kube-controller-manager-functional-553844\" (UID: \"0ba1ce2146f556353256cee766fb22aa\") " pod="kube-system/kube-controller-manager-functional-553844"
	Sep 16 10:35:07 functional-553844 kubelet[3203]: I0916 10:35:07.645334    3203 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8e9406d783b81f1f83bb9b03dd50757a-kubeconfig\") pod \"kube-scheduler-functional-553844\" (UID: \"8e9406d783b81f1f83bb9b03dd50757a\") " pod="kube-system/kube-scheduler-functional-553844"
	Sep 16 10:35:07 functional-553844 kubelet[3203]: I0916 10:35:07.788377    3203 kubelet_node_status.go:72] "Attempting to register node" node="functional-553844"
	Sep 16 10:35:07 functional-553844 kubelet[3203]: E0916 10:35:07.789378    3203 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8441/api/v1/nodes\": dial tcp 192.168.39.230:8441: connect: connection refused" node="functional-553844"
	Sep 16 10:35:07 functional-553844 kubelet[3203]: I0916 10:35:07.840020    3203 scope.go:117] "RemoveContainer" containerID="a3fe318aca7e7a437de0e19d52ce02314908224735a91ac782c8f4ee933b9539"
	Sep 16 10:35:07 functional-553844 kubelet[3203]: I0916 10:35:07.840577    3203 scope.go:117] "RemoveContainer" containerID="3e06948fb7d78a484090a08d9f88a0d72c4998279675de0ea7e60d51401d789c"
	Sep 16 10:35:07 functional-553844 kubelet[3203]: I0916 10:35:07.842114    3203 scope.go:117] "RemoveContainer" containerID="29f56fdf2e13c8c028dc80b03b1db0ee8da7289244b3368f9a6e8716db213d1e"
	Sep 16 10:35:08 functional-553844 kubelet[3203]: E0916 10:35:08.005440    3203 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-553844?timeout=10s\": dial tcp 192.168.39.230:8441: connect: connection refused" interval="800ms"
	Sep 16 10:35:08 functional-553844 kubelet[3203]: I0916 10:35:08.191646    3203 kubelet_node_status.go:72] "Attempting to register node" node="functional-553844"
	Sep 16 10:35:10 functional-553844 kubelet[3203]: I0916 10:35:10.876113    3203 kubelet_node_status.go:111] "Node was previously registered" node="functional-553844"
	Sep 16 10:35:10 functional-553844 kubelet[3203]: I0916 10:35:10.876237    3203 kubelet_node_status.go:75] "Successfully registered node" node="functional-553844"
	Sep 16 10:35:10 functional-553844 kubelet[3203]: I0916 10:35:10.876264    3203 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 16 10:35:10 functional-553844 kubelet[3203]: I0916 10:35:10.877661    3203 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 16 10:35:10 functional-553844 kubelet[3203]: E0916 10:35:10.910901    3203 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"etcd-functional-553844\" already exists" pod="kube-system/etcd-functional-553844"
	Sep 16 10:35:11 functional-553844 kubelet[3203]: I0916 10:35:11.386721    3203 apiserver.go:52] "Watching apiserver"
	Sep 16 10:35:11 functional-553844 kubelet[3203]: I0916 10:35:11.413817    3203 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 16 10:35:11 functional-553844 kubelet[3203]: I0916 10:35:11.477402    3203 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7709f753-5ea7-43c8-9573-107c8507e92b-xtables-lock\") pod \"kube-proxy-8d5zp\" (UID: \"7709f753-5ea7-43c8-9573-107c8507e92b\") " pod="kube-system/kube-proxy-8d5zp"
	Sep 16 10:35:11 functional-553844 kubelet[3203]: I0916 10:35:11.477604    3203 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7709f753-5ea7-43c8-9573-107c8507e92b-lib-modules\") pod \"kube-proxy-8d5zp\" (UID: \"7709f753-5ea7-43c8-9573-107c8507e92b\") " pod="kube-system/kube-proxy-8d5zp"
	Sep 16 10:35:11 functional-553844 kubelet[3203]: I0916 10:35:11.477696    3203 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f41228d6-b7ff-4315-b9c5-05b5cc4d0acd-tmp\") pod \"storage-provisioner\" (UID: \"f41228d6-b7ff-4315-b9c5-05b5cc4d0acd\") " pod="kube-system/storage-provisioner"
	Sep 16 10:35:11 functional-553844 kubelet[3203]: E0916 10:35:11.564093    3203 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"etcd-functional-553844\" already exists" pod="kube-system/etcd-functional-553844"
	Sep 16 10:35:17 functional-553844 kubelet[3203]: E0916 10:35:17.490489    3203 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482917487632453,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:35:17 functional-553844 kubelet[3203]: E0916 10:35:17.490529    3203 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482917487632453,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:35:27 functional-553844 kubelet[3203]: E0916 10:35:27.494110    3203 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482927493621389,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:35:27 functional-553844 kubelet[3203]: E0916 10:35:27.494139    3203 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482927493621389,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [0718da2983026c5d88757cc81f8d9db82763d8eaec64c8089b706fcdc15d2866] <==
	I0916 10:34:29.683223       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	
	
	==> storage-provisioner [11c7df787d684e9cc5ca8f6cf633ac9055d7ef72c0a76d54b25fd4a3d62f7b02] <==
	I0916 10:34:56.077531       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 10:34:58.308783       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 10:34:58.325776       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	E0916 10:34:59.385726       1 leaderelection.go:361] Failed to update lock: Put "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0916 10:35:02.837859       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0916 10:35:07.096688       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	I0916 10:35:10.935925       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 10:35:10.936824       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-553844_6476f869-e006-4732-b59f-a625eeed2789!
	I0916 10:35:10.936273       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e694f165-5b09-46c9-81ea-a2730989eaff", APIVersion:"v1", ResourceVersion:"401", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-553844_6476f869-e006-4732-b59f-a625eeed2789 became leader
	I0916 10:35:11.037327       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-553844_6476f869-e006-4732-b59f-a625eeed2789!
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0916 10:35:33.848838   18024 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19651-3851/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-553844 -n functional-553844
helpers_test.go:261: (dbg) Run:  kubectl --context functional-553844 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context functional-553844 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (454.14µs)
helpers_test.go:263: kubectl --context functional-553844 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestFunctional/serial/KubectlGetPods (1.94s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (1.95s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-553844 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:810: (dbg) Non-zero exit: kubectl --context functional-553844 get po -l tier=control-plane -n kube-system -o=json: fork/exec /usr/local/bin/kubectl: exec format error (413.097µs)
functional_test.go:812: failed to get components. args "kubectl --context functional-553844 get po -l tier=control-plane -n kube-system -o=json": fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-553844 -n functional-553844
helpers_test.go:244: <<< TestFunctional/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-553844 logs -n 25: (1.419382551s)
helpers_test.go:252: TestFunctional/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| unpause | nospam-263701 --log_dir                                                  | nospam-263701     | jenkins | v1.34.0 | 16 Sep 24 10:33 UTC | 16 Sep 24 10:33 UTC |
	|         | /tmp/nospam-263701 unpause                                               |                   |         |         |                     |                     |
	| unpause | nospam-263701 --log_dir                                                  | nospam-263701     | jenkins | v1.34.0 | 16 Sep 24 10:33 UTC | 16 Sep 24 10:33 UTC |
	|         | /tmp/nospam-263701 unpause                                               |                   |         |         |                     |                     |
	| unpause | nospam-263701 --log_dir                                                  | nospam-263701     | jenkins | v1.34.0 | 16 Sep 24 10:33 UTC | 16 Sep 24 10:33 UTC |
	|         | /tmp/nospam-263701 unpause                                               |                   |         |         |                     |                     |
	| stop    | nospam-263701 --log_dir                                                  | nospam-263701     | jenkins | v1.34.0 | 16 Sep 24 10:33 UTC | 16 Sep 24 10:33 UTC |
	|         | /tmp/nospam-263701 stop                                                  |                   |         |         |                     |                     |
	| stop    | nospam-263701 --log_dir                                                  | nospam-263701     | jenkins | v1.34.0 | 16 Sep 24 10:33 UTC | 16 Sep 24 10:33 UTC |
	|         | /tmp/nospam-263701 stop                                                  |                   |         |         |                     |                     |
	| stop    | nospam-263701 --log_dir                                                  | nospam-263701     | jenkins | v1.34.0 | 16 Sep 24 10:33 UTC | 16 Sep 24 10:33 UTC |
	|         | /tmp/nospam-263701 stop                                                  |                   |         |         |                     |                     |
	| delete  | -p nospam-263701                                                         | nospam-263701     | jenkins | v1.34.0 | 16 Sep 24 10:33 UTC | 16 Sep 24 10:33 UTC |
	| start   | -p functional-553844                                                     | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:33 UTC | 16 Sep 24 10:34 UTC |
	|         | --memory=4000                                                            |                   |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                   |         |         |                     |                     |
	|         | --wait=all --driver=kvm2                                                 |                   |         |         |                     |                     |
	|         | --container-runtime=crio                                                 |                   |         |         |                     |                     |
	| start   | -p functional-553844                                                     | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:34 UTC | 16 Sep 24 10:35 UTC |
	|         | --alsologtostderr -v=8                                                   |                   |         |         |                     |                     |
	| cache   | functional-553844 cache add                                              | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |         |         |                     |                     |
	| cache   | functional-553844 cache add                                              | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |         |         |                     |                     |
	| cache   | functional-553844 cache add                                              | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | functional-553844 cache add                                              | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	|         | minikube-local-cache-test:functional-553844                              |                   |         |         |                     |                     |
	| cache   | functional-553844 cache delete                                           | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	|         | minikube-local-cache-test:functional-553844                              |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |         |         |                     |                     |
	| cache   | list                                                                     | minikube          | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	| ssh     | functional-553844 ssh sudo                                               | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	|         | crictl images                                                            |                   |         |         |                     |                     |
	| ssh     | functional-553844                                                        | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	|         | ssh sudo crictl rmi                                                      |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| ssh     | functional-553844 ssh                                                    | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC |                     |
	|         | sudo crictl inspecti                                                     |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | functional-553844 cache reload                                           | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	| ssh     | functional-553844 ssh                                                    | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	|         | sudo crictl inspecti                                                     |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| kubectl | functional-553844 kubectl --                                             | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:35 UTC |
	|         | --context functional-553844                                              |                   |         |         |                     |                     |
	|         | get pods                                                                 |                   |         |         |                     |                     |
	| start   | -p functional-553844                                                     | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:35 UTC | 16 Sep 24 10:36 UTC |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                   |         |         |                     |                     |
	|         | --wait=all                                                               |                   |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
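For reference, the last row in the table above is the start that produced the log below; a minimal sketch of the equivalent invocation (flags copied from the table rows, the original creation flags shown as a comment, the minikube binary assumed to be on PATH):

	minikube start -p functional-553844 \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
	  --wait=all
	# original creation, per the first start row above:
	# minikube start -p functional-553844 --memory=4000 --apiserver-port=8441 \
	#   --wait=all --driver=kvm2 --container-runtime=crio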
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:35:42
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:35:42.602736   18525 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:35:42.602961   18525 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:35:42.602964   18525 out.go:358] Setting ErrFile to fd 2...
	I0916 10:35:42.602967   18525 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:35:42.603134   18525 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3851/.minikube/bin
	I0916 10:35:42.603625   18525 out.go:352] Setting JSON to false
	I0916 10:35:42.604487   18525 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1093,"bootTime":1726481850,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:35:42.604573   18525 start.go:139] virtualization: kvm guest
	I0916 10:35:42.606812   18525 out.go:177] * [functional-553844] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:35:42.608453   18525 notify.go:220] Checking for updates...
	I0916 10:35:42.608460   18525 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:35:42.609720   18525 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:35:42.610980   18525 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3851/kubeconfig
	I0916 10:35:42.612026   18525 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 10:35:42.613154   18525 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:35:42.614469   18525 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:35:42.616082   18525 config.go:182] Loaded profile config "functional-553844": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:35:42.616181   18525 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:35:42.616564   18525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:35:42.616592   18525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:35:42.631459   18525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37391
	I0916 10:35:42.631931   18525 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:35:42.632471   18525 main.go:141] libmachine: Using API Version  1
	I0916 10:35:42.632493   18525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:35:42.632799   18525 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:35:42.632949   18525 main.go:141] libmachine: (functional-553844) Calling .DriverName
	I0916 10:35:42.666224   18525 out.go:177] * Using the kvm2 driver based on existing profile
	I0916 10:35:42.667731   18525 start.go:297] selected driver: kvm2
	I0916 10:35:42.667739   18525 start.go:901] validating driver "kvm2" against &{Name:functional-553844 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:functional-553844 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Moun
tGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:35:42.667845   18525 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:35:42.668158   18525 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:35:42.668237   18525 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19651-3851/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0916 10:35:42.683577   18525 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0916 10:35:42.684216   18525 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:35:42.684245   18525 cni.go:84] Creating CNI manager for ""
	I0916 10:35:42.684291   18525 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0916 10:35:42.684354   18525 start.go:340] cluster config:
	{Name:functional-553844 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-553844 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:35:42.684461   18525 iso.go:125] acquiring lock: {Name:mk8165b793e44378487d96b0de120a258f46e187 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:35:42.686264   18525 out.go:177] * Starting "functional-553844" primary control-plane node in "functional-553844" cluster
	I0916 10:35:42.687758   18525 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:35:42.687806   18525 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0916 10:35:42.687813   18525 cache.go:56] Caching tarball of preloaded images
	I0916 10:35:42.687893   18525 preload.go:172] Found /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 10:35:42.687899   18525 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
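The cached tarball referenced above lives on the Jenkins host; a quick, optional check that it is actually present (path copied from the preload lines above):

	ls -lh /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4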
	I0916 10:35:42.687986   18525 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/config.json ...
	I0916 10:35:42.688155   18525 start.go:360] acquireMachinesLock for functional-553844: {Name:mk413037138532b69f77e412c3960796c09316f1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:35:42.688216   18525 start.go:364] duration metric: took 49.309µs to acquireMachinesLock for "functional-553844"
	I0916 10:35:42.688231   18525 start.go:96] Skipping create...Using existing machine configuration
	I0916 10:35:42.688235   18525 fix.go:54] fixHost starting: 
	I0916 10:35:42.688466   18525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:35:42.688492   18525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:35:42.703053   18525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46573
	I0916 10:35:42.703530   18525 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:35:42.704035   18525 main.go:141] libmachine: Using API Version  1
	I0916 10:35:42.704064   18525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:35:42.704371   18525 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:35:42.704542   18525 main.go:141] libmachine: (functional-553844) Calling .DriverName
	I0916 10:35:42.704677   18525 main.go:141] libmachine: (functional-553844) Calling .GetState
	I0916 10:35:42.706051   18525 fix.go:112] recreateIfNeeded on functional-553844: state=Running err=<nil>
	W0916 10:35:42.706062   18525 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 10:35:42.707728   18525 out.go:177] * Updating the running kvm2 "functional-553844" VM ...
	I0916 10:35:42.708861   18525 machine.go:93] provisionDockerMachine start ...
	I0916 10:35:42.708874   18525 main.go:141] libmachine: (functional-553844) Calling .DriverName
	I0916 10:35:42.709063   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHHostname
	I0916 10:35:42.711297   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:42.711619   18525 main.go:141] libmachine: (functional-553844) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3a:6f", ip: ""} in network mk-functional-553844: {Iface:virbr1 ExpiryTime:2024-09-16 11:33:58 +0000 UTC Type:0 Mac:52:54:00:9d:3a:6f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-553844 Clientid:01:52:54:00:9d:3a:6f}
	I0916 10:35:42.711641   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined IP address 192.168.39.230 and MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:42.711812   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHPort
	I0916 10:35:42.711970   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
	I0916 10:35:42.712095   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
	I0916 10:35:42.712241   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHUsername
	I0916 10:35:42.712367   18525 main.go:141] libmachine: Using SSH client type: native
	I0916 10:35:42.712549   18525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0916 10:35:42.712554   18525 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 10:35:42.822279   18525 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-553844
	
	I0916 10:35:42.822297   18525 main.go:141] libmachine: (functional-553844) Calling .GetMachineName
	I0916 10:35:42.822514   18525 buildroot.go:166] provisioning hostname "functional-553844"
	I0916 10:35:42.822541   18525 main.go:141] libmachine: (functional-553844) Calling .GetMachineName
	I0916 10:35:42.822705   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHHostname
	I0916 10:35:42.825390   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:42.825774   18525 main.go:141] libmachine: (functional-553844) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3a:6f", ip: ""} in network mk-functional-553844: {Iface:virbr1 ExpiryTime:2024-09-16 11:33:58 +0000 UTC Type:0 Mac:52:54:00:9d:3a:6f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-553844 Clientid:01:52:54:00:9d:3a:6f}
	I0916 10:35:42.825794   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined IP address 192.168.39.230 and MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:42.825955   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHPort
	I0916 10:35:42.826114   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
	I0916 10:35:42.826244   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
	I0916 10:35:42.826444   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHUsername
	I0916 10:35:42.826605   18525 main.go:141] libmachine: Using SSH client type: native
	I0916 10:35:42.826756   18525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0916 10:35:42.826762   18525 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-553844 && echo "functional-553844" | sudo tee /etc/hostname
	I0916 10:35:42.947055   18525 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-553844
	
	I0916 10:35:42.947086   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHHostname
	I0916 10:35:42.949554   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:42.949872   18525 main.go:141] libmachine: (functional-553844) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3a:6f", ip: ""} in network mk-functional-553844: {Iface:virbr1 ExpiryTime:2024-09-16 11:33:58 +0000 UTC Type:0 Mac:52:54:00:9d:3a:6f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-553844 Clientid:01:52:54:00:9d:3a:6f}
	I0916 10:35:42.949895   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined IP address 192.168.39.230 and MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:42.949977   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHPort
	I0916 10:35:42.950263   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
	I0916 10:35:42.950397   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
	I0916 10:35:42.950516   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHUsername
	I0916 10:35:42.950660   18525 main.go:141] libmachine: Using SSH client type: native
	I0916 10:35:42.950825   18525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0916 10:35:42.950834   18525 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-553844' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-553844/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-553844' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:35:43.057989   18525 main.go:141] libmachine: SSH cmd err, output: <nil>: 
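The hostname script above only rewrites or appends a 127.0.1.1 entry when the hostname is not already mapped; a hedged way to confirm the result inside the VM, using the same "minikube ssh" form the suite uses elsewhere:

	minikube -p functional-553844 ssh -- grep functional-553844 /etc/hosts
	# typically includes:  127.0.1.1 functional-553844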
	I0916 10:35:43.058009   18525 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3851/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3851/.minikube}
	I0916 10:35:43.058034   18525 buildroot.go:174] setting up certificates
	I0916 10:35:43.058041   18525 provision.go:84] configureAuth start
	I0916 10:35:43.058048   18525 main.go:141] libmachine: (functional-553844) Calling .GetMachineName
	I0916 10:35:43.058310   18525 main.go:141] libmachine: (functional-553844) Calling .GetIP
	I0916 10:35:43.060530   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:43.060834   18525 main.go:141] libmachine: (functional-553844) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3a:6f", ip: ""} in network mk-functional-553844: {Iface:virbr1 ExpiryTime:2024-09-16 11:33:58 +0000 UTC Type:0 Mac:52:54:00:9d:3a:6f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-553844 Clientid:01:52:54:00:9d:3a:6f}
	I0916 10:35:43.060857   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined IP address 192.168.39.230 and MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:43.060950   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHHostname
	I0916 10:35:43.063120   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:43.063409   18525 main.go:141] libmachine: (functional-553844) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3a:6f", ip: ""} in network mk-functional-553844: {Iface:virbr1 ExpiryTime:2024-09-16 11:33:58 +0000 UTC Type:0 Mac:52:54:00:9d:3a:6f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-553844 Clientid:01:52:54:00:9d:3a:6f}
	I0916 10:35:43.063432   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined IP address 192.168.39.230 and MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:43.063485   18525 provision.go:143] copyHostCerts
	I0916 10:35:43.063549   18525 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem, removing ...
	I0916 10:35:43.063555   18525 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem
	I0916 10:35:43.063615   18525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem (1082 bytes)
	I0916 10:35:43.063703   18525 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem, removing ...
	I0916 10:35:43.063707   18525 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem
	I0916 10:35:43.063728   18525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem (1123 bytes)
	I0916 10:35:43.063790   18525 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem, removing ...
	I0916 10:35:43.063793   18525 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem
	I0916 10:35:43.063811   18525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem (1679 bytes)
	I0916 10:35:43.063906   18525 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem org=jenkins.functional-553844 san=[127.0.0.1 192.168.39.230 functional-553844 localhost minikube]
	I0916 10:35:43.318125   18525 provision.go:177] copyRemoteCerts
	I0916 10:35:43.318179   18525 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:35:43.318199   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHHostname
	I0916 10:35:43.320675   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:43.320954   18525 main.go:141] libmachine: (functional-553844) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3a:6f", ip: ""} in network mk-functional-553844: {Iface:virbr1 ExpiryTime:2024-09-16 11:33:58 +0000 UTC Type:0 Mac:52:54:00:9d:3a:6f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-553844 Clientid:01:52:54:00:9d:3a:6f}
	I0916 10:35:43.320979   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined IP address 192.168.39.230 and MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:43.321086   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHPort
	I0916 10:35:43.321278   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
	I0916 10:35:43.321405   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHUsername
	I0916 10:35:43.321526   18525 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/functional-553844/id_rsa Username:docker}
	I0916 10:35:43.408363   18525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0916 10:35:43.433926   18525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 10:35:43.459098   18525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
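The server certificate copied above was generated with the SAN list shown in the provision line; inspecting the local copy with openssl is a simple sanity check (path copied from the scp line):

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'
	# per san=[...] above, this should list 127.0.0.1, 192.168.39.230,
	# functional-553844, localhost and minikube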
	I0916 10:35:43.483570   18525 provision.go:87] duration metric: took 425.518643ms to configureAuth
	I0916 10:35:43.483586   18525 buildroot.go:189] setting minikube options for container-runtime
	I0916 10:35:43.483776   18525 config.go:182] Loaded profile config "functional-553844": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:35:43.483836   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHHostname
	I0916 10:35:43.486393   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:43.486676   18525 main.go:141] libmachine: (functional-553844) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3a:6f", ip: ""} in network mk-functional-553844: {Iface:virbr1 ExpiryTime:2024-09-16 11:33:58 +0000 UTC Type:0 Mac:52:54:00:9d:3a:6f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-553844 Clientid:01:52:54:00:9d:3a:6f}
	I0916 10:35:43.486698   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined IP address 192.168.39.230 and MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:43.486844   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHPort
	I0916 10:35:43.487010   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
	I0916 10:35:43.487138   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
	I0916 10:35:43.487238   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHUsername
	I0916 10:35:43.487355   18525 main.go:141] libmachine: Using SSH client type: native
	I0916 10:35:43.487542   18525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0916 10:35:43.487551   18525 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 10:35:49.077005   18525 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 10:35:49.077018   18525 machine.go:96] duration metric: took 6.368149184s to provisionDockerMachine
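The container-runtime option written just above lands in a small sysconfig drop-in; a hedged check of its contents in the guest (path and value copied from the SSH command and its echoed output):

	minikube -p functional-553844 ssh -- cat /etc/sysconfig/crio.minikube
	# expected: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '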
	I0916 10:35:49.077029   18525 start.go:293] postStartSetup for "functional-553844" (driver="kvm2")
	I0916 10:35:49.077041   18525 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:35:49.077060   18525 main.go:141] libmachine: (functional-553844) Calling .DriverName
	I0916 10:35:49.077417   18525 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:35:49.077437   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHHostname
	I0916 10:35:49.080182   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:49.080466   18525 main.go:141] libmachine: (functional-553844) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3a:6f", ip: ""} in network mk-functional-553844: {Iface:virbr1 ExpiryTime:2024-09-16 11:33:58 +0000 UTC Type:0 Mac:52:54:00:9d:3a:6f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-553844 Clientid:01:52:54:00:9d:3a:6f}
	I0916 10:35:49.080480   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined IP address 192.168.39.230 and MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:49.080612   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHPort
	I0916 10:35:49.080806   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
	I0916 10:35:49.080943   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHUsername
	I0916 10:35:49.081100   18525 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/functional-553844/id_rsa Username:docker}
	I0916 10:35:49.164278   18525 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:35:49.168341   18525 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 10:35:49.168356   18525 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3851/.minikube/addons for local assets ...
	I0916 10:35:49.168457   18525 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3851/.minikube/files for local assets ...
	I0916 10:35:49.168550   18525 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem -> 112032.pem in /etc/ssl/certs
	I0916 10:35:49.168630   18525 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/test/nested/copy/11203/hosts -> hosts in /etc/test/nested/copy/11203
	I0916 10:35:49.168671   18525 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/11203
	I0916 10:35:49.178688   18525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem --> /etc/ssl/certs/112032.pem (1708 bytes)
	I0916 10:35:49.203299   18525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/test/nested/copy/11203/hosts --> /etc/test/nested/copy/11203/hosts (40 bytes)
	I0916 10:35:49.227238   18525 start.go:296] duration metric: took 150.19355ms for postStartSetup
	I0916 10:35:49.227270   18525 fix.go:56] duration metric: took 6.5390335s for fixHost
	I0916 10:35:49.227292   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHHostname
	I0916 10:35:49.229721   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:49.230084   18525 main.go:141] libmachine: (functional-553844) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3a:6f", ip: ""} in network mk-functional-553844: {Iface:virbr1 ExpiryTime:2024-09-16 11:33:58 +0000 UTC Type:0 Mac:52:54:00:9d:3a:6f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-553844 Clientid:01:52:54:00:9d:3a:6f}
	I0916 10:35:49.230108   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined IP address 192.168.39.230 and MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:49.230254   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHPort
	I0916 10:35:49.230400   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
	I0916 10:35:49.230525   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
	I0916 10:35:49.230675   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHUsername
	I0916 10:35:49.230824   18525 main.go:141] libmachine: Using SSH client type: native
	I0916 10:35:49.230971   18525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0916 10:35:49.230975   18525 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 10:35:49.337843   18525 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726482949.326826151
	
	I0916 10:35:49.337854   18525 fix.go:216] guest clock: 1726482949.326826151
	I0916 10:35:49.337863   18525 fix.go:229] Guest: 2024-09-16 10:35:49.326826151 +0000 UTC Remote: 2024-09-16 10:35:49.227273795 +0000 UTC m=+6.659405209 (delta=99.552356ms)
	I0916 10:35:49.337905   18525 fix.go:200] guest clock delta is within tolerance: 99.552356ms
	I0916 10:35:49.337909   18525 start.go:83] releasing machines lock for "functional-553844", held for 6.649688194s
	I0916 10:35:49.337930   18525 main.go:141] libmachine: (functional-553844) Calling .DriverName
	I0916 10:35:49.338155   18525 main.go:141] libmachine: (functional-553844) Calling .GetIP
	I0916 10:35:49.340737   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:49.341087   18525 main.go:141] libmachine: (functional-553844) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3a:6f", ip: ""} in network mk-functional-553844: {Iface:virbr1 ExpiryTime:2024-09-16 11:33:58 +0000 UTC Type:0 Mac:52:54:00:9d:3a:6f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-553844 Clientid:01:52:54:00:9d:3a:6f}
	I0916 10:35:49.341111   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined IP address 192.168.39.230 and MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:49.341237   18525 main.go:141] libmachine: (functional-553844) Calling .DriverName
	I0916 10:35:49.341760   18525 main.go:141] libmachine: (functional-553844) Calling .DriverName
	I0916 10:35:49.341890   18525 main.go:141] libmachine: (functional-553844) Calling .DriverName
	I0916 10:35:49.341938   18525 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:35:49.341973   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHHostname
	I0916 10:35:49.342020   18525 ssh_runner.go:195] Run: cat /version.json
	I0916 10:35:49.342027   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHHostname
	I0916 10:35:49.344444   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:49.344803   18525 main.go:141] libmachine: (functional-553844) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3a:6f", ip: ""} in network mk-functional-553844: {Iface:virbr1 ExpiryTime:2024-09-16 11:33:58 +0000 UTC Type:0 Mac:52:54:00:9d:3a:6f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-553844 Clientid:01:52:54:00:9d:3a:6f}
	I0916 10:35:49.344824   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined IP address 192.168.39.230 and MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:49.344842   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:49.344991   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHPort
	I0916 10:35:49.345141   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
	I0916 10:35:49.345260   18525 main.go:141] libmachine: (functional-553844) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3a:6f", ip: ""} in network mk-functional-553844: {Iface:virbr1 ExpiryTime:2024-09-16 11:33:58 +0000 UTC Type:0 Mac:52:54:00:9d:3a:6f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-553844 Clientid:01:52:54:00:9d:3a:6f}
	I0916 10:35:49.345273   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined IP address 192.168.39.230 and MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:49.345292   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHUsername
	I0916 10:35:49.345448   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHPort
	I0916 10:35:49.345461   18525 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/functional-553844/id_rsa Username:docker}
	I0916 10:35:49.345608   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
	I0916 10:35:49.345747   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHUsername
	I0916 10:35:49.345877   18525 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/functional-553844/id_rsa Username:docker}
	I0916 10:35:49.443002   18525 ssh_runner.go:195] Run: systemctl --version
	I0916 10:35:49.449614   18525 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 10:35:49.596269   18525 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0916 10:35:49.602475   18525 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 10:35:49.602526   18525 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:35:49.611756   18525 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0916 10:35:49.611766   18525 start.go:495] detecting cgroup driver to use...
	I0916 10:35:49.611824   18525 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 10:35:49.628855   18525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 10:35:49.642697   18525 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:35:49.642752   18525 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:35:49.656384   18525 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:35:49.669903   18525 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:35:49.802721   18525 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:35:49.941918   18525 docker.go:233] disabling docker service ...
	I0916 10:35:49.941969   18525 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:35:49.958790   18525 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:35:49.973275   18525 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:35:50.101548   18525 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:35:50.229058   18525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:35:50.243779   18525 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:35:50.264191   18525 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 10:35:50.264234   18525 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:35:50.274752   18525 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 10:35:50.274787   18525 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:35:50.285273   18525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:35:50.295681   18525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:35:50.306207   18525 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:35:50.316754   18525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:35:50.326994   18525 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:35:50.338261   18525 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:35:50.348587   18525 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:35:50.358102   18525 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:35:50.367334   18525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:35:50.494296   18525 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 10:35:57.749446   18525 ssh_runner.go:235] Completed: sudo systemctl restart crio: (7.255125663s)
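With CRI-O restarted, the sed edits applied above can be confirmed by grepping the drop-in they modify (keys, values and path copied from those commands):

	minikube -p functional-553844 ssh -- \
	  "sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf"
	# expected, per the edits above:
	#   pause_image = "registry.k8s.io/pause:3.10"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",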
	I0916 10:35:57.749465   18525 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 10:35:57.749513   18525 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 10:35:57.754558   18525 start.go:563] Will wait 60s for crictl version
	I0916 10:35:57.754608   18525 ssh_runner.go:195] Run: which crictl
	I0916 10:35:57.758591   18525 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:35:57.797435   18525 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0916 10:35:57.797514   18525 ssh_runner.go:195] Run: crio --version
	I0916 10:35:57.826212   18525 ssh_runner.go:195] Run: crio --version
	I0916 10:35:57.857475   18525 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0916 10:35:57.858682   18525 main.go:141] libmachine: (functional-553844) Calling .GetIP
	I0916 10:35:57.861189   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:57.861453   18525 main.go:141] libmachine: (functional-553844) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3a:6f", ip: ""} in network mk-functional-553844: {Iface:virbr1 ExpiryTime:2024-09-16 11:33:58 +0000 UTC Type:0 Mac:52:54:00:9d:3a:6f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-553844 Clientid:01:52:54:00:9d:3a:6f}
	I0916 10:35:57.861474   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined IP address 192.168.39.230 and MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:57.861620   18525 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0916 10:35:57.867598   18525 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0916 10:35:57.868983   18525 kubeadm.go:883] updating cluster {Name:functional-553844 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.1 ClusterName:functional-553844 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false Mount
String:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 10:35:57.869107   18525 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:35:57.869177   18525 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:35:57.914399   18525 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 10:35:57.914408   18525 crio.go:433] Images already preloaded, skipping extraction
	I0916 10:35:57.914450   18525 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:35:57.949560   18525 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 10:35:57.949570   18525 cache_images.go:84] Images are preloaded, skipping loading
	I0916 10:35:57.949575   18525 kubeadm.go:934] updating node { 192.168.39.230 8441 v1.31.1 crio true true} ...
	I0916 10:35:57.949666   18525 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-553844 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.230
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:functional-553844 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 10:35:57.949729   18525 ssh_runner.go:195] Run: crio config
	I0916 10:35:57.995982   18525 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0916 10:35:57.996009   18525 cni.go:84] Creating CNI manager for ""
	I0916 10:35:57.996022   18525 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0916 10:35:57.996030   18525 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 10:35:57.996057   18525 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.230 APIServerPort:8441 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-553844 NodeName:functional-553844 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.230"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.230 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigO
pts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 10:35:57.996174   18525 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.230
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-553844"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.230
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.230"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
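Once kubeadm applies the config above, the custom admission plugin should surface in the apiserver static pod under the staticPodPath it declares; a hedged check (the kube-apiserver.yaml filename is the kubeadm default and is assumed here, it is not shown in this log):

	minikube -p functional-553844 ssh -- \
	  "sudo grep enable-admission-plugins /etc/kubernetes/manifests/kube-apiserver.yaml"
	# expected: --enable-admission-plugins=NamespaceAutoProvision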
	
	I0916 10:35:57.996229   18525 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:35:58.006808   18525 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:35:58.006895   18525 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 10:35:58.016928   18525 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0916 10:35:58.034395   18525 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
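The two files staged above are the kubelet kubeadm drop-in and the kubelet unit itself (the unit text appears earlier in this log); a hedged way to view the merged unit on the node:

	minikube -p functional-553844 ssh -- "systemctl cat kubelet"
	# the drop-in at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf should show the
	# ExecStart line with --hostname-override=functional-553844 and --node-ip=192.168.39.230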
	I0916 10:35:58.051467   18525 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2011 bytes)
	I0916 10:35:58.068995   18525 ssh_runner.go:195] Run: grep 192.168.39.230	control-plane.minikube.internal$ /etc/hosts
	I0916 10:35:58.072954   18525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:35:58.201848   18525 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:35:58.217243   18525 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844 for IP: 192.168.39.230
	I0916 10:35:58.217256   18525 certs.go:194] generating shared ca certs ...
	I0916 10:35:58.217271   18525 certs.go:226] acquiring lock for ca certs: {Name:mkc5eb18b90c7d501da61b378999fd65b785fd5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:35:58.217440   18525 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key
	I0916 10:35:58.217483   18525 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key
	I0916 10:35:58.217490   18525 certs.go:256] generating profile certs ...
	I0916 10:35:58.217589   18525 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/client.key
	I0916 10:35:58.217652   18525 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/apiserver.key.7b9f73b3
	I0916 10:35:58.217696   18525 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/proxy-client.key
	I0916 10:35:58.217831   18525 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem (1338 bytes)
	W0916 10:35:58.217868   18525 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203_empty.pem, impossibly tiny 0 bytes
	I0916 10:35:58.217877   18525 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:35:58.217903   18525 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem (1082 bytes)
	I0916 10:35:58.217930   18525 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:35:58.217957   18525 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem (1679 bytes)
	I0916 10:35:58.218005   18525 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem (1708 bytes)
	I0916 10:35:58.218755   18525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:35:58.243657   18525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:35:58.267838   18525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:35:58.291555   18525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:35:58.315510   18525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0916 10:35:58.339081   18525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 10:35:58.362662   18525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:35:58.386270   18525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 10:35:58.410573   18525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:35:58.434749   18525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem --> /usr/share/ca-certificates/11203.pem (1338 bytes)
	I0916 10:35:58.459501   18525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem --> /usr/share/ca-certificates/112032.pem (1708 bytes)
	I0916 10:35:58.482757   18525 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 10:35:58.499985   18525 ssh_runner.go:195] Run: openssl version
	I0916 10:35:58.505649   18525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:35:58.516720   18525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:35:58.521314   18525 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:35:58.521366   18525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:35:58.527133   18525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 10:35:58.537092   18525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11203.pem && ln -fs /usr/share/ca-certificates/11203.pem /etc/ssl/certs/11203.pem"
	I0916 10:35:58.548863   18525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11203.pem
	I0916 10:35:58.553739   18525 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11203.pem
	I0916 10:35:58.553789   18525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11203.pem
	I0916 10:35:58.559937   18525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11203.pem /etc/ssl/certs/51391683.0"
	I0916 10:35:58.570077   18525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112032.pem && ln -fs /usr/share/ca-certificates/112032.pem /etc/ssl/certs/112032.pem"
	I0916 10:35:58.581619   18525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112032.pem
	I0916 10:35:58.586334   18525 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112032.pem
	I0916 10:35:58.586385   18525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112032.pem
	I0916 10:35:58.592259   18525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112032.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 10:35:58.602417   18525 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:35:58.607018   18525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0916 10:35:58.612758   18525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0916 10:35:58.618471   18525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0916 10:35:58.623983   18525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0916 10:35:58.629681   18525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0916 10:35:58.635363   18525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0916 10:35:58.640927   18525 kubeadm.go:392] StartCluster: {Name:functional-553844 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.1 ClusterName:functional-553844 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountStr
ing:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:35:58.641024   18525 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 10:35:58.641097   18525 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 10:35:58.678179   18525 cri.go:89] found id: "c9566037419faa5d15b1a25d437bab469eb973d38022593dd1fcea875b372030"
	I0916 10:35:58.678193   18525 cri.go:89] found id: "7b4648b5566f070359fd2b71b0a6be370ac0bf268f558065e56f6c081c9da147"
	I0916 10:35:58.678197   18525 cri.go:89] found id: "a8a2455326fe0510166f5919a7ba808e002c9e7e6f6bac0073ebcdf2617d2c12"
	I0916 10:35:58.678200   18525 cri.go:89] found id: "8addedc5b3b7251bda1891ec03e7b558acf7e48a679719cc2d0eb9af89051324"
	I0916 10:35:58.678203   18525 cri.go:89] found id: "11c7df787d684e9cc5ca8f6cf633ac9055d7ef72c0a76d54b25fd4a3d62f7b02"
	I0916 10:35:58.678206   18525 cri.go:89] found id: "5ef8ee89662fcae36d3705fd7eef6e5e8e8ed2765a0c899ea2692bc5e55770bb"
	I0916 10:35:58.678209   18525 cri.go:89] found id: "dda8bc32e425e06c63a2ccb84bdd071dc515520e554e6e3a5a8b376e9a65c15a"
	I0916 10:35:58.678212   18525 cri.go:89] found id: "3e06948fb7d78a484090a08d9f88a0d72c4998279675de0ea7e60d51401d789c"
	I0916 10:35:58.678214   18525 cri.go:89] found id: "a3fe318aca7e7a437de0e19d52ce02314908224735a91ac782c8f4ee933b9539"
	I0916 10:35:58.678221   18525 cri.go:89] found id: "29f56fdf2e13c8c028dc80b03b1db0ee8da7289244b3368f9a6e8716db213d1e"
	I0916 10:35:58.678223   18525 cri.go:89] found id: "0718da2983026c5d88757cc81f8d9db82763d8eaec64c8089b706fcdc15d2866"
	I0916 10:35:58.678224   18525 cri.go:89] found id: "e2067f72690f60f2753b6b33ceaae6c8647431c4c0a6055c0943dffa6a611621"
	I0916 10:35:58.678226   18525 cri.go:89] found id: "665e5ce6ab7a5a79da4635071094125712865b62fcd581daf2db7fff5bafce8a"
	I0916 10:35:58.678228   18525 cri.go:89] found id: "84edb04959b2d1ddce7d4879036071b77ee39eb9f0e5e90edc6fbb843efd2515"
	I0916 10:35:58.678230   18525 cri.go:89] found id: ""
	I0916 10:35:58.678271   18525 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Sep 16 10:36:22 functional-553844 crio[4747]: time="2024-09-16 10:36:22.839871979Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482982839849390,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157170,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=231edee8-2ca8-40d4-9714-95ed1747a3f9 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:36:22 functional-553844 crio[4747]: time="2024-09-16 10:36:22.840349378Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6501b1fd-0648-4cfe-b414-c108bc3971d3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:36:22 functional-553844 crio[4747]: time="2024-09-16 10:36:22.840435116Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6501b1fd-0648-4cfe-b414-c108bc3971d3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:36:22 functional-553844 crio[4747]: time="2024-09-16 10:36:22.840712335Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:11b04a7db79238496dd7e10a87ed4228200fc6950497535ac76465928dced22a,PodSandboxId:42c99506917bdb547de481b6b2b3da32439a0a4396f956f8bb3b1dcb33e0d1e9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726482964931520126,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ntnpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0dcfd13-b1bc-45ef-9800-c98d2063bd43,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6cef4575c2c36cab82d9941cc254a3c3977cd977294bd11ce7e392d1ed1ba87,PodSandboxId:b5b2cd43518619d818dfcf93e0e524dd8b0e680615d574fe46c92af79a3e1a44,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726482964645653251,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8d5zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 7709f753-5ea7-43c8-9573-107c8507e92b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:410bd23d1eb3a430c5d377563c819cc6316d348e66c37cda9c98f030584522d1,PodSandboxId:66c3c1fc355f337c0301a8a61ebd1b9a264940d27de9e5ab8ce3e6d9ff23f695,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726482964572647163,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41
228d6-b7ff-4315-b9c5-05b5cc4d0acd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:161c7c3a6dbc9f43b2edc204c828f8b7b5673da629beae63db044d512333e2ff,PodSandboxId:1cf845fd98fb9cec148aebce7960a866cf0aa9de0bbccc2479d3f8356e0402a1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726482960901308832,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b392e106920b290edb060cbb3942770e,},Annotat
ions:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:281ad6489fa866a8728b85bb6026eaba03f414f1541e54e8bb20b881d74c5986,PodSandboxId:30d387489b797f7f610fee5a80ba390e392d599bf0fcae9adff2ab82e5282aff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726482960907328332,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e9406d783b81f1f83bb9b03dd50757a,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9f67c6f5bac2b4d892ff7cea8997f9b3637e462d67156858bf6b4bc4872dbb8,PodSandboxId:7ff3b4db4c3a1ec5a815fbd8fb33ce885ada6ab25ca592e100af16decad6e364,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726482960864170254,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ba1ce2146f556353256cee766fb22aa,},Annota
tions:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40e128caccd10da3c2b236dc916c1ea036d584c77141e639345656d122c4edf5,PodSandboxId:4f30e9290df9fed7dc156600cc22a14550577fc179ba762933d7ddb98b54f18d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726482960795838086,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a02ea4105f59739cf4b87fcb1443f22,},Annotations:map[str
ing]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9566037419faa5d15b1a25d437bab469eb973d38022593dd1fcea875b372030,PodSandboxId:224c8313d2a4b81f7902acab9a95c4674bb885601e7d6c432c194d4704f68448,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726482907910102968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e9406d783b81f1f83bb9b03dd50757a,},Annotations:map[string]string{io.
kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b4648b5566f070359fd2b71b0a6be370ac0bf268f558065e56f6c081c9da147,PodSandboxId:786e02c9f268ffa4631ec0765f6e369ee8084dbb9f6f0ceb206328ad1483a95b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726482907861534544,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ba1ce2146f556353256cee766fb22aa,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8a2455326fe0510166f5919a7ba808e002c9e7e6f6bac0073ebcdf2617d2c12,PodSandboxId:f630bd7b31a9986a64781d12465cbfb94677fffd47ee676d73cb45631f7bb0b1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726482907858407120,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cf351cdb4e05fb19a16881fc8f9a8bc,},Annotations:map[string]string{io.k
ubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ef8ee89662fcae36d3705fd7eef6e5e8e8ed2765a0c899ea2692bc5e55770bb,PodSandboxId:795a8e1b509b3194f41662edcf7b963b352e10b9d3a0b2b7bc09bb8c879e6c3c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726482895162476253,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8d5zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7709f753-5ea7-43c8-9573-107c8507e92b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59
,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c7df787d684e9cc5ca8f6cf633ac9055d7ef72c0a76d54b25fd4a3d62f7b02,PodSandboxId:f234b24619f341b5993ad8541868405403fc798ed93fed40f3108616a4659944,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726482895179848533,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41228d6-b7ff-4315-b9c5-05b5cc4d0acd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8addedc5b3b7251bda1891ec03e7b558acf7e48a679719cc2d0eb9af89051324,PodSandboxId:5de6db3341a3523f45534d3d25f38a36a50d8a1f094c79ff3c7afffbde2686bb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726482895774247566,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ntnpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0dcfd13-b1bc-45ef-9800-c98d2063bd43,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"
dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dda8bc32e425e06c63a2ccb84bdd071dc515520e554e6e3a5a8b376e9a65c15a,PodSandboxId:b212b903ed97c57b27e7215769435327df006237ec57bc4233e178bfb5d746f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726482895106900933,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553844,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: b392e106920b290edb060cbb3942770e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6501b1fd-0648-4cfe-b414-c108bc3971d3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:36:22 functional-553844 crio[4747]: time="2024-09-16 10:36:22.884359016Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f3b5a57b-d17f-4282-bc9c-393b33b4fe03 name=/runtime.v1.RuntimeService/Version
	Sep 16 10:36:22 functional-553844 crio[4747]: time="2024-09-16 10:36:22.884435824Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f3b5a57b-d17f-4282-bc9c-393b33b4fe03 name=/runtime.v1.RuntimeService/Version
	Sep 16 10:36:22 functional-553844 crio[4747]: time="2024-09-16 10:36:22.885951894Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=49c9af03-7dd8-4590-8d42-87762bed41aa name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:36:22 functional-553844 crio[4747]: time="2024-09-16 10:36:22.886686055Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482982886659546,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157170,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=49c9af03-7dd8-4590-8d42-87762bed41aa name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:36:22 functional-553844 crio[4747]: time="2024-09-16 10:36:22.887341250Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aa4094e9-b380-43f0-afae-5e3f2486b2af name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:36:22 functional-553844 crio[4747]: time="2024-09-16 10:36:22.887394902Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aa4094e9-b380-43f0-afae-5e3f2486b2af name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:36:22 functional-553844 crio[4747]: time="2024-09-16 10:36:22.887656395Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:11b04a7db79238496dd7e10a87ed4228200fc6950497535ac76465928dced22a,PodSandboxId:42c99506917bdb547de481b6b2b3da32439a0a4396f956f8bb3b1dcb33e0d1e9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726482964931520126,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ntnpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0dcfd13-b1bc-45ef-9800-c98d2063bd43,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6cef4575c2c36cab82d9941cc254a3c3977cd977294bd11ce7e392d1ed1ba87,PodSandboxId:b5b2cd43518619d818dfcf93e0e524dd8b0e680615d574fe46c92af79a3e1a44,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726482964645653251,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8d5zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 7709f753-5ea7-43c8-9573-107c8507e92b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:410bd23d1eb3a430c5d377563c819cc6316d348e66c37cda9c98f030584522d1,PodSandboxId:66c3c1fc355f337c0301a8a61ebd1b9a264940d27de9e5ab8ce3e6d9ff23f695,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726482964572647163,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41
228d6-b7ff-4315-b9c5-05b5cc4d0acd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:161c7c3a6dbc9f43b2edc204c828f8b7b5673da629beae63db044d512333e2ff,PodSandboxId:1cf845fd98fb9cec148aebce7960a866cf0aa9de0bbccc2479d3f8356e0402a1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726482960901308832,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b392e106920b290edb060cbb3942770e,},Annotat
ions:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:281ad6489fa866a8728b85bb6026eaba03f414f1541e54e8bb20b881d74c5986,PodSandboxId:30d387489b797f7f610fee5a80ba390e392d599bf0fcae9adff2ab82e5282aff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726482960907328332,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e9406d783b81f1f83bb9b03dd50757a,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9f67c6f5bac2b4d892ff7cea8997f9b3637e462d67156858bf6b4bc4872dbb8,PodSandboxId:7ff3b4db4c3a1ec5a815fbd8fb33ce885ada6ab25ca592e100af16decad6e364,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726482960864170254,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ba1ce2146f556353256cee766fb22aa,},Annota
tions:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40e128caccd10da3c2b236dc916c1ea036d584c77141e639345656d122c4edf5,PodSandboxId:4f30e9290df9fed7dc156600cc22a14550577fc179ba762933d7ddb98b54f18d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726482960795838086,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a02ea4105f59739cf4b87fcb1443f22,},Annotations:map[str
ing]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9566037419faa5d15b1a25d437bab469eb973d38022593dd1fcea875b372030,PodSandboxId:224c8313d2a4b81f7902acab9a95c4674bb885601e7d6c432c194d4704f68448,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726482907910102968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e9406d783b81f1f83bb9b03dd50757a,},Annotations:map[string]string{io.
kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b4648b5566f070359fd2b71b0a6be370ac0bf268f558065e56f6c081c9da147,PodSandboxId:786e02c9f268ffa4631ec0765f6e369ee8084dbb9f6f0ceb206328ad1483a95b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726482907861534544,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ba1ce2146f556353256cee766fb22aa,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8a2455326fe0510166f5919a7ba808e002c9e7e6f6bac0073ebcdf2617d2c12,PodSandboxId:f630bd7b31a9986a64781d12465cbfb94677fffd47ee676d73cb45631f7bb0b1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726482907858407120,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cf351cdb4e05fb19a16881fc8f9a8bc,},Annotations:map[string]string{io.k
ubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ef8ee89662fcae36d3705fd7eef6e5e8e8ed2765a0c899ea2692bc5e55770bb,PodSandboxId:795a8e1b509b3194f41662edcf7b963b352e10b9d3a0b2b7bc09bb8c879e6c3c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726482895162476253,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8d5zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7709f753-5ea7-43c8-9573-107c8507e92b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59
,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c7df787d684e9cc5ca8f6cf633ac9055d7ef72c0a76d54b25fd4a3d62f7b02,PodSandboxId:f234b24619f341b5993ad8541868405403fc798ed93fed40f3108616a4659944,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726482895179848533,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41228d6-b7ff-4315-b9c5-05b5cc4d0acd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8addedc5b3b7251bda1891ec03e7b558acf7e48a679719cc2d0eb9af89051324,PodSandboxId:5de6db3341a3523f45534d3d25f38a36a50d8a1f094c79ff3c7afffbde2686bb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726482895774247566,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ntnpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0dcfd13-b1bc-45ef-9800-c98d2063bd43,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"
dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dda8bc32e425e06c63a2ccb84bdd071dc515520e554e6e3a5a8b376e9a65c15a,PodSandboxId:b212b903ed97c57b27e7215769435327df006237ec57bc4233e178bfb5d746f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726482895106900933,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553844,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: b392e106920b290edb060cbb3942770e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=aa4094e9-b380-43f0-afae-5e3f2486b2af name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:36:22 functional-553844 crio[4747]: time="2024-09-16 10:36:22.928820256Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d3ae557a-c1a1-4865-b4ee-66561e6e6c6e name=/runtime.v1.RuntimeService/Version
	Sep 16 10:36:22 functional-553844 crio[4747]: time="2024-09-16 10:36:22.928892136Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d3ae557a-c1a1-4865-b4ee-66561e6e6c6e name=/runtime.v1.RuntimeService/Version
	Sep 16 10:36:22 functional-553844 crio[4747]: time="2024-09-16 10:36:22.930452568Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bc76f86b-1999-48b4-9e43-18909f4fab1b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:36:22 functional-553844 crio[4747]: time="2024-09-16 10:36:22.931133538Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482982931109027,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157170,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bc76f86b-1999-48b4-9e43-18909f4fab1b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:36:22 functional-553844 crio[4747]: time="2024-09-16 10:36:22.931761253Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aae3e97a-2b0e-49c3-b80a-6e1518b0ff05 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:36:22 functional-553844 crio[4747]: time="2024-09-16 10:36:22.931811942Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aae3e97a-2b0e-49c3-b80a-6e1518b0ff05 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:36:22 functional-553844 crio[4747]: time="2024-09-16 10:36:22.932249448Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:11b04a7db79238496dd7e10a87ed4228200fc6950497535ac76465928dced22a,PodSandboxId:42c99506917bdb547de481b6b2b3da32439a0a4396f956f8bb3b1dcb33e0d1e9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726482964931520126,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ntnpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0dcfd13-b1bc-45ef-9800-c98d2063bd43,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6cef4575c2c36cab82d9941cc254a3c3977cd977294bd11ce7e392d1ed1ba87,PodSandboxId:b5b2cd43518619d818dfcf93e0e524dd8b0e680615d574fe46c92af79a3e1a44,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726482964645653251,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8d5zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 7709f753-5ea7-43c8-9573-107c8507e92b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:410bd23d1eb3a430c5d377563c819cc6316d348e66c37cda9c98f030584522d1,PodSandboxId:66c3c1fc355f337c0301a8a61ebd1b9a264940d27de9e5ab8ce3e6d9ff23f695,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726482964572647163,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41
228d6-b7ff-4315-b9c5-05b5cc4d0acd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:161c7c3a6dbc9f43b2edc204c828f8b7b5673da629beae63db044d512333e2ff,PodSandboxId:1cf845fd98fb9cec148aebce7960a866cf0aa9de0bbccc2479d3f8356e0402a1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726482960901308832,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b392e106920b290edb060cbb3942770e,},Annotat
ions:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:281ad6489fa866a8728b85bb6026eaba03f414f1541e54e8bb20b881d74c5986,PodSandboxId:30d387489b797f7f610fee5a80ba390e392d599bf0fcae9adff2ab82e5282aff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726482960907328332,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e9406d783b81f1f83bb9b03dd50757a,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9f67c6f5bac2b4d892ff7cea8997f9b3637e462d67156858bf6b4bc4872dbb8,PodSandboxId:7ff3b4db4c3a1ec5a815fbd8fb33ce885ada6ab25ca592e100af16decad6e364,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726482960864170254,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ba1ce2146f556353256cee766fb22aa,},Annota
tions:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40e128caccd10da3c2b236dc916c1ea036d584c77141e639345656d122c4edf5,PodSandboxId:4f30e9290df9fed7dc156600cc22a14550577fc179ba762933d7ddb98b54f18d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726482960795838086,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a02ea4105f59739cf4b87fcb1443f22,},Annotations:map[str
ing]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9566037419faa5d15b1a25d437bab469eb973d38022593dd1fcea875b372030,PodSandboxId:224c8313d2a4b81f7902acab9a95c4674bb885601e7d6c432c194d4704f68448,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726482907910102968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e9406d783b81f1f83bb9b03dd50757a,},Annotations:map[string]string{io.
kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b4648b5566f070359fd2b71b0a6be370ac0bf268f558065e56f6c081c9da147,PodSandboxId:786e02c9f268ffa4631ec0765f6e369ee8084dbb9f6f0ceb206328ad1483a95b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726482907861534544,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ba1ce2146f556353256cee766fb22aa,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8a2455326fe0510166f5919a7ba808e002c9e7e6f6bac0073ebcdf2617d2c12,PodSandboxId:f630bd7b31a9986a64781d12465cbfb94677fffd47ee676d73cb45631f7bb0b1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726482907858407120,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cf351cdb4e05fb19a16881fc8f9a8bc,},Annotations:map[string]string{io.k
ubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ef8ee89662fcae36d3705fd7eef6e5e8e8ed2765a0c899ea2692bc5e55770bb,PodSandboxId:795a8e1b509b3194f41662edcf7b963b352e10b9d3a0b2b7bc09bb8c879e6c3c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726482895162476253,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8d5zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7709f753-5ea7-43c8-9573-107c8507e92b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59
,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c7df787d684e9cc5ca8f6cf633ac9055d7ef72c0a76d54b25fd4a3d62f7b02,PodSandboxId:f234b24619f341b5993ad8541868405403fc798ed93fed40f3108616a4659944,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726482895179848533,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41228d6-b7ff-4315-b9c5-05b5cc4d0acd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8addedc5b3b7251bda1891ec03e7b558acf7e48a679719cc2d0eb9af89051324,PodSandboxId:5de6db3341a3523f45534d3d25f38a36a50d8a1f094c79ff3c7afffbde2686bb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726482895774247566,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ntnpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0dcfd13-b1bc-45ef-9800-c98d2063bd43,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"
dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dda8bc32e425e06c63a2ccb84bdd071dc515520e554e6e3a5a8b376e9a65c15a,PodSandboxId:b212b903ed97c57b27e7215769435327df006237ec57bc4233e178bfb5d746f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726482895106900933,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553844,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: b392e106920b290edb060cbb3942770e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=aae3e97a-2b0e-49c3-b80a-6e1518b0ff05 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:36:22 functional-553844 crio[4747]: time="2024-09-16 10:36:22.967935193Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=150d7afa-2b88-4322-8e23-d796819da515 name=/runtime.v1.RuntimeService/Version
	Sep 16 10:36:22 functional-553844 crio[4747]: time="2024-09-16 10:36:22.968019033Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=150d7afa-2b88-4322-8e23-d796819da515 name=/runtime.v1.RuntimeService/Version
	Sep 16 10:36:22 functional-553844 crio[4747]: time="2024-09-16 10:36:22.969530459Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=204e44c5-0275-43ea-9d54-8e496e39f659 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:36:22 functional-553844 crio[4747]: time="2024-09-16 10:36:22.969967072Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482982969945467,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157170,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=204e44c5-0275-43ea-9d54-8e496e39f659 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:36:22 functional-553844 crio[4747]: time="2024-09-16 10:36:22.970613546Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=14213cfc-ad23-4793-b12a-1a49dcb277ea name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:36:22 functional-553844 crio[4747]: time="2024-09-16 10:36:22.970666763Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=14213cfc-ad23-4793-b12a-1a49dcb277ea name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:36:22 functional-553844 crio[4747]: time="2024-09-16 10:36:22.970970788Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:11b04a7db79238496dd7e10a87ed4228200fc6950497535ac76465928dced22a,PodSandboxId:42c99506917bdb547de481b6b2b3da32439a0a4396f956f8bb3b1dcb33e0d1e9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726482964931520126,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ntnpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0dcfd13-b1bc-45ef-9800-c98d2063bd43,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6cef4575c2c36cab82d9941cc254a3c3977cd977294bd11ce7e392d1ed1ba87,PodSandboxId:b5b2cd43518619d818dfcf93e0e524dd8b0e680615d574fe46c92af79a3e1a44,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726482964645653251,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8d5zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 7709f753-5ea7-43c8-9573-107c8507e92b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:410bd23d1eb3a430c5d377563c819cc6316d348e66c37cda9c98f030584522d1,PodSandboxId:66c3c1fc355f337c0301a8a61ebd1b9a264940d27de9e5ab8ce3e6d9ff23f695,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726482964572647163,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41
228d6-b7ff-4315-b9c5-05b5cc4d0acd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:161c7c3a6dbc9f43b2edc204c828f8b7b5673da629beae63db044d512333e2ff,PodSandboxId:1cf845fd98fb9cec148aebce7960a866cf0aa9de0bbccc2479d3f8356e0402a1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726482960901308832,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b392e106920b290edb060cbb3942770e,},Annotat
ions:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:281ad6489fa866a8728b85bb6026eaba03f414f1541e54e8bb20b881d74c5986,PodSandboxId:30d387489b797f7f610fee5a80ba390e392d599bf0fcae9adff2ab82e5282aff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726482960907328332,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e9406d783b81f1f83bb9b03dd50757a,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9f67c6f5bac2b4d892ff7cea8997f9b3637e462d67156858bf6b4bc4872dbb8,PodSandboxId:7ff3b4db4c3a1ec5a815fbd8fb33ce885ada6ab25ca592e100af16decad6e364,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726482960864170254,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ba1ce2146f556353256cee766fb22aa,},Annota
tions:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40e128caccd10da3c2b236dc916c1ea036d584c77141e639345656d122c4edf5,PodSandboxId:4f30e9290df9fed7dc156600cc22a14550577fc179ba762933d7ddb98b54f18d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726482960795838086,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a02ea4105f59739cf4b87fcb1443f22,},Annotations:map[str
ing]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9566037419faa5d15b1a25d437bab469eb973d38022593dd1fcea875b372030,PodSandboxId:224c8313d2a4b81f7902acab9a95c4674bb885601e7d6c432c194d4704f68448,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726482907910102968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e9406d783b81f1f83bb9b03dd50757a,},Annotations:map[string]string{io.
kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b4648b5566f070359fd2b71b0a6be370ac0bf268f558065e56f6c081c9da147,PodSandboxId:786e02c9f268ffa4631ec0765f6e369ee8084dbb9f6f0ceb206328ad1483a95b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726482907861534544,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ba1ce2146f556353256cee766fb22aa,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8a2455326fe0510166f5919a7ba808e002c9e7e6f6bac0073ebcdf2617d2c12,PodSandboxId:f630bd7b31a9986a64781d12465cbfb94677fffd47ee676d73cb45631f7bb0b1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726482907858407120,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cf351cdb4e05fb19a16881fc8f9a8bc,},Annotations:map[string]string{io.k
ubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ef8ee89662fcae36d3705fd7eef6e5e8e8ed2765a0c899ea2692bc5e55770bb,PodSandboxId:795a8e1b509b3194f41662edcf7b963b352e10b9d3a0b2b7bc09bb8c879e6c3c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726482895162476253,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8d5zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7709f753-5ea7-43c8-9573-107c8507e92b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59
,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c7df787d684e9cc5ca8f6cf633ac9055d7ef72c0a76d54b25fd4a3d62f7b02,PodSandboxId:f234b24619f341b5993ad8541868405403fc798ed93fed40f3108616a4659944,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726482895179848533,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41228d6-b7ff-4315-b9c5-05b5cc4d0acd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8addedc5b3b7251bda1891ec03e7b558acf7e48a679719cc2d0eb9af89051324,PodSandboxId:5de6db3341a3523f45534d3d25f38a36a50d8a1f094c79ff3c7afffbde2686bb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726482895774247566,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ntnpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0dcfd13-b1bc-45ef-9800-c98d2063bd43,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"
dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dda8bc32e425e06c63a2ccb84bdd071dc515520e554e6e3a5a8b376e9a65c15a,PodSandboxId:b212b903ed97c57b27e7215769435327df006237ec57bc4233e178bfb5d746f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726482895106900933,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553844,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: b392e106920b290edb060cbb3942770e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=14213cfc-ad23-4793-b12a-1a49dcb277ea name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	11b04a7db7923       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   18 seconds ago       Running             coredns                   2                   42c99506917bd       coredns-7c65d6cfc9-ntnpc
	f6cef4575c2c3       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   18 seconds ago       Running             kube-proxy                2                   b5b2cd4351861       kube-proxy-8d5zp
	410bd23d1eb3a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   18 seconds ago       Running             storage-provisioner       2                   66c3c1fc355f3       storage-provisioner
	281ad6489fa86       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   22 seconds ago       Running             kube-scheduler            3                   30d387489b797       kube-scheduler-functional-553844
	161c7c3a6dbc9       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   22 seconds ago       Running             etcd                      2                   1cf845fd98fb9       etcd-functional-553844
	c9f67c6f5bac2       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   22 seconds ago       Running             kube-controller-manager   3                   7ff3b4db4c3a1       kube-controller-manager-functional-553844
	40e128caccd10       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   22 seconds ago       Running             kube-apiserver            0                   4f30e9290df9f       kube-apiserver-functional-553844
	c9566037419fa       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   About a minute ago   Exited              kube-scheduler            2                   224c8313d2a4b       kube-scheduler-functional-553844
	7b4648b5566f0       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   About a minute ago   Exited              kube-controller-manager   2                   786e02c9f268f       kube-controller-manager-functional-553844
	a8a2455326fe0       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   About a minute ago   Exited              kube-apiserver            2                   f630bd7b31a99       kube-apiserver-functional-553844
	8addedc5b3b72       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   About a minute ago   Exited              coredns                   1                   5de6db3341a35       coredns-7c65d6cfc9-ntnpc
	11c7df787d684       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   About a minute ago   Exited              storage-provisioner       1                   f234b24619f34       storage-provisioner
	5ef8ee89662fc       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   About a minute ago   Exited              kube-proxy                1                   795a8e1b509b3       kube-proxy-8d5zp
	dda8bc32e425e       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   About a minute ago   Exited              etcd                      1                   b212b903ed97c       etcd-functional-553844
	
	
	==> coredns [11b04a7db79238496dd7e10a87ed4228200fc6950497535ac76465928dced22a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:34318 - 64894 "HINFO IN 1843759644485451532.7278217676100105798. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.028340041s
	
	
	==> coredns [8addedc5b3b7251bda1891ec03e7b558acf7e48a679719cc2d0eb9af89051324] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:49303 - 36766 "HINFO IN 7792431763943854020.5109512536554140100. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.028767023s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-553844
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-553844
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=functional-553844
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_34_24_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:34:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-553844
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:36:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:36:03 +0000   Mon, 16 Sep 2024 10:34:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:36:03 +0000   Mon, 16 Sep 2024 10:34:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:36:03 +0000   Mon, 16 Sep 2024 10:34:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:36:03 +0000   Mon, 16 Sep 2024 10:34:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.230
	  Hostname:    functional-553844
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 e02954b5bf404845959584edf15b4c70
	  System UUID:                e02954b5-bf40-4845-9595-84edf15b4c70
	  Boot ID:                    f32c4525-4b20-48f0-8997-63a4d85e0a22
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-ntnpc                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     115s
	  kube-system                 etcd-functional-553844                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m
	  kube-system                 kube-apiserver-functional-553844             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19s
	  kube-system                 kube-controller-manager-functional-553844    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-proxy-8d5zp                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-scheduler-functional-553844             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 113s               kube-proxy       
	  Normal  Starting                 18s                kube-proxy       
	  Normal  Starting                 84s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m                 kubelet          Node functional-553844 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  2m                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    2m                 kubelet          Node functional-553844 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m                 kubelet          Node functional-553844 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m                 kubelet          Starting kubelet.
	  Normal  NodeReady                119s               kubelet          Node functional-553844 status is now: NodeReady
	  Normal  RegisteredNode           116s               node-controller  Node functional-553844 event: Registered Node functional-553844 in Controller
	  Normal  NodeHasSufficientMemory  76s (x8 over 76s)  kubelet          Node functional-553844 status is now: NodeHasSufficientMemory
	  Normal  Starting                 76s                kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    76s (x8 over 76s)  kubelet          Node functional-553844 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     76s (x7 over 76s)  kubelet          Node functional-553844 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  76s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           69s                node-controller  Node functional-553844 event: Registered Node functional-553844 in Controller
	  Normal  Starting                 23s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23s (x8 over 23s)  kubelet          Node functional-553844 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x8 over 23s)  kubelet          Node functional-553844 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x7 over 23s)  kubelet          Node functional-553844 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           16s                node-controller  Node functional-553844 event: Registered Node functional-553844 in Controller
	
	
	==> dmesg <==
	[  +0.603762] kauditd_printk_skb: 46 callbacks suppressed
	[ +16.520372] systemd-fstab-generator[2167]: Ignoring "noauto" option for root device
	[  +0.078621] kauditd_printk_skb: 60 callbacks suppressed
	[  +0.049083] systemd-fstab-generator[2179]: Ignoring "noauto" option for root device
	[  +0.190042] systemd-fstab-generator[2193]: Ignoring "noauto" option for root device
	[  +0.140022] systemd-fstab-generator[2205]: Ignoring "noauto" option for root device
	[  +0.285394] systemd-fstab-generator[2233]: Ignoring "noauto" option for root device
	[  +8.132216] systemd-fstab-generator[2349]: Ignoring "noauto" option for root device
	[  +0.075744] kauditd_printk_skb: 100 callbacks suppressed
	[Sep16 10:35] systemd-fstab-generator[3196]: Ignoring "noauto" option for root device
	[  +0.082290] kauditd_printk_skb: 96 callbacks suppressed
	[  +9.215887] kauditd_printk_skb: 32 callbacks suppressed
	[ +11.912179] systemd-fstab-generator[3473]: Ignoring "noauto" option for root device
	[ +21.316095] systemd-fstab-generator[4674]: Ignoring "noauto" option for root device
	[  +0.074178] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.066789] systemd-fstab-generator[4686]: Ignoring "noauto" option for root device
	[  +0.159163] systemd-fstab-generator[4700]: Ignoring "noauto" option for root device
	[  +0.128627] systemd-fstab-generator[4712]: Ignoring "noauto" option for root device
	[  +0.261837] systemd-fstab-generator[4740]: Ignoring "noauto" option for root device
	[  +7.709349] systemd-fstab-generator[4854]: Ignoring "noauto" option for root device
	[  +0.074913] kauditd_printk_skb: 100 callbacks suppressed
	[  +1.702685] systemd-fstab-generator[4977]: Ignoring "noauto" option for root device
	[Sep16 10:36] kauditd_printk_skb: 85 callbacks suppressed
	[  +5.334379] kauditd_printk_skb: 39 callbacks suppressed
	[  +9.139453] systemd-fstab-generator[5796]: Ignoring "noauto" option for root device
	
	
	==> etcd [161c7c3a6dbc9f43b2edc204c828f8b7b5673da629beae63db044d512333e2ff] <==
	{"level":"info","ts":"2024-09-16T10:36:01.399752Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:36:01.404521Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T10:36:01.412218Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"f4acae94ef986412","initial-advertise-peer-urls":["https://192.168.39.230:2380"],"listen-peer-urls":["https://192.168.39.230:2380"],"advertise-client-urls":["https://192.168.39.230:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.230:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T10:36:01.412273Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T10:36:01.412554Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.230:2380"}
	{"level":"info","ts":"2024-09-16T10:36:01.412584Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.230:2380"}
	{"level":"info","ts":"2024-09-16T10:36:01.415172Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T10:36:01.415237Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T10:36:01.415247Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T10:36:02.339007Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-16T10:36:02.339191Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-16T10:36:02.339252Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 received MsgPreVoteResp from f4acae94ef986412 at term 3"}
	{"level":"info","ts":"2024-09-16T10:36:02.339289Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 became candidate at term 4"}
	{"level":"info","ts":"2024-09-16T10:36:02.339313Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 received MsgVoteResp from f4acae94ef986412 at term 4"}
	{"level":"info","ts":"2024-09-16T10:36:02.339348Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 became leader at term 4"}
	{"level":"info","ts":"2024-09-16T10:36:02.339373Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f4acae94ef986412 elected leader f4acae94ef986412 at term 4"}
	{"level":"info","ts":"2024-09-16T10:36:02.345885Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"f4acae94ef986412","local-member-attributes":"{Name:functional-553844 ClientURLs:[https://192.168.39.230:2379]}","request-path":"/0/members/f4acae94ef986412/attributes","cluster-id":"b0aea99135fe63d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:36:02.345893Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:36:02.346138Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:36:02.346171Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:36:02.345925Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:36:02.347252Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:36:02.347252Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:36:02.348114Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T10:36:02.348659Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.230:2379"}
	
	
	==> etcd [dda8bc32e425e06c63a2ccb84bdd071dc515520e554e6e3a5a8b376e9a65c15a] <==
	{"level":"info","ts":"2024-09-16T10:34:56.955132Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-16T10:34:56.955163Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 received MsgPreVoteResp from f4acae94ef986412 at term 2"}
	{"level":"info","ts":"2024-09-16T10:34:56.955177Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 became candidate at term 3"}
	{"level":"info","ts":"2024-09-16T10:34:56.955183Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 received MsgVoteResp from f4acae94ef986412 at term 3"}
	{"level":"info","ts":"2024-09-16T10:34:56.955191Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 became leader at term 3"}
	{"level":"info","ts":"2024-09-16T10:34:56.955203Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f4acae94ef986412 elected leader f4acae94ef986412 at term 3"}
	{"level":"info","ts":"2024-09-16T10:34:56.959113Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"f4acae94ef986412","local-member-attributes":"{Name:functional-553844 ClientURLs:[https://192.168.39.230:2379]}","request-path":"/0/members/f4acae94ef986412/attributes","cluster-id":"b0aea99135fe63d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:34:56.959223Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:34:56.959352Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:34:56.959702Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:34:56.959718Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:34:56.960394Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:34:56.960508Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:34:56.961360Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T10:34:56.961615Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.230:2379"}
	{"level":"info","ts":"2024-09-16T10:35:43.615417Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-16T10:35:43.615457Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-553844","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.230:2380"],"advertise-client-urls":["https://192.168.39.230:2379"]}
	{"level":"warn","ts":"2024-09-16T10:35:43.615668Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:35:43.615755Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:35:43.715379Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.230:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:35:43.715441Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.230:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-16T10:35:43.716847Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"f4acae94ef986412","current-leader-member-id":"f4acae94ef986412"}
	{"level":"info","ts":"2024-09-16T10:35:43.720365Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.230:2380"}
	{"level":"info","ts":"2024-09-16T10:35:43.720475Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.230:2380"}
	{"level":"info","ts":"2024-09-16T10:35:43.720485Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-553844","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.230:2380"],"advertise-client-urls":["https://192.168.39.230:2379"]}
	
	
	==> kernel <==
	 10:36:23 up 2 min,  0 users,  load average: 0.68, 0.25, 0.09
	Linux functional-553844 5.10.207 #1 SMP Sun Sep 15 20:39:46 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [40e128caccd10da3c2b236dc916c1ea036d584c77141e639345656d122c4edf5] <==
	I0916 10:36:03.700643       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0916 10:36:03.700962       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0916 10:36:03.702154       1 aggregator.go:171] initial CRD sync complete...
	I0916 10:36:03.702186       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 10:36:03.702192       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 10:36:03.702197       1 cache.go:39] Caches are synced for autoregister controller
	I0916 10:36:03.704489       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0916 10:36:03.704920       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0916 10:36:03.704998       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0916 10:36:03.705227       1 shared_informer.go:320] Caches are synced for configmaps
	I0916 10:36:03.705335       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0916 10:36:03.705520       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0916 10:36:03.709308       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0916 10:36:03.709342       1 policy_source.go:224] refreshing policies
	I0916 10:36:03.714744       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 10:36:03.724995       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0916 10:36:03.733976       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0916 10:36:04.601449       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0916 10:36:05.413610       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 10:36:05.430933       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 10:36:05.470801       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 10:36:05.494981       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 10:36:05.501594       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 10:36:07.306638       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 10:36:07.353251       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [a8a2455326fe0510166f5919a7ba808e002c9e7e6f6bac0073ebcdf2617d2c12] <==
	I0916 10:35:10.821388       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0916 10:35:10.821418       1 policy_source.go:224] refreshing policies
	I0916 10:35:10.848027       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0916 10:35:10.848431       1 aggregator.go:171] initial CRD sync complete...
	I0916 10:35:10.848456       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 10:35:10.848514       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 10:35:10.848521       1 cache.go:39] Caches are synced for autoregister controller
	I0916 10:35:10.891021       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0916 10:35:10.891238       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0916 10:35:10.893720       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0916 10:35:10.894833       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0916 10:35:10.894861       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0916 10:35:10.895008       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0916 10:35:10.912774       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0916 10:35:10.913152       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 10:35:10.920344       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 10:35:11.693112       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0916 10:35:11.908543       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.230]
	I0916 10:35:11.914737       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 10:35:12.098488       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 10:35:12.108702       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 10:35:12.144954       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 10:35:12.176210       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 10:35:12.183000       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 10:35:43.644862       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	
	
	==> kube-controller-manager [7b4648b5566f070359fd2b71b0a6be370ac0bf268f558065e56f6c081c9da147] <==
	I0916 10:35:14.120843       1 shared_informer.go:320] Caches are synced for HPA
	I0916 10:35:14.121152       1 shared_informer.go:320] Caches are synced for TTL
	I0916 10:35:14.122526       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0916 10:35:14.122616       1 shared_informer.go:320] Caches are synced for ephemeral
	I0916 10:35:14.122690       1 shared_informer.go:320] Caches are synced for taint
	I0916 10:35:14.122803       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0916 10:35:14.123280       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-553844"
	I0916 10:35:14.124941       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0916 10:35:14.144150       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0916 10:35:14.146147       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0916 10:35:14.148698       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0916 10:35:14.153801       1 shared_informer.go:320] Caches are synced for PV protection
	I0916 10:35:14.209749       1 shared_informer.go:320] Caches are synced for persistent volume
	I0916 10:35:14.242927       1 shared_informer.go:320] Caches are synced for attach detach
	I0916 10:35:14.298281       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:35:14.321144       1 shared_informer.go:320] Caches are synced for stateful set
	I0916 10:35:14.321212       1 shared_informer.go:320] Caches are synced for daemon sets
	I0916 10:35:14.326094       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:35:14.534087       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="385.245988ms"
	I0916 10:35:14.534305       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="82.383µs"
	I0916 10:35:14.753631       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:35:14.816601       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:35:14.816647       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0916 10:35:17.621436       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="88.997µs"
	I0916 10:35:41.634518       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-553844"
	
	
	==> kube-controller-manager [c9f67c6f5bac2b4d892ff7cea8997f9b3637e462d67156858bf6b4bc4872dbb8] <==
	I0916 10:36:07.006747       1 shared_informer.go:320] Caches are synced for deployment
	I0916 10:36:07.009845       1 shared_informer.go:320] Caches are synced for node
	I0916 10:36:07.009955       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I0916 10:36:07.010006       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0916 10:36:07.010065       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0916 10:36:07.010073       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0916 10:36:07.010176       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-553844"
	I0916 10:36:07.017945       1 shared_informer.go:320] Caches are synced for namespace
	I0916 10:36:07.018019       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0916 10:36:07.021511       1 shared_informer.go:320] Caches are synced for taint
	I0916 10:36:07.021586       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0916 10:36:07.021664       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-553844"
	I0916 10:36:07.021710       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0916 10:36:07.120592       1 shared_informer.go:320] Caches are synced for cronjob
	I0916 10:36:07.158564       1 shared_informer.go:320] Caches are synced for stateful set
	I0916 10:36:07.199273       1 shared_informer.go:320] Caches are synced for disruption
	I0916 10:36:07.211433       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:36:07.256200       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:36:07.260949       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="260.629259ms"
	I0916 10:36:07.261107       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="93.998µs"
	I0916 10:36:07.627278       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:36:07.687001       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:36:07.687093       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0916 10:36:09.540766       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="16.500721ms"
	I0916 10:36:09.541443       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="53.335µs"
	
	
	==> kube-proxy [5ef8ee89662fcae36d3705fd7eef6e5e8e8ed2765a0c899ea2692bc5e55770bb] <==
	W0916 10:34:58.431668       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:34:58.431713       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:34:58.431778       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:34:58.431838       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:34:59.284989       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:34:59.285188       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:34:59.332364       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-553844&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:34:59.332464       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-553844&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:34:59.470296       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:34:59.470425       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:35:01.798494       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-553844&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:35:01.798626       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-553844&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:35:01.949792       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:35:01.949869       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:35:02.221487       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:35:02.221565       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:35:06.652928       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:35:06.652990       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:35:07.272641       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-553844&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:35:07.272703       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-553844&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:35:07.363931       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:35:07.363993       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	I0916 10:35:14.930499       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:35:15.331242       1 shared_informer.go:320] Caches are synced for node config
	I0916 10:35:16.430835       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [f6cef4575c2c36cab82d9941cc254a3c3977cd977294bd11ce7e392d1ed1ba87] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0916 10:36:05.087142       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0916 10:36:05.094687       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.230"]
	E0916 10:36:05.094768       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:36:05.128908       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0916 10:36:05.128955       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0916 10:36:05.128978       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:36:05.131583       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:36:05.131810       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:36:05.131834       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:36:05.133708       1 config.go:199] "Starting service config controller"
	I0916 10:36:05.133764       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:36:05.133809       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:36:05.133827       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:36:05.134323       1 config.go:328] "Starting node config controller"
	I0916 10:36:05.134353       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:36:05.234169       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:36:05.234184       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:36:05.234413       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [281ad6489fa866a8728b85bb6026eaba03f414f1541e54e8bb20b881d74c5986] <==
	I0916 10:36:01.918697       1 serving.go:386] Generated self-signed cert in-memory
	W0916 10:36:03.635711       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0916 10:36:03.637927       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0916 10:36:03.638183       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 10:36:03.638223       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 10:36:03.699405       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0916 10:36:03.699443       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:36:03.708723       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0916 10:36:03.708883       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0916 10:36:03.708916       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 10:36:03.725362       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0916 10:36:03.809763       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [c9566037419faa5d15b1a25d437bab469eb973d38022593dd1fcea875b372030] <==
	I0916 10:35:09.773229       1 serving.go:386] Generated self-signed cert in-memory
	W0916 10:35:10.768440       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0916 10:35:10.768857       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0916 10:35:10.768917       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 10:35:10.768943       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 10:35:10.817479       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0916 10:35:10.817581       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:35:10.824338       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0916 10:35:10.824417       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 10:35:10.825100       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0916 10:35:10.825460       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0916 10:35:10.925324       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 10:35:43.621150       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0916 10:35:43.621340       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0916 10:35:43.621677       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0916 10:35:43.622018       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 16 10:36:00 functional-553844 kubelet[4984]: I0916 10:36:00.884215    4984 kubelet_node_status.go:72] "Attempting to register node" node="functional-553844"
	Sep 16 10:36:00 functional-553844 kubelet[4984]: E0916 10:36:00.886255    4984 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8441/api/v1/nodes\": dial tcp 192.168.39.230:8441: connect: connection refused" node="functional-553844"
	Sep 16 10:36:00 functional-553844 kubelet[4984]: W0916 10:36:00.910601    4984 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-553844&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	Sep 16 10:36:00 functional-553844 kubelet[4984]: E0916 10:36:00.910663    4984 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-553844&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	Sep 16 10:36:01 functional-553844 kubelet[4984]: I0916 10:36:01.687815    4984 kubelet_node_status.go:72] "Attempting to register node" node="functional-553844"
	Sep 16 10:36:03 functional-553844 kubelet[4984]: I0916 10:36:03.749683    4984 kubelet_node_status.go:111] "Node was previously registered" node="functional-553844"
	Sep 16 10:36:03 functional-553844 kubelet[4984]: I0916 10:36:03.750196    4984 kubelet_node_status.go:75] "Successfully registered node" node="functional-553844"
	Sep 16 10:36:03 functional-553844 kubelet[4984]: E0916 10:36:03.750257    4984 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"functional-553844\": node \"functional-553844\" not found"
	Sep 16 10:36:03 functional-553844 kubelet[4984]: I0916 10:36:03.752874    4984 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 16 10:36:03 functional-553844 kubelet[4984]: I0916 10:36:03.753933    4984 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 16 10:36:03 functional-553844 kubelet[4984]: E0916 10:36:03.767512    4984 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"functional-553844\" not found"
	Sep 16 10:36:04 functional-553844 kubelet[4984]: I0916 10:36:04.085150    4984 apiserver.go:52] "Watching apiserver"
	Sep 16 10:36:04 functional-553844 kubelet[4984]: I0916 10:36:04.091164    4984 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-apiserver-functional-553844" podUID="7f3b5ce9-dbc7-45d3-8a46-1d51af0f5cac"
	Sep 16 10:36:04 functional-553844 kubelet[4984]: I0916 10:36:04.105623    4984 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 16 10:36:04 functional-553844 kubelet[4984]: I0916 10:36:04.124250    4984 kubelet.go:1900] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-functional-553844"
	Sep 16 10:36:04 functional-553844 kubelet[4984]: I0916 10:36:04.151243    4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7709f753-5ea7-43c8-9573-107c8507e92b-lib-modules\") pod \"kube-proxy-8d5zp\" (UID: \"7709f753-5ea7-43c8-9573-107c8507e92b\") " pod="kube-system/kube-proxy-8d5zp"
	Sep 16 10:36:04 functional-553844 kubelet[4984]: I0916 10:36:04.151300    4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f41228d6-b7ff-4315-b9c5-05b5cc4d0acd-tmp\") pod \"storage-provisioner\" (UID: \"f41228d6-b7ff-4315-b9c5-05b5cc4d0acd\") " pod="kube-system/storage-provisioner"
	Sep 16 10:36:04 functional-553844 kubelet[4984]: I0916 10:36:04.151318    4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7709f753-5ea7-43c8-9573-107c8507e92b-xtables-lock\") pod \"kube-proxy-8d5zp\" (UID: \"7709f753-5ea7-43c8-9573-107c8507e92b\") " pod="kube-system/kube-proxy-8d5zp"
	Sep 16 10:36:04 functional-553844 kubelet[4984]: I0916 10:36:04.189195    4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0cf351cdb4e05fb19a16881fc8f9a8bc" path="/var/lib/kubelet/pods/0cf351cdb4e05fb19a16881fc8f9a8bc/volumes"
	Sep 16 10:36:04 functional-553844 kubelet[4984]: I0916 10:36:04.191552    4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-functional-553844" podStartSLOduration=0.19153653 podStartE2EDuration="191.53653ms" podCreationTimestamp="2024-09-16 10:36:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 10:36:04.191347015 +0000 UTC m=+4.208440213" watchObservedRunningTime="2024-09-16 10:36:04.19153653 +0000 UTC m=+4.208629709"
	Sep 16 10:36:09 functional-553844 kubelet[4984]: I0916 10:36:09.508237    4984 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 16 10:36:10 functional-553844 kubelet[4984]: E0916 10:36:10.177303    4984 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482970176966980,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157170,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:36:10 functional-553844 kubelet[4984]: E0916 10:36:10.177327    4984 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482970176966980,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157170,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:36:20 functional-553844 kubelet[4984]: E0916 10:36:20.178991    4984 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482980178689452,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157170,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:36:20 functional-553844 kubelet[4984]: E0916 10:36:20.179091    4984 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482980178689452,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157170,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [11c7df787d684e9cc5ca8f6cf633ac9055d7ef72c0a76d54b25fd4a3d62f7b02] <==
	I0916 10:34:56.077531       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 10:34:58.308783       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 10:34:58.325776       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	E0916 10:34:59.385726       1 leaderelection.go:361] Failed to update lock: Put "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0916 10:35:02.837859       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0916 10:35:07.096688       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	I0916 10:35:10.935925       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 10:35:10.936824       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-553844_6476f869-e006-4732-b59f-a625eeed2789!
	I0916 10:35:10.936273       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e694f165-5b09-46c9-81ea-a2730989eaff", APIVersion:"v1", ResourceVersion:"401", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-553844_6476f869-e006-4732-b59f-a625eeed2789 became leader
	I0916 10:35:11.037327       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-553844_6476f869-e006-4732-b59f-a625eeed2789!
	
	
	==> storage-provisioner [410bd23d1eb3a430c5d377563c819cc6316d348e66c37cda9c98f030584522d1] <==
	I0916 10:36:04.804572       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 10:36:04.881510       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 10:36:04.902536       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 10:36:22.325954       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 10:36:22.326349       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-553844_3b4a5fb5-0feb-4787-a4fd-6f23adb25700!
	I0916 10:36:22.327877       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e694f165-5b09-46c9-81ea-a2730989eaff", APIVersion:"v1", ResourceVersion:"583", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-553844_3b4a5fb5-0feb-4787-a4fd-6f23adb25700 became leader
	I0916 10:36:22.428646       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-553844_3b4a5fb5-0feb-4787-a4fd-6f23adb25700!
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0916 10:36:22.504523   18750 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19651-3851/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
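The "bufio.Scanner: token too long" error in the stderr block above is Go's bufio.ErrTooLong: a Scanner refuses any line longer than its buffer, which defaults to bufio.MaxScanTokenSize (64 KiB), so minikube gives up reading lastStart.txt when it hits a longer line. A minimal, illustrative Go sketch (not part of the test suite; the file path is the one from the log) showing the default limit and the Scanner.Buffer call that raises it:

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		f, err := os.Open("/home/jenkins/minikube-integration/19651-3851/.minikube/logs/lastStart.txt")
		if err != nil {
			fmt.Println(err)
			return
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Without this call the scan aborts with bufio.ErrTooLong
		// ("bufio.Scanner: token too long") as soon as a line exceeds
		// bufio.MaxScanTokenSize (64 KiB).
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
		for sc.Scan() {
			_ = sc.Text() // process each log line
		}
		if err := sc.Err(); err != nil {
			fmt.Println("scan failed:", err)
		}
	}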
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-553844 -n functional-553844
helpers_test.go:261: (dbg) Run:  kubectl --context functional-553844 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context functional-553844 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (454.747µs)
helpers_test.go:263: kubectl --context functional-553844 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestFunctional/serial/ComponentHealth (1.95s)
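The recurring "fork/exec /usr/local/bin/kubectl: exec format error" in these failures is the ENOEXEC error Linux returns when a binary's format does not match the host, typically a kubectl build for a different architecture. A minimal, illustrative Go sketch (hypothetical diagnostic, not part of the test suite) of how that failure surfaces through os/exec:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"runtime"
		"syscall"
	)

	func main() {
		// Invoke the same binary the tests use; a wrong-architecture build
		// fails at fork/exec with syscall.ENOEXEC ("exec format error").
		out, err := exec.Command("/usr/local/bin/kubectl", "version", "--client").CombinedOutput()
		if errors.Is(err, syscall.ENOEXEC) {
			fmt.Printf("kubectl does not match this %s/%s host (wrong-architecture binary?)\n",
				runtime.GOOS, runtime.GOARCH)
			return
		}
		fmt.Printf("err=%v\n%s", err, out)
	}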

                                                
                                    
x
+
TestFunctional/serial/InvalidService (0s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-553844 apply -f testdata/invalidsvc.yaml
functional_test.go:2321: (dbg) Non-zero exit: kubectl --context functional-553844 apply -f testdata/invalidsvc.yaml: fork/exec /usr/local/bin/kubectl: exec format error (479.234µs)
functional_test.go:2323: kubectl --context functional-553844 apply -f testdata/invalidsvc.yaml failed: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestFunctional/serial/InvalidService (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (5.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-553844 --alsologtostderr -v=1]
functional_test.go:918: output didn't produce a URL
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-553844 --alsologtostderr -v=1] ...
functional_test.go:910: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-553844 --alsologtostderr -v=1] stdout:
functional_test.go:910: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-553844 --alsologtostderr -v=1] stderr:
I0916 10:36:34.548932   20871 out.go:345] Setting OutFile to fd 1 ...
I0916 10:36:34.549263   20871 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 10:36:34.549274   20871 out.go:358] Setting ErrFile to fd 2...
I0916 10:36:34.549280   20871 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 10:36:34.549501   20871 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3851/.minikube/bin
I0916 10:36:34.549730   20871 mustload.go:65] Loading cluster: functional-553844
I0916 10:36:34.550107   20871 config.go:182] Loaded profile config "functional-553844": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0916 10:36:34.550480   20871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0916 10:36:34.550522   20871 main.go:141] libmachine: Launching plugin server for driver kvm2
I0916 10:36:34.565155   20871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37261
I0916 10:36:34.565616   20871 main.go:141] libmachine: () Calling .GetVersion
I0916 10:36:34.566118   20871 main.go:141] libmachine: Using API Version  1
I0916 10:36:34.566132   20871 main.go:141] libmachine: () Calling .SetConfigRaw
I0916 10:36:34.566444   20871 main.go:141] libmachine: () Calling .GetMachineName
I0916 10:36:34.566573   20871 main.go:141] libmachine: (functional-553844) Calling .GetState
I0916 10:36:34.568021   20871 host.go:66] Checking if "functional-553844" exists ...
I0916 10:36:34.568342   20871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0916 10:36:34.568379   20871 main.go:141] libmachine: Launching plugin server for driver kvm2
I0916 10:36:34.583631   20871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46267
I0916 10:36:34.584091   20871 main.go:141] libmachine: () Calling .GetVersion
I0916 10:36:34.584639   20871 main.go:141] libmachine: Using API Version  1
I0916 10:36:34.584661   20871 main.go:141] libmachine: () Calling .SetConfigRaw
I0916 10:36:34.584972   20871 main.go:141] libmachine: () Calling .GetMachineName
I0916 10:36:34.585162   20871 main.go:141] libmachine: (functional-553844) Calling .DriverName
I0916 10:36:34.585317   20871 api_server.go:166] Checking apiserver status ...
I0916 10:36:34.585365   20871 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0916 10:36:34.585391   20871 main.go:141] libmachine: (functional-553844) Calling .GetSSHHostname
I0916 10:36:34.588490   20871 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
I0916 10:36:34.588879   20871 main.go:141] libmachine: (functional-553844) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3a:6f", ip: ""} in network mk-functional-553844: {Iface:virbr1 ExpiryTime:2024-09-16 11:33:58 +0000 UTC Type:0 Mac:52:54:00:9d:3a:6f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-553844 Clientid:01:52:54:00:9d:3a:6f}
I0916 10:36:34.588903   20871 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined IP address 192.168.39.230 and MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
I0916 10:36:34.589028   20871 main.go:141] libmachine: (functional-553844) Calling .GetSSHPort
I0916 10:36:34.589200   20871 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
I0916 10:36:34.589352   20871 main.go:141] libmachine: (functional-553844) Calling .GetSSHUsername
I0916 10:36:34.589476   20871 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/functional-553844/id_rsa Username:docker}
I0916 10:36:34.689915   20871 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5216/cgroup
W0916 10:36:34.699396   20871 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5216/cgroup: Process exited with status 1
stdout:

                                                
                                                
stderr:
I0916 10:36:34.699457   20871 ssh_runner.go:195] Run: ls
I0916 10:36:34.705584   20871 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8441/healthz ...
I0916 10:36:34.710731   20871 api_server.go:279] https://192.168.39.230:8441/healthz returned 200:
ok
W0916 10:36:34.710765   20871 out.go:270] * Enabling dashboard ...
* Enabling dashboard ...
I0916 10:36:34.710919   20871 config.go:182] Loaded profile config "functional-553844": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0916 10:36:34.710933   20871 addons.go:69] Setting dashboard=true in profile "functional-553844"
I0916 10:36:34.710940   20871 addons.go:234] Setting addon dashboard=true in "functional-553844"
I0916 10:36:34.710962   20871 host.go:66] Checking if "functional-553844" exists ...
I0916 10:36:34.711217   20871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0916 10:36:34.711249   20871 main.go:141] libmachine: Launching plugin server for driver kvm2
I0916 10:36:34.727932   20871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40017
I0916 10:36:34.728363   20871 main.go:141] libmachine: () Calling .GetVersion
I0916 10:36:34.728834   20871 main.go:141] libmachine: Using API Version  1
I0916 10:36:34.728854   20871 main.go:141] libmachine: () Calling .SetConfigRaw
I0916 10:36:34.729165   20871 main.go:141] libmachine: () Calling .GetMachineName
I0916 10:36:34.729711   20871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0916 10:36:34.729750   20871 main.go:141] libmachine: Launching plugin server for driver kvm2
I0916 10:36:34.748054   20871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36211
I0916 10:36:34.748466   20871 main.go:141] libmachine: () Calling .GetVersion
I0916 10:36:34.749012   20871 main.go:141] libmachine: Using API Version  1
I0916 10:36:34.749034   20871 main.go:141] libmachine: () Calling .SetConfigRaw
I0916 10:36:34.749372   20871 main.go:141] libmachine: () Calling .GetMachineName
I0916 10:36:34.749565   20871 main.go:141] libmachine: (functional-553844) Calling .GetState
I0916 10:36:34.751344   20871 main.go:141] libmachine: (functional-553844) Calling .DriverName
I0916 10:36:34.753166   20871 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0916 10:36:34.754435   20871 out.go:177]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I0916 10:36:34.757819   20871 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0916 10:36:34.757842   20871 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0916 10:36:34.757868   20871 main.go:141] libmachine: (functional-553844) Calling .GetSSHHostname
I0916 10:36:34.761044   20871 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
I0916 10:36:34.761691   20871 main.go:141] libmachine: (functional-553844) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3a:6f", ip: ""} in network mk-functional-553844: {Iface:virbr1 ExpiryTime:2024-09-16 11:33:58 +0000 UTC Type:0 Mac:52:54:00:9d:3a:6f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-553844 Clientid:01:52:54:00:9d:3a:6f}
I0916 10:36:34.761715   20871 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined IP address 192.168.39.230 and MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
I0916 10:36:34.761895   20871 main.go:141] libmachine: (functional-553844) Calling .GetSSHPort
I0916 10:36:34.762172   20871 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
I0916 10:36:34.762285   20871 main.go:141] libmachine: (functional-553844) Calling .GetSSHUsername
I0916 10:36:34.762669   20871 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/functional-553844/id_rsa Username:docker}
I0916 10:36:34.862613   20871 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0916 10:36:34.862638   20871 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0916 10:36:34.881665   20871 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0916 10:36:34.881689   20871 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0916 10:36:34.900289   20871 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0916 10:36:34.900310   20871 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0916 10:36:34.919237   20871 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0916 10:36:34.919262   20871 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I0916 10:36:34.953409   20871 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
I0916 10:36:34.953431   20871 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0916 10:36:34.972189   20871 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0916 10:36:34.972212   20871 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0916 10:36:35.002704   20871 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0916 10:36:35.002732   20871 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0916 10:36:35.025332   20871 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0916 10:36:35.025355   20871 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0916 10:36:35.045423   20871 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0916 10:36:35.045444   20871 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0916 10:36:35.074122   20871 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0916 10:36:36.287824   20871 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.213648371s)
I0916 10:36:36.287892   20871 main.go:141] libmachine: Making call to close driver server
I0916 10:36:36.287913   20871 main.go:141] libmachine: (functional-553844) Calling .Close
I0916 10:36:36.288174   20871 main.go:141] libmachine: Successfully made call to close driver server
I0916 10:36:36.288197   20871 main.go:141] libmachine: Making call to close connection to plugin binary
I0916 10:36:36.288204   20871 main.go:141] libmachine: Making call to close driver server
I0916 10:36:36.288211   20871 main.go:141] libmachine: (functional-553844) Calling .Close
I0916 10:36:36.288422   20871 main.go:141] libmachine: (functional-553844) DBG | Closing plugin on server side
I0916 10:36:36.288474   20871 main.go:141] libmachine: Successfully made call to close driver server
I0916 10:36:36.288490   20871 main.go:141] libmachine: Making call to close connection to plugin binary
I0916 10:36:36.290278   20871 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:

                                                
                                                
	minikube -p functional-553844 addons enable metrics-server

                                                
                                                
I0916 10:36:36.291390   20871 addons.go:197] Writing out "functional-553844" config to set dashboard=true...
W0916 10:36:36.291603   20871 out.go:270] * Verifying dashboard health ...
* Verifying dashboard health ...
I0916 10:36:36.292315   20871 kapi.go:59] client config for functional-553844: &rest.Config{Host:"https://192.168.39.230:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0916 10:36:36.318739   20871 service.go:214] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  29fe4f2f-9f34-4e5c-b8d4-5b484a0c5b4a 648 0 2024-09-16 10:36:36 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2024-09-16 10:36:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.96.245.38,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.96.245.38],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W0916 10:36:36.318916   20871 out.go:270] * Launching proxy ...
* Launching proxy ...
I0916 10:36:36.318989   20871 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-553844 proxy --port 36195]
I0916 10:36:36.321229   20871 out.go:201] 
W0916 10:36:36.322466   20871 out.go:270] X Exiting due to HOST_KUBECTL_PROXY: kubectl proxy: proxy start: fork/exec /usr/local/bin/kubectl: exec format error
X Exiting due to HOST_KUBECTL_PROXY: kubectl proxy: proxy start: fork/exec /usr/local/bin/kubectl: exec format error
W0916 10:36:36.322477   20871 out.go:270] * 
* 
W0916 10:36:36.324309   20871 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_dashboard_2f9e80c8c4dc47927ad6915561a20c5705c3b3b4_0.log               │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_dashboard_2f9e80c8c4dc47927ad6915561a20c5705c3b3b4_0.log               │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0916 10:36:36.325517   20871 out.go:201] 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-553844 -n functional-553844
helpers_test.go:244: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-553844 logs -n 25: (2.718939335s)
helpers_test.go:252: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	|-----------|-------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|  Command  |                                  Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|-----------|-------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| mount     | -p functional-553844                                                    | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC |                     |
	|           | /tmp/TestFunctionalparallelMountCmdVerifyCleanup2311539262/001:/mount1  |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                  |                   |         |         |                     |                     |
	| mount     | -p functional-553844                                                    | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC |                     |
	|           | /tmp/TestFunctionalparallelMountCmdVerifyCleanup2311539262/001:/mount2  |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                  |                   |         |         |                     |                     |
	| mount     | -p functional-553844                                                    | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC |                     |
	|           | /tmp/TestFunctionalparallelMountCmdVerifyCleanup2311539262/001:/mount3  |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                  |                   |         |         |                     |                     |
	| image     | functional-553844 image load --daemon                                   | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|           | kicbase/echo-server:functional-553844                                   |                   |         |         |                     |                     |
	|           | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| ssh       | functional-553844 ssh findmnt                                           | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|           | -T /mount1                                                              |                   |         |         |                     |                     |
	| ssh       | functional-553844 ssh findmnt                                           | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|           | -T /mount2                                                              |                   |         |         |                     |                     |
	| ssh       | functional-553844 ssh findmnt                                           | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|           | -T /mount3                                                              |                   |         |         |                     |                     |
	| mount     | -p functional-553844                                                    | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC |                     |
	|           | --kill=true                                                             |                   |         |         |                     |                     |
	| ssh       | functional-553844 ssh sudo cat                                          | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|           | /etc/ssl/certs/11203.pem                                                |                   |         |         |                     |                     |
	| ssh       | functional-553844 ssh sudo cat                                          | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|           | /usr/share/ca-certificates/11203.pem                                    |                   |         |         |                     |                     |
	| ssh       | functional-553844 ssh sudo cat                                          | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|           | /etc/ssl/certs/51391683.0                                               |                   |         |         |                     |                     |
	| image     | functional-553844 image ls                                              | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	| start     | -p functional-553844                                                    | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC |                     |
	|           | --dry-run --memory                                                      |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                 |                   |         |         |                     |                     |
	|           | --driver=kvm2                                                           |                   |         |         |                     |                     |
	|           | --container-runtime=crio                                                |                   |         |         |                     |                     |
	| ssh       | functional-553844 ssh sudo cat                                          | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|           | /etc/ssl/certs/112032.pem                                               |                   |         |         |                     |                     |
	| start     | -p functional-553844                                                    | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC |                     |
	|           | --dry-run --alsologtostderr                                             |                   |         |         |                     |                     |
	|           | -v=1 --driver=kvm2                                                      |                   |         |         |                     |                     |
	|           | --container-runtime=crio                                                |                   |         |         |                     |                     |
	| image     | functional-553844 image load --daemon                                   | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|           | kicbase/echo-server:functional-553844                                   |                   |         |         |                     |                     |
	|           | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| ssh       | functional-553844 ssh sudo cat                                          | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|           | /usr/share/ca-certificates/112032.pem                                   |                   |         |         |                     |                     |
	| start     | -p functional-553844                                                    | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC |                     |
	|           | --dry-run --memory                                                      |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                 |                   |         |         |                     |                     |
	|           | --driver=kvm2                                                           |                   |         |         |                     |                     |
	|           | --container-runtime=crio                                                |                   |         |         |                     |                     |
	| ssh       | functional-553844 ssh sudo cat                                          | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|           | /etc/test/nested/copy/11203/hosts                                       |                   |         |         |                     |                     |
	| ssh       | functional-553844 ssh sudo cat                                          | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|           | /etc/ssl/certs/3ec20f2e.0                                               |                   |         |         |                     |                     |
	| dashboard | --url --port 36195                                                      | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC |                     |
	|           | -p functional-553844                                                    |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                  |                   |         |         |                     |                     |
	| image     | functional-553844 image ls                                              | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	| image     | functional-553844 image load --daemon                                   | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|           | kicbase/echo-server:functional-553844                                   |                   |         |         |                     |                     |
	|           | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| image     | functional-553844 image ls                                              | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	| image     | functional-553844 image save kicbase/echo-server:functional-553844      | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC |                     |
	|           | /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar |                   |         |         |                     |                     |
	|           | --alsologtostderr                                                       |                   |         |         |                     |                     |
	|-----------|-------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:36:34
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:36:34.139611   20738 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:36:34.139721   20738 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:36:34.139734   20738 out.go:358] Setting ErrFile to fd 2...
	I0916 10:36:34.139739   20738 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:36:34.140025   20738 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3851/.minikube/bin
	I0916 10:36:34.140533   20738 out.go:352] Setting JSON to false
	I0916 10:36:34.141585   20738 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1144,"bootTime":1726481850,"procs":331,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:36:34.141651   20738 start.go:139] virtualization: kvm guest
	I0916 10:36:34.143781   20738 out.go:177] * [functional-553844] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:36:34.145184   20738 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:36:34.145216   20738 notify.go:220] Checking for updates...
	I0916 10:36:34.147692   20738 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:36:34.148817   20738 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3851/kubeconfig
	I0916 10:36:34.150295   20738 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 10:36:34.151528   20738 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:36:34.152665   20738 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:36:34.154384   20738 config.go:182] Loaded profile config "functional-553844": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:36:34.155020   20738 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:36:34.155078   20738 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:36:34.171342   20738 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33011
	I0916 10:36:34.172227   20738 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:36:34.172727   20738 main.go:141] libmachine: Using API Version  1
	I0916 10:36:34.172781   20738 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:36:34.173244   20738 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:36:34.173423   20738 main.go:141] libmachine: (functional-553844) Calling .DriverName
	I0916 10:36:34.173652   20738 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:36:34.173932   20738 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:36:34.173966   20738 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:36:34.190306   20738 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42651
	I0916 10:36:34.190589   20738 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:36:34.191111   20738 main.go:141] libmachine: Using API Version  1
	I0916 10:36:34.191136   20738 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:36:34.191610   20738 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:36:34.191807   20738 main.go:141] libmachine: (functional-553844) Calling .DriverName
	I0916 10:36:34.226275   20738 out.go:177] * Using the kvm2 driver based on existing profile
	I0916 10:36:34.227529   20738 start.go:297] selected driver: kvm2
	I0916 10:36:34.227545   20738 start.go:901] validating driver "kvm2" against &{Name:functional-553844 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:functional-553844 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:36:34.227685   20738 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:36:34.229775   20738 out.go:201] 
	W0916 10:36:34.231185   20738 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0916 10:36:34.232371   20738 out.go:201] 
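
The start logged above is the `--dry-run --memory 250MB` invocation recorded in the command table; minikube rejects it before touching the VM because the requested memory is below its usable minimum. A minimal reproduction sketch follows, with flags copied from the command table; the exact non-zero exit code is an assumption and is not shown in this log.

    # Hedged sketch: reproduce the rejected dry-run start above
    minikube start -p functional-553844 --dry-run --memory 250MB \
      --alsologtostderr --driver=kvm2 --container-runtime=crio
    echo "exit code: $?"   # expected non-zero: RSRC_INSUFFICIENT_REQ_MEMORY (250MiB < 1800MB minimum)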
	
	
	==> CRI-O <==
	Sep 16 10:36:37 functional-553844 crio[4747]: time="2024-09-16 10:36:37.615315386Z" level=debug msg="Can't find docker.io/kicbase/echo-server:functional-553844" file="server/image_status.go:97" id=12f60cdd-f9c0-4d83-ae76-ccb3a6d18681 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:36:37 functional-553844 crio[4747]: time="2024-09-16 10:36:37.615359600Z" level=info msg="Image docker.io/kicbase/echo-server:functional-553844 not found" file="server/image_status.go:111" id=12f60cdd-f9c0-4d83-ae76-ccb3a6d18681 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:36:37 functional-553844 crio[4747]: time="2024-09-16 10:36:37.615397052Z" level=info msg="Image docker.io/kicbase/echo-server:functional-553844 not found" file="server/image_status.go:33" id=12f60cdd-f9c0-4d83-ae76-ccb3a6d18681 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:36:37 functional-553844 crio[4747]: time="2024-09-16 10:36:37.615441392Z" level=debug msg="Response: &ImageStatusResponse{Image:nil,Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=12f60cdd-f9c0-4d83-ae76-ccb3a6d18681 name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:36:37 functional-553844 crio[4747]: time="2024-09-16 10:36:37.639472356Z" level=debug msg="Content-Type from manifest GET is \"application/vnd.docker.distribution.manifest.list.v2+json\"" file="docker/docker_client.go:964"
	Sep 16 10:36:37 functional-553844 crio[4747]: time="2024-09-16 10:36:37.652923293Z" level=debug msg="Using SQLite blob info cache at /var/lib/containers/cache/blob-info-cache-v1.sqlite" file="blobinfocache/default.go:74"
	Sep 16 10:36:37 functional-553844 crio[4747]: time="2024-09-16 10:36:37.653312835Z" level=debug msg="Source is a manifest list; copying (only) instance sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 for current system" file="copy/copy.go:318"
	Sep 16 10:36:37 functional-553844 crio[4747]: time="2024-09-16 10:36:37.653420220Z" level=debug msg="GET https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029" file="docker/docker_client.go:631"
	Sep 16 10:36:37 functional-553844 crio[4747]: time="2024-09-16 10:36:37.671235143Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=40ac7cb3-539f-4da9-8f40-999202c377d4 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:36:37 functional-553844 crio[4747]: time="2024-09-16 10:36:37.671758099Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482997671728118,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:164944,},InodesUsed:&UInt64Value{Value:82,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=40ac7cb3-539f-4da9-8f40-999202c377d4 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:36:37 functional-553844 crio[4747]: time="2024-09-16 10:36:37.672406066Z" level=debug msg="Request: &ImageStatusRequest{Image:&ImageSpec{Image:localhost/kicbase/echo-server:functional-553844,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Verbose:false,}" file="otel-collector/interceptors.go:62" id=a7c2dfc8-c378-46f2-b5ee-007177b617ff name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:36:37 functional-553844 crio[4747]: time="2024-09-16 10:36:37.672498518Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-553844" file="server/image_status.go:27" id=a7c2dfc8-c378-46f2-b5ee-007177b617ff name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:36:37 functional-553844 crio[4747]: time="2024-09-16 10:36:37.672638420Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,RepoTags:[localhost/kicbase/echo-server:functional-553844],RepoDigests:[localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf],Size_:4943877,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Pinned:false,},Info:map[string]string{},}" file="server/image_status.go:68" id=a7c2dfc8-c378-46f2-b5ee-007177b617ff name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:36:37 functional-553844 crio[4747]: time="2024-09-16 10:36:37.672681944Z" level=debug msg="Response: &ImageStatusResponse{Image:&Image{Id:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,RepoTags:[localhost/kicbase/echo-server:functional-553844],RepoDigests:[localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf],Size_:4943877,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Pinned:false,},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=a7c2dfc8-c378-46f2-b5ee-007177b617ff name=/runtime.v1.ImageService/ImageStatus
	Sep 16 10:36:37 functional-553844 crio[4747]: time="2024-09-16 10:36:37.674485728Z" level=debug msg="Request: &RemoveImageRequest{Image:&ImageSpec{Image:localhost/kicbase/echo-server:functional-553844,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},}" file="otel-collector/interceptors.go:62" id=7b62b81a-4f09-4869-8ac2-8f870e5d72d3 name=/runtime.v1.ImageService/RemoveImage
	Sep 16 10:36:37 functional-553844 crio[4747]: time="2024-09-16 10:36:37.681754953Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=90ee39dc-3d55-432a-90db-d4860e76f1ca name=/runtime.v1.RuntimeService/Version
	Sep 16 10:36:37 functional-553844 crio[4747]: time="2024-09-16 10:36:37.681815619Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=90ee39dc-3d55-432a-90db-d4860e76f1ca name=/runtime.v1.RuntimeService/Version
	Sep 16 10:36:37 functional-553844 crio[4747]: time="2024-09-16 10:36:37.683431770Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=24cf35f2-89c6-466b-904a-5e88a9e94d75 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:36:37 functional-553844 crio[4747]: time="2024-09-16 10:36:37.683984383Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482997683962343,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157170,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=24cf35f2-89c6-466b-904a-5e88a9e94d75 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:36:37 functional-553844 crio[4747]: time="2024-09-16 10:36:37.684506104Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4fe7011e-5be4-4f13-950b-acc5d614ce56 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:36:37 functional-553844 crio[4747]: time="2024-09-16 10:36:37.684559220Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4fe7011e-5be4-4f13-950b-acc5d614ce56 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:36:37 functional-553844 crio[4747]: time="2024-09-16 10:36:37.684903008Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:11b04a7db79238496dd7e10a87ed4228200fc6950497535ac76465928dced22a,PodSandboxId:42c99506917bdb547de481b6b2b3da32439a0a4396f956f8bb3b1dcb33e0d1e9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726482964931520126,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ntnpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0dcfd13-b1bc-45ef-9800-c98d2063bd43,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6cef4575c2c36cab82d9941cc254a3c3977cd977294bd11ce7e392d1ed1ba87,PodSandboxId:b5b2cd43518619d818dfcf93e0e524dd8b0e680615d574fe46c92af79a3e1a44,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726482964645653251,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8d5zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 7709f753-5ea7-43c8-9573-107c8507e92b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:410bd23d1eb3a430c5d377563c819cc6316d348e66c37cda9c98f030584522d1,PodSandboxId:66c3c1fc355f337c0301a8a61ebd1b9a264940d27de9e5ab8ce3e6d9ff23f695,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726482964572647163,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41
228d6-b7ff-4315-b9c5-05b5cc4d0acd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:161c7c3a6dbc9f43b2edc204c828f8b7b5673da629beae63db044d512333e2ff,PodSandboxId:1cf845fd98fb9cec148aebce7960a866cf0aa9de0bbccc2479d3f8356e0402a1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726482960901308832,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b392e106920b290edb060cbb3942770e,},Annotat
ions:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:281ad6489fa866a8728b85bb6026eaba03f414f1541e54e8bb20b881d74c5986,PodSandboxId:30d387489b797f7f610fee5a80ba390e392d599bf0fcae9adff2ab82e5282aff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726482960907328332,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e9406d783b81f1f83bb9b03dd50757a,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9f67c6f5bac2b4d892ff7cea8997f9b3637e462d67156858bf6b4bc4872dbb8,PodSandboxId:7ff3b4db4c3a1ec5a815fbd8fb33ce885ada6ab25ca592e100af16decad6e364,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726482960864170254,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ba1ce2146f556353256cee766fb22aa,},Annota
tions:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40e128caccd10da3c2b236dc916c1ea036d584c77141e639345656d122c4edf5,PodSandboxId:4f30e9290df9fed7dc156600cc22a14550577fc179ba762933d7ddb98b54f18d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726482960795838086,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a02ea4105f59739cf4b87fcb1443f22,},Annotations:map[str
ing]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9566037419faa5d15b1a25d437bab469eb973d38022593dd1fcea875b372030,PodSandboxId:224c8313d2a4b81f7902acab9a95c4674bb885601e7d6c432c194d4704f68448,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726482907910102968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e9406d783b81f1f83bb9b03dd50757a,},Annotations:map[string]string{io.
kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b4648b5566f070359fd2b71b0a6be370ac0bf268f558065e56f6c081c9da147,PodSandboxId:786e02c9f268ffa4631ec0765f6e369ee8084dbb9f6f0ceb206328ad1483a95b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726482907861534544,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ba1ce2146f556353256cee766fb22aa,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8a2455326fe0510166f5919a7ba808e002c9e7e6f6bac0073ebcdf2617d2c12,PodSandboxId:f630bd7b31a9986a64781d12465cbfb94677fffd47ee676d73cb45631f7bb0b1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726482907858407120,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cf351cdb4e05fb19a16881fc8f9a8bc,},Annotations:map[string]string{io.k
ubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ef8ee89662fcae36d3705fd7eef6e5e8e8ed2765a0c899ea2692bc5e55770bb,PodSandboxId:795a8e1b509b3194f41662edcf7b963b352e10b9d3a0b2b7bc09bb8c879e6c3c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726482895162476253,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8d5zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7709f753-5ea7-43c8-9573-107c8507e92b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59
,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c7df787d684e9cc5ca8f6cf633ac9055d7ef72c0a76d54b25fd4a3d62f7b02,PodSandboxId:f234b24619f341b5993ad8541868405403fc798ed93fed40f3108616a4659944,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726482895179848533,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41228d6-b7ff-4315-b9c5-05b5cc4d0acd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8addedc5b3b7251bda1891ec03e7b558acf7e48a679719cc2d0eb9af89051324,PodSandboxId:5de6db3341a3523f45534d3d25f38a36a50d8a1f094c79ff3c7afffbde2686bb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726482895774247566,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ntnpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0dcfd13-b1bc-45ef-9800-c98d2063bd43,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"
dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dda8bc32e425e06c63a2ccb84bdd071dc515520e554e6e3a5a8b376e9a65c15a,PodSandboxId:b212b903ed97c57b27e7215769435327df006237ec57bc4233e178bfb5d746f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726482895106900933,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553844,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: b392e106920b290edb060cbb3942770e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4fe7011e-5be4-4f13-950b-acc5d614ce56 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:36:37 functional-553844 crio[4747]: time="2024-09-16 10:36:37.690292933Z" level=debug msg="deleted image \"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30\"" file="storage/storage_reference.go:274"
	Sep 16 10:36:37 functional-553844 crio[4747]: time="2024-09-16 10:36:37.690386806Z" level=debug msg="deleted layer \"385288f36387f526d4826ab7d5cf1ab0e58bb5684a8257e8d19d9da3773b85da\"" file="storage/storage_reference.go:276"
	Sep 16 10:36:37 functional-553844 crio[4747]: time="2024-09-16 10:36:37.690526989Z" level=debug msg="Response: &RemoveImageResponse{}" file="otel-collector/interceptors.go:74" id=7b62b81a-4f09-4869-8ac2-8f870e5d72d3 name=/runtime.v1.ImageService/RemoveImage
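
The CRI-O entries above trace the `image save` test path: docker.io/kicbase/echo-server:functional-553844 is not found, the localhost/ copy is resolved via ImageStatus, and RemoveImage then deletes it. A hedged sketch of the same round trip driven by hand from the host, assuming `minikube ssh` access to the node and that crictl is available on the guest (as the log entries suggest):

    # Inspect and remove the locally loaded image, mirroring the ImageStatus/RemoveImage calls above
    minikube -p functional-553844 ssh -- sudo crictl inspecti localhost/kicbase/echo-server:functional-553844
    minikube -p functional-553844 ssh -- sudo crictl rmi localhost/kicbase/echo-server:functional-553844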
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	11b04a7db7923       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   32 seconds ago       Running             coredns                   2                   42c99506917bd       coredns-7c65d6cfc9-ntnpc
	f6cef4575c2c3       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   33 seconds ago       Running             kube-proxy                2                   b5b2cd4351861       kube-proxy-8d5zp
	410bd23d1eb3a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   33 seconds ago       Running             storage-provisioner       2                   66c3c1fc355f3       storage-provisioner
	281ad6489fa86       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   36 seconds ago       Running             kube-scheduler            3                   30d387489b797       kube-scheduler-functional-553844
	161c7c3a6dbc9       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   36 seconds ago       Running             etcd                      2                   1cf845fd98fb9       etcd-functional-553844
	c9f67c6f5bac2       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   36 seconds ago       Running             kube-controller-manager   3                   7ff3b4db4c3a1       kube-controller-manager-functional-553844
	40e128caccd10       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   37 seconds ago       Running             kube-apiserver            0                   4f30e9290df9f       kube-apiserver-functional-553844
	c9566037419fa       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   About a minute ago   Exited              kube-scheduler            2                   224c8313d2a4b       kube-scheduler-functional-553844
	7b4648b5566f0       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   About a minute ago   Exited              kube-controller-manager   2                   786e02c9f268f       kube-controller-manager-functional-553844
	a8a2455326fe0       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   About a minute ago   Exited              kube-apiserver            2                   f630bd7b31a99       kube-apiserver-functional-553844
	8addedc5b3b72       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   About a minute ago   Exited              coredns                   1                   5de6db3341a35       coredns-7c65d6cfc9-ntnpc
	11c7df787d684       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   About a minute ago   Exited              storage-provisioner       1                   f234b24619f34       storage-provisioner
	5ef8ee89662fc       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   About a minute ago   Exited              kube-proxy                1                   795a8e1b509b3       kube-proxy-8d5zp
	dda8bc32e425e       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   About a minute ago   Exited              etcd                      1                   b212b903ed97c       etcd-functional-553844
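
This table is the CRI view of the node after the apiserver restart; the ATTEMPT column matches the restartCount annotations in the ListContainers response above. A hedged one-liner to regenerate it on the node, again assuming crictl is present in the guest:

    minikube -p functional-553844 ssh -- sudo crictl ps -a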
	
	
	==> coredns [11b04a7db79238496dd7e10a87ed4228200fc6950497535ac76465928dced22a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:34318 - 64894 "HINFO IN 1843759644485451532.7278217676100105798. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.028340041s
	
	
	==> coredns [8addedc5b3b7251bda1891ec03e7b558acf7e48a679719cc2d0eb9af89051324] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:49303 - 36766 "HINFO IN 7792431763943854020.5109512536554140100. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.028767023s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
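
The second coredns log belongs to the exited container 8addedc5b3b72 from the container status table: it waited on the Kubernetes API while the apiserver restarted, then received SIGTERM. A hedged way to pull both generations of this log with kubectl, assuming the kubeconfig context matches the profile name and the exited container is still retained for `--previous`:

    kubectl --context functional-553844 -n kube-system logs coredns-7c65d6cfc9-ntnpc
    kubectl --context functional-553844 -n kube-system logs coredns-7c65d6cfc9-ntnpc --previous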
	
	
	==> describe nodes <==
	Name:               functional-553844
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-553844
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=functional-553844
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_34_24_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:34:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-553844
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:36:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:36:03 +0000   Mon, 16 Sep 2024 10:34:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:36:03 +0000   Mon, 16 Sep 2024 10:34:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:36:03 +0000   Mon, 16 Sep 2024 10:34:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:36:03 +0000   Mon, 16 Sep 2024 10:34:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.230
	  Hostname:    functional-553844
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 e02954b5bf404845959584edf15b4c70
	  System UUID:                e02954b5-bf40-4845-9595-84edf15b4c70
	  Boot ID:                    f32c4525-4b20-48f0-8997-63a4d85e0a22
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-ntnpc                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m10s
	  kube-system                 etcd-functional-553844                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m15s
	  kube-system                 kube-apiserver-functional-553844             250m (12%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-functional-553844    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m15s
	  kube-system                 kube-proxy-8d5zp                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 kube-scheduler-functional-553844             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m15s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m9s
	  kubernetes-dashboard        dashboard-metrics-scraper-c5db448b4-l9q92    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kubernetes-dashboard        kubernetes-dashboard-695b96c756-ss2vr        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 2m8s               kube-proxy       
	  Normal  Starting                 33s                kube-proxy       
	  Normal  Starting                 99s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m15s              kubelet          Node functional-553844 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  2m15s              kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    2m15s              kubelet          Node functional-553844 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m15s              kubelet          Node functional-553844 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m15s              kubelet          Starting kubelet.
	  Normal  NodeReady                2m14s              kubelet          Node functional-553844 status is now: NodeReady
	  Normal  RegisteredNode           2m11s              node-controller  Node functional-553844 event: Registered Node functional-553844 in Controller
	  Normal  NodeHasSufficientMemory  91s (x8 over 91s)  kubelet          Node functional-553844 status is now: NodeHasSufficientMemory
	  Normal  Starting                 91s                kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    91s (x8 over 91s)  kubelet          Node functional-553844 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     91s (x7 over 91s)  kubelet          Node functional-553844 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  91s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           84s                node-controller  Node functional-553844 event: Registered Node functional-553844 in Controller
	  Normal  Starting                 38s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  38s (x8 over 38s)  kubelet          Node functional-553844 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    38s (x8 over 38s)  kubelet          Node functional-553844 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     38s (x7 over 38s)  kubelet          Node functional-553844 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  38s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           31s                node-controller  Node functional-553844 event: Registered Node functional-553844 in Controller
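
The node description above (capacity, the two freshly scheduled kubernetes-dashboard pods, and the three kubelet restart event groups) is standard `kubectl describe node` output. A hedged sketch to regenerate it, assuming the kubeconfig context matches the profile name:

    kubectl --context functional-553844 describe node functional-553844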
	
	
	==> dmesg <==
	[ +16.520372] systemd-fstab-generator[2167]: Ignoring "noauto" option for root device
	[  +0.078621] kauditd_printk_skb: 60 callbacks suppressed
	[  +0.049083] systemd-fstab-generator[2179]: Ignoring "noauto" option for root device
	[  +0.190042] systemd-fstab-generator[2193]: Ignoring "noauto" option for root device
	[  +0.140022] systemd-fstab-generator[2205]: Ignoring "noauto" option for root device
	[  +0.285394] systemd-fstab-generator[2233]: Ignoring "noauto" option for root device
	[  +8.132216] systemd-fstab-generator[2349]: Ignoring "noauto" option for root device
	[  +0.075744] kauditd_printk_skb: 100 callbacks suppressed
	[Sep16 10:35] systemd-fstab-generator[3196]: Ignoring "noauto" option for root device
	[  +0.082290] kauditd_printk_skb: 96 callbacks suppressed
	[  +9.215887] kauditd_printk_skb: 32 callbacks suppressed
	[ +11.912179] systemd-fstab-generator[3473]: Ignoring "noauto" option for root device
	[ +21.316095] systemd-fstab-generator[4674]: Ignoring "noauto" option for root device
	[  +0.074178] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.066789] systemd-fstab-generator[4686]: Ignoring "noauto" option for root device
	[  +0.159163] systemd-fstab-generator[4700]: Ignoring "noauto" option for root device
	[  +0.128627] systemd-fstab-generator[4712]: Ignoring "noauto" option for root device
	[  +0.261837] systemd-fstab-generator[4740]: Ignoring "noauto" option for root device
	[  +7.709349] systemd-fstab-generator[4854]: Ignoring "noauto" option for root device
	[  +0.074913] kauditd_printk_skb: 100 callbacks suppressed
	[  +1.702685] systemd-fstab-generator[4977]: Ignoring "noauto" option for root device
	[Sep16 10:36] kauditd_printk_skb: 85 callbacks suppressed
	[  +5.334379] kauditd_printk_skb: 39 callbacks suppressed
	[  +9.139453] systemd-fstab-generator[5796]: Ignoring "noauto" option for root device
	[ +17.564540] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [161c7c3a6dbc9f43b2edc204c828f8b7b5673da629beae63db044d512333e2ff] <==
	{"level":"info","ts":"2024-09-16T10:36:01.399752Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:36:01.404521Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T10:36:01.412218Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"f4acae94ef986412","initial-advertise-peer-urls":["https://192.168.39.230:2380"],"listen-peer-urls":["https://192.168.39.230:2380"],"advertise-client-urls":["https://192.168.39.230:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.230:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T10:36:01.412273Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T10:36:01.412554Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.230:2380"}
	{"level":"info","ts":"2024-09-16T10:36:01.412584Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.230:2380"}
	{"level":"info","ts":"2024-09-16T10:36:01.415172Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T10:36:01.415237Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T10:36:01.415247Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T10:36:02.339007Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-16T10:36:02.339191Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-16T10:36:02.339252Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 received MsgPreVoteResp from f4acae94ef986412 at term 3"}
	{"level":"info","ts":"2024-09-16T10:36:02.339289Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 became candidate at term 4"}
	{"level":"info","ts":"2024-09-16T10:36:02.339313Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 received MsgVoteResp from f4acae94ef986412 at term 4"}
	{"level":"info","ts":"2024-09-16T10:36:02.339348Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 became leader at term 4"}
	{"level":"info","ts":"2024-09-16T10:36:02.339373Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f4acae94ef986412 elected leader f4acae94ef986412 at term 4"}
	{"level":"info","ts":"2024-09-16T10:36:02.345885Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"f4acae94ef986412","local-member-attributes":"{Name:functional-553844 ClientURLs:[https://192.168.39.230:2379]}","request-path":"/0/members/f4acae94ef986412/attributes","cluster-id":"b0aea99135fe63d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:36:02.345893Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:36:02.346138Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:36:02.346171Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:36:02.345925Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:36:02.347252Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:36:02.347252Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:36:02.348114Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T10:36:02.348659Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.230:2379"}
	
	
	==> etcd [dda8bc32e425e06c63a2ccb84bdd071dc515520e554e6e3a5a8b376e9a65c15a] <==
	{"level":"info","ts":"2024-09-16T10:34:56.955132Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-16T10:34:56.955163Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 received MsgPreVoteResp from f4acae94ef986412 at term 2"}
	{"level":"info","ts":"2024-09-16T10:34:56.955177Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 became candidate at term 3"}
	{"level":"info","ts":"2024-09-16T10:34:56.955183Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 received MsgVoteResp from f4acae94ef986412 at term 3"}
	{"level":"info","ts":"2024-09-16T10:34:56.955191Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 became leader at term 3"}
	{"level":"info","ts":"2024-09-16T10:34:56.955203Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f4acae94ef986412 elected leader f4acae94ef986412 at term 3"}
	{"level":"info","ts":"2024-09-16T10:34:56.959113Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"f4acae94ef986412","local-member-attributes":"{Name:functional-553844 ClientURLs:[https://192.168.39.230:2379]}","request-path":"/0/members/f4acae94ef986412/attributes","cluster-id":"b0aea99135fe63d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:34:56.959223Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:34:56.959352Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:34:56.959702Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:34:56.959718Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:34:56.960394Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:34:56.960508Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:34:56.961360Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T10:34:56.961615Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.230:2379"}
	{"level":"info","ts":"2024-09-16T10:35:43.615417Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-16T10:35:43.615457Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-553844","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.230:2380"],"advertise-client-urls":["https://192.168.39.230:2379"]}
	{"level":"warn","ts":"2024-09-16T10:35:43.615668Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:35:43.615755Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:35:43.715379Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.230:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:35:43.715441Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.230:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-16T10:35:43.716847Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"f4acae94ef986412","current-leader-member-id":"f4acae94ef986412"}
	{"level":"info","ts":"2024-09-16T10:35:43.720365Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.230:2380"}
	{"level":"info","ts":"2024-09-16T10:35:43.720475Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.230:2380"}
	{"level":"info","ts":"2024-09-16T10:35:43.720485Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-553844","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.230:2380"],"advertise-client-urls":["https://192.168.39.230:2379"]}
	
	
	==> kernel <==
	 10:36:38 up 2 min,  0 users,  load average: 1.54, 0.46, 0.16
	Linux functional-553844 5.10.207 #1 SMP Sun Sep 15 20:39:46 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [40e128caccd10da3c2b236dc916c1ea036d584c77141e639345656d122c4edf5] <==
	I0916 10:36:03.702192       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 10:36:03.702197       1 cache.go:39] Caches are synced for autoregister controller
	I0916 10:36:03.704489       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0916 10:36:03.704920       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0916 10:36:03.704998       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0916 10:36:03.705227       1 shared_informer.go:320] Caches are synced for configmaps
	I0916 10:36:03.705335       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0916 10:36:03.705520       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0916 10:36:03.709308       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0916 10:36:03.709342       1 policy_source.go:224] refreshing policies
	I0916 10:36:03.714744       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 10:36:03.724995       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0916 10:36:03.733976       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0916 10:36:04.601449       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0916 10:36:05.413610       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 10:36:05.430933       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 10:36:05.470801       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 10:36:05.494981       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 10:36:05.501594       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 10:36:07.306638       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 10:36:07.353251       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 10:36:35.784080       1 controller.go:615] quota admission added evaluator for: namespaces
	I0916 10:36:35.870442       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0916 10:36:36.214207       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.245.38"}
	I0916 10:36:36.266580       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.134.249"}
	
	
	==> kube-apiserver [a8a2455326fe0510166f5919a7ba808e002c9e7e6f6bac0073ebcdf2617d2c12] <==
	I0916 10:35:10.821388       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0916 10:35:10.821418       1 policy_source.go:224] refreshing policies
	I0916 10:35:10.848027       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0916 10:35:10.848431       1 aggregator.go:171] initial CRD sync complete...
	I0916 10:35:10.848456       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 10:35:10.848514       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 10:35:10.848521       1 cache.go:39] Caches are synced for autoregister controller
	I0916 10:35:10.891021       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0916 10:35:10.891238       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0916 10:35:10.893720       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0916 10:35:10.894833       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0916 10:35:10.894861       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0916 10:35:10.895008       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0916 10:35:10.912774       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0916 10:35:10.913152       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 10:35:10.920344       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 10:35:11.693112       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0916 10:35:11.908543       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.230]
	I0916 10:35:11.914737       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 10:35:12.098488       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 10:35:12.108702       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 10:35:12.144954       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 10:35:12.176210       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 10:35:12.183000       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 10:35:43.644862       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	
	
	==> kube-controller-manager [7b4648b5566f070359fd2b71b0a6be370ac0bf268f558065e56f6c081c9da147] <==
	I0916 10:35:14.120843       1 shared_informer.go:320] Caches are synced for HPA
	I0916 10:35:14.121152       1 shared_informer.go:320] Caches are synced for TTL
	I0916 10:35:14.122526       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0916 10:35:14.122616       1 shared_informer.go:320] Caches are synced for ephemeral
	I0916 10:35:14.122690       1 shared_informer.go:320] Caches are synced for taint
	I0916 10:35:14.122803       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0916 10:35:14.123280       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-553844"
	I0916 10:35:14.124941       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0916 10:35:14.144150       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0916 10:35:14.146147       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0916 10:35:14.148698       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0916 10:35:14.153801       1 shared_informer.go:320] Caches are synced for PV protection
	I0916 10:35:14.209749       1 shared_informer.go:320] Caches are synced for persistent volume
	I0916 10:35:14.242927       1 shared_informer.go:320] Caches are synced for attach detach
	I0916 10:35:14.298281       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:35:14.321144       1 shared_informer.go:320] Caches are synced for stateful set
	I0916 10:35:14.321212       1 shared_informer.go:320] Caches are synced for daemon sets
	I0916 10:35:14.326094       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:35:14.534087       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="385.245988ms"
	I0916 10:35:14.534305       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="82.383µs"
	I0916 10:35:14.753631       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:35:14.816601       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:35:14.816647       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0916 10:35:17.621436       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="88.997µs"
	I0916 10:35:41.634518       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-553844"
	
	
	==> kube-controller-manager [c9f67c6f5bac2b4d892ff7cea8997f9b3637e462d67156858bf6b4bc4872dbb8] <==
	I0916 10:36:07.687001       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:36:07.687093       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0916 10:36:09.540766       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="16.500721ms"
	I0916 10:36:09.541443       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="53.335µs"
	I0916 10:36:35.951981       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="69.871865ms"
	E0916 10:36:35.952211       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:36:35.978927       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="25.578864ms"
	E0916 10:36:35.978957       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:36:35.994958       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="63.538461ms"
	E0916 10:36:35.994986       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:36:36.001891       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="21.771682ms"
	E0916 10:36:36.001938       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:36:36.003346       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="7.077188ms"
	E0916 10:36:36.003375       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:36:36.028226       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="25.154625ms"
	E0916 10:36:36.028255       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:36:36.028309       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="21.00784ms"
	E0916 10:36:36.028318       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:36:36.077252       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="47.436681ms"
	I0916 10:36:36.085703       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="55.571199ms"
	I0916 10:36:36.109418       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="23.501919ms"
	I0916 10:36:36.109989       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="54.429µs"
	I0916 10:36:36.132530       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="133.25µs"
	I0916 10:36:36.174023       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="96.044739ms"
	I0916 10:36:36.178490       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="347.241µs"
	
	
	==> kube-proxy [5ef8ee89662fcae36d3705fd7eef6e5e8e8ed2765a0c899ea2692bc5e55770bb] <==
	W0916 10:34:58.431668       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:34:58.431713       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:34:58.431778       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:34:58.431838       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:34:59.284989       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:34:59.285188       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:34:59.332364       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-553844&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:34:59.332464       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-553844&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:34:59.470296       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:34:59.470425       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:35:01.798494       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-553844&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:35:01.798626       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-553844&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:35:01.949792       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:35:01.949869       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:35:02.221487       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:35:02.221565       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:35:06.652928       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:35:06.652990       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:35:07.272641       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-553844&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:35:07.272703       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-553844&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:35:07.363931       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:35:07.363993       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	I0916 10:35:14.930499       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:35:15.331242       1 shared_informer.go:320] Caches are synced for node config
	I0916 10:35:16.430835       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [f6cef4575c2c36cab82d9941cc254a3c3977cd977294bd11ce7e392d1ed1ba87] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0916 10:36:05.087142       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0916 10:36:05.094687       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.230"]
	E0916 10:36:05.094768       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:36:05.128908       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0916 10:36:05.128955       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0916 10:36:05.128978       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:36:05.131583       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:36:05.131810       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:36:05.131834       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:36:05.133708       1 config.go:199] "Starting service config controller"
	I0916 10:36:05.133764       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:36:05.133809       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:36:05.133827       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:36:05.134323       1 config.go:328] "Starting node config controller"
	I0916 10:36:05.134353       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:36:05.234169       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:36:05.234184       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:36:05.234413       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [281ad6489fa866a8728b85bb6026eaba03f414f1541e54e8bb20b881d74c5986] <==
	I0916 10:36:01.918697       1 serving.go:386] Generated self-signed cert in-memory
	W0916 10:36:03.635711       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0916 10:36:03.637927       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0916 10:36:03.638183       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 10:36:03.638223       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 10:36:03.699405       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0916 10:36:03.699443       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:36:03.708723       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0916 10:36:03.708883       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0916 10:36:03.708916       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 10:36:03.725362       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0916 10:36:03.809763       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [c9566037419faa5d15b1a25d437bab469eb973d38022593dd1fcea875b372030] <==
	I0916 10:35:09.773229       1 serving.go:386] Generated self-signed cert in-memory
	W0916 10:35:10.768440       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0916 10:35:10.768857       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0916 10:35:10.768917       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 10:35:10.768943       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 10:35:10.817479       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0916 10:35:10.817581       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:35:10.824338       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0916 10:35:10.824417       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 10:35:10.825100       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0916 10:35:10.825460       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0916 10:35:10.925324       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 10:35:43.621150       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0916 10:35:43.621340       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0916 10:35:43.621677       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0916 10:35:43.622018       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 16 10:36:04 functional-553844 kubelet[4984]: I0916 10:36:04.085150    4984 apiserver.go:52] "Watching apiserver"
	Sep 16 10:36:04 functional-553844 kubelet[4984]: I0916 10:36:04.091164    4984 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-apiserver-functional-553844" podUID="7f3b5ce9-dbc7-45d3-8a46-1d51af0f5cac"
	Sep 16 10:36:04 functional-553844 kubelet[4984]: I0916 10:36:04.105623    4984 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 16 10:36:04 functional-553844 kubelet[4984]: I0916 10:36:04.124250    4984 kubelet.go:1900] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-functional-553844"
	Sep 16 10:36:04 functional-553844 kubelet[4984]: I0916 10:36:04.151243    4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7709f753-5ea7-43c8-9573-107c8507e92b-lib-modules\") pod \"kube-proxy-8d5zp\" (UID: \"7709f753-5ea7-43c8-9573-107c8507e92b\") " pod="kube-system/kube-proxy-8d5zp"
	Sep 16 10:36:04 functional-553844 kubelet[4984]: I0916 10:36:04.151300    4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f41228d6-b7ff-4315-b9c5-05b5cc4d0acd-tmp\") pod \"storage-provisioner\" (UID: \"f41228d6-b7ff-4315-b9c5-05b5cc4d0acd\") " pod="kube-system/storage-provisioner"
	Sep 16 10:36:04 functional-553844 kubelet[4984]: I0916 10:36:04.151318    4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7709f753-5ea7-43c8-9573-107c8507e92b-xtables-lock\") pod \"kube-proxy-8d5zp\" (UID: \"7709f753-5ea7-43c8-9573-107c8507e92b\") " pod="kube-system/kube-proxy-8d5zp"
	Sep 16 10:36:04 functional-553844 kubelet[4984]: I0916 10:36:04.189195    4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0cf351cdb4e05fb19a16881fc8f9a8bc" path="/var/lib/kubelet/pods/0cf351cdb4e05fb19a16881fc8f9a8bc/volumes"
	Sep 16 10:36:04 functional-553844 kubelet[4984]: I0916 10:36:04.191552    4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-functional-553844" podStartSLOduration=0.19153653 podStartE2EDuration="191.53653ms" podCreationTimestamp="2024-09-16 10:36:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 10:36:04.191347015 +0000 UTC m=+4.208440213" watchObservedRunningTime="2024-09-16 10:36:04.19153653 +0000 UTC m=+4.208629709"
	Sep 16 10:36:09 functional-553844 kubelet[4984]: I0916 10:36:09.508237    4984 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 16 10:36:10 functional-553844 kubelet[4984]: E0916 10:36:10.177303    4984 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482970176966980,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157170,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:36:10 functional-553844 kubelet[4984]: E0916 10:36:10.177327    4984 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482970176966980,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157170,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:36:20 functional-553844 kubelet[4984]: E0916 10:36:20.178991    4984 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482980178689452,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157170,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:36:20 functional-553844 kubelet[4984]: E0916 10:36:20.179091    4984 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482980178689452,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157170,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:36:30 functional-553844 kubelet[4984]: E0916 10:36:30.181981    4984 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482990181444413,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157170,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:36:30 functional-553844 kubelet[4984]: E0916 10:36:30.182008    4984 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482990181444413,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157170,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:36:36 functional-553844 kubelet[4984]: E0916 10:36:36.073326    4984 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0cf351cdb4e05fb19a16881fc8f9a8bc" containerName="kube-apiserver"
	Sep 16 10:36:36 functional-553844 kubelet[4984]: E0916 10:36:36.073353    4984 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0cf351cdb4e05fb19a16881fc8f9a8bc" containerName="kube-apiserver"
	Sep 16 10:36:36 functional-553844 kubelet[4984]: I0916 10:36:36.073377    4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cf351cdb4e05fb19a16881fc8f9a8bc" containerName="kube-apiserver"
	Sep 16 10:36:36 functional-553844 kubelet[4984]: I0916 10:36:36.073385    4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cf351cdb4e05fb19a16881fc8f9a8bc" containerName="kube-apiserver"
	Sep 16 10:36:36 functional-553844 kubelet[4984]: I0916 10:36:36.177299    4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/7721211d-0edc-4c4d-bb09-a7f6dcba381b-tmp-volume\") pod \"dashboard-metrics-scraper-c5db448b4-l9q92\" (UID: \"7721211d-0edc-4c4d-bb09-a7f6dcba381b\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-l9q92"
	Sep 16 10:36:36 functional-553844 kubelet[4984]: I0916 10:36:36.177345    4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sswqm\" (UniqueName: \"kubernetes.io/projected/7721211d-0edc-4c4d-bb09-a7f6dcba381b-kube-api-access-sswqm\") pod \"dashboard-metrics-scraper-c5db448b4-l9q92\" (UID: \"7721211d-0edc-4c4d-bb09-a7f6dcba381b\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-l9q92"
	Sep 16 10:36:36 functional-553844 kubelet[4984]: I0916 10:36:36.177366    4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/9734fcc0-f3e2-4044-b5f0-5cbe19fdf261-tmp-volume\") pod \"kubernetes-dashboard-695b96c756-ss2vr\" (UID: \"9734fcc0-f3e2-4044-b5f0-5cbe19fdf261\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-ss2vr"
	Sep 16 10:36:36 functional-553844 kubelet[4984]: I0916 10:36:36.177386    4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqnm8\" (UniqueName: \"kubernetes.io/projected/9734fcc0-f3e2-4044-b5f0-5cbe19fdf261-kube-api-access-xqnm8\") pod \"kubernetes-dashboard-695b96c756-ss2vr\" (UID: \"9734fcc0-f3e2-4044-b5f0-5cbe19fdf261\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-ss2vr"
	Sep 16 10:36:36 functional-553844 kubelet[4984]: I0916 10:36:36.299753    4984 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	
	
	==> storage-provisioner [11c7df787d684e9cc5ca8f6cf633ac9055d7ef72c0a76d54b25fd4a3d62f7b02] <==
	I0916 10:34:56.077531       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 10:34:58.308783       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 10:34:58.325776       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	E0916 10:34:59.385726       1 leaderelection.go:361] Failed to update lock: Put "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0916 10:35:02.837859       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0916 10:35:07.096688       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	I0916 10:35:10.935925       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 10:35:10.936824       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-553844_6476f869-e006-4732-b59f-a625eeed2789!
	I0916 10:35:10.936273       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e694f165-5b09-46c9-81ea-a2730989eaff", APIVersion:"v1", ResourceVersion:"401", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-553844_6476f869-e006-4732-b59f-a625eeed2789 became leader
	I0916 10:35:11.037327       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-553844_6476f869-e006-4732-b59f-a625eeed2789!
	
	
	==> storage-provisioner [410bd23d1eb3a430c5d377563c819cc6316d348e66c37cda9c98f030584522d1] <==
	I0916 10:36:04.804572       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 10:36:04.881510       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 10:36:04.902536       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 10:36:22.325954       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 10:36:22.326349       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-553844_3b4a5fb5-0feb-4787-a4fd-6f23adb25700!
	I0916 10:36:22.327877       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e694f165-5b09-46c9-81ea-a2730989eaff", APIVersion:"v1", ResourceVersion:"583", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-553844_3b4a5fb5-0feb-4787-a4fd-6f23adb25700 became leader
	I0916 10:36:22.428646       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-553844_3b4a5fb5-0feb-4787-a4fd-6f23adb25700!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-553844 -n functional-553844
helpers_test.go:261: (dbg) Run:  kubectl --context functional-553844 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context functional-553844 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (485.788µs)
helpers_test.go:263: kubectl --context functional-553844 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestFunctional/parallel/DashboardCmd (5.49s)
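The failure repeated throughout this run is "fork/exec /usr/local/bin/kubectl: exec format error": the kernel refuses to execute the kubectl binary at that path, which usually means the file was built for a different architecture than the host, or is truncated or not a valid ELF executable at all. A minimal diagnostic sketch in Go, separate from the test suite; the path is the one reported in the logs above, everything else is illustrative only:

// check_kubectl.go: compare the ELF machine type of the reported kubectl
// binary against the architecture this program runs on. Hypothetical helper,
// not part of minikube or its tests.
package main

import (
	"debug/elf"
	"fmt"
	"runtime"
)

func main() {
	const path = "/usr/local/bin/kubectl" // path reported in the failures above

	f, err := elf.Open(path)
	if err != nil {
		// A truncated download or a non-ELF file also yields "exec format error"
		// when fork/exec is attempted.
		fmt.Printf("%s is not a readable ELF binary: %v\n", path, err)
		return
	}
	defer f.Close()

	fmt.Printf("binary machine: %v, host GOARCH: %s\n", f.Machine, runtime.GOARCH)
	if runtime.GOARCH == "amd64" && f.Machine != elf.EM_X86_64 {
		fmt.Println("architecture mismatch: this kubectl cannot run on an amd64 host")
	}
}

On the amd64 agent used here (the Last Start log below records "Binary: Built with gc go1.23.0 for linux/amd64"), a kubectl built for any other architecture would fail in exactly this way.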

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (2.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-553844 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1629: (dbg) Non-zero exit: kubectl --context functional-553844 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8: fork/exec /usr/local/bin/kubectl: exec format error (445.855µs)
functional_test.go:1633: failed to create hello-node deployment with this command "kubectl --context functional-553844 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8": fork/exec /usr/local/bin/kubectl: exec format error.
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context functional-553844 describe po hello-node-connect
functional_test.go:1602: (dbg) Non-zero exit: kubectl --context functional-553844 describe po hello-node-connect: fork/exec /usr/local/bin/kubectl: exec format error (421.497µs)
functional_test.go:1604: "kubectl --context functional-553844 describe po hello-node-connect" failed: fork/exec /usr/local/bin/kubectl: exec format error
functional_test.go:1606: hello-node pod describe:
functional_test.go:1608: (dbg) Run:  kubectl --context functional-553844 logs -l app=hello-node-connect
functional_test.go:1608: (dbg) Non-zero exit: kubectl --context functional-553844 logs -l app=hello-node-connect: fork/exec /usr/local/bin/kubectl: exec format error (320.055µs)
functional_test.go:1610: "kubectl --context functional-553844 logs -l app=hello-node-connect" failed: fork/exec /usr/local/bin/kubectl: exec format error
functional_test.go:1612: hello-node logs:
functional_test.go:1614: (dbg) Run:  kubectl --context functional-553844 describe svc hello-node-connect
functional_test.go:1614: (dbg) Non-zero exit: kubectl --context functional-553844 describe svc hello-node-connect: fork/exec /usr/local/bin/kubectl: exec format error (454.464µs)
functional_test.go:1616: "kubectl --context functional-553844 describe svc hello-node-connect" failed: fork/exec /usr/local/bin/kubectl: exec format error
functional_test.go:1618: hello-node svc describe:
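For context, the sequence this test drives once kubectl is runnable is: create the hello-node-connect deployment, expose it, and resolve a reachable URL through minikube. A rough sketch of that flow from Go; the deployment name and image are taken from the failing command above, while the expose port, service type, and the minikube call are assumptions about the test's intent rather than the actual functional_test.go code:

// servicecmd_connect_sketch.go: hypothetical reconstruction of the intended flow.
package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and echoes its combined output, mirroring the
// "(dbg) Run" style used in the report above.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s", name, args, out)
	return err
}

func main() {
	ctx := "functional-553844"
	if err := run("kubectl", "--context", ctx, "create", "deployment",
		"hello-node-connect", "--image=registry.k8s.io/echoserver:1.8"); err != nil {
		return // the step that failed with "exec format error" in this run
	}
	// Assumed follow-up steps; --port=8080 is an assumption, not read from the test.
	_ = run("kubectl", "--context", ctx, "expose", "deployment", "hello-node-connect",
		"--type=NodePort", "--port=8080")
	_ = run("minikube", "-p", ctx, "service", "hello-node-connect", "--url")
}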
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-553844 -n functional-553844
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-553844 logs -n 25: (1.870536166s)
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                Args                                |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| config  | functional-553844 config unset                                     | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|         | cpus                                                               |                   |         |         |                     |                     |
	| cp      | functional-553844 cp                                               | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|         | testdata/cp-test.txt                                               |                   |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                           |                   |         |         |                     |                     |
	| config  | functional-553844 config get                                       | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC |                     |
	|         | cpus                                                               |                   |         |         |                     |                     |
	| config  | functional-553844 config set                                       | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|         | cpus 2                                                             |                   |         |         |                     |                     |
	| config  | functional-553844 config get                                       | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|         | cpus                                                               |                   |         |         |                     |                     |
	| config  | functional-553844 config unset                                     | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|         | cpus                                                               |                   |         |         |                     |                     |
	| ssh     | functional-553844 ssh -n                                           | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|         | functional-553844 sudo cat                                         |                   |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                           |                   |         |         |                     |                     |
	| config  | functional-553844 config get                                       | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC |                     |
	|         | cpus                                                               |                   |         |         |                     |                     |
	| service | functional-553844 service list                                     | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|         | -o json                                                            |                   |         |         |                     |                     |
	| cp      | functional-553844 cp                                               | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|         | functional-553844:/home/docker/cp-test.txt                         |                   |         |         |                     |                     |
	|         | /tmp/TestFunctionalparallelCpCmd1484377852/001/cp-test.txt         |                   |         |         |                     |                     |
	| service | functional-553844 service                                          | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC |                     |
	|         | --namespace=default --https                                        |                   |         |         |                     |                     |
	|         | --url hello-node                                                   |                   |         |         |                     |                     |
	| ssh     | functional-553844 ssh -n                                           | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|         | functional-553844 sudo cat                                         |                   |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                           |                   |         |         |                     |                     |
	| service | functional-553844                                                  | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC |                     |
	|         | service hello-node --url                                           |                   |         |         |                     |                     |
	|         | --format={{.IP}}                                                   |                   |         |         |                     |                     |
	| ssh     | functional-553844 ssh findmnt                                      | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC |                     |
	|         | -T /mount-9p | grep 9p                                             |                   |         |         |                     |                     |
	| cp      | functional-553844 cp                                               | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|         | testdata/cp-test.txt                                               |                   |         |         |                     |                     |
	|         | /tmp/does/not/exist/cp-test.txt                                    |                   |         |         |                     |                     |
	| mount   | -p functional-553844                                               | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC |                     |
	|         | /tmp/TestFunctionalparallelMountCmdany-port525922369/001:/mount-9p |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                             |                   |         |         |                     |                     |
	| service | functional-553844 service                                          | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC |                     |
	|         | hello-node --url                                                   |                   |         |         |                     |                     |
	| ssh     | functional-553844 ssh -n                                           | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|         | functional-553844 sudo cat                                         |                   |         |         |                     |                     |
	|         | /tmp/does/not/exist/cp-test.txt                                    |                   |         |         |                     |                     |
	| ssh     | functional-553844 ssh echo                                         | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|         | hello                                                              |                   |         |         |                     |                     |
	| ssh     | functional-553844 ssh cat                                          | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|         | /etc/hostname                                                      |                   |         |         |                     |                     |
	| ssh     | functional-553844 ssh findmnt                                      | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|         | -T /mount-9p | grep 9p                                             |                   |         |         |                     |                     |
	| ssh     | functional-553844 ssh -- ls                                        | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|         | -la /mount-9p                                                      |                   |         |         |                     |                     |
	| addons  | functional-553844 addons list                                      | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	| addons  | functional-553844 addons list                                      | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|         | -o json                                                            |                   |         |         |                     |                     |
	| ssh     | functional-553844 ssh cat                                          | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|         | /mount-9p/test-1726482987895365534                                 |                   |         |         |                     |                     |
	|---------|--------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
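	The service URL lookups in the table above can be re-run against the same profile; a minimal sketch, assuming the functional-553844 profile and the hello-node service from this run are still present:

	  minikube -p functional-553844 service hello-node --url
	  minikube -p functional-553844 service --namespace=default --https --url hello-node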
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:35:42
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:35:42.602736   18525 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:35:42.602961   18525 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:35:42.602964   18525 out.go:358] Setting ErrFile to fd 2...
	I0916 10:35:42.602967   18525 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:35:42.603134   18525 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3851/.minikube/bin
	I0916 10:35:42.603625   18525 out.go:352] Setting JSON to false
	I0916 10:35:42.604487   18525 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1093,"bootTime":1726481850,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:35:42.604573   18525 start.go:139] virtualization: kvm guest
	I0916 10:35:42.606812   18525 out.go:177] * [functional-553844] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:35:42.608453   18525 notify.go:220] Checking for updates...
	I0916 10:35:42.608460   18525 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:35:42.609720   18525 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:35:42.610980   18525 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3851/kubeconfig
	I0916 10:35:42.612026   18525 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 10:35:42.613154   18525 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:35:42.614469   18525 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:35:42.616082   18525 config.go:182] Loaded profile config "functional-553844": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:35:42.616181   18525 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:35:42.616564   18525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:35:42.616592   18525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:35:42.631459   18525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37391
	I0916 10:35:42.631931   18525 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:35:42.632471   18525 main.go:141] libmachine: Using API Version  1
	I0916 10:35:42.632493   18525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:35:42.632799   18525 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:35:42.632949   18525 main.go:141] libmachine: (functional-553844) Calling .DriverName
	I0916 10:35:42.666224   18525 out.go:177] * Using the kvm2 driver based on existing profile
	I0916 10:35:42.667731   18525 start.go:297] selected driver: kvm2
	I0916 10:35:42.667739   18525 start.go:901] validating driver "kvm2" against &{Name:functional-553844 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-553844 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:35:42.667845   18525 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:35:42.668158   18525 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:35:42.668237   18525 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19651-3851/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0916 10:35:42.683577   18525 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0916 10:35:42.684216   18525 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:35:42.684245   18525 cni.go:84] Creating CNI manager for ""
	I0916 10:35:42.684291   18525 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0916 10:35:42.684354   18525 start.go:340] cluster config:
	{Name:functional-553844 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-553844 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:35:42.684461   18525 iso.go:125] acquiring lock: {Name:mk8165b793e44378487d96b0de120a258f46e187 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:35:42.686264   18525 out.go:177] * Starting "functional-553844" primary control-plane node in "functional-553844" cluster
	I0916 10:35:42.687758   18525 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:35:42.687806   18525 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0916 10:35:42.687813   18525 cache.go:56] Caching tarball of preloaded images
	I0916 10:35:42.687893   18525 preload.go:172] Found /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 10:35:42.687899   18525 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
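	The preload tarball being verified here lives in the Jenkins host's minikube cache; a minimal sketch of inspecting it manually, using the path from the lines above:

	  ls -lh /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4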
	I0916 10:35:42.687986   18525 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/config.json ...
	I0916 10:35:42.688155   18525 start.go:360] acquireMachinesLock for functional-553844: {Name:mk413037138532b69f77e412c3960796c09316f1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:35:42.688216   18525 start.go:364] duration metric: took 49.309µs to acquireMachinesLock for "functional-553844"
	I0916 10:35:42.688231   18525 start.go:96] Skipping create...Using existing machine configuration
	I0916 10:35:42.688235   18525 fix.go:54] fixHost starting: 
	I0916 10:35:42.688466   18525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:35:42.688492   18525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:35:42.703053   18525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46573
	I0916 10:35:42.703530   18525 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:35:42.704035   18525 main.go:141] libmachine: Using API Version  1
	I0916 10:35:42.704064   18525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:35:42.704371   18525 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:35:42.704542   18525 main.go:141] libmachine: (functional-553844) Calling .DriverName
	I0916 10:35:42.704677   18525 main.go:141] libmachine: (functional-553844) Calling .GetState
	I0916 10:35:42.706051   18525 fix.go:112] recreateIfNeeded on functional-553844: state=Running err=<nil>
	W0916 10:35:42.706062   18525 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 10:35:42.707728   18525 out.go:177] * Updating the running kvm2 "functional-553844" VM ...
	I0916 10:35:42.708861   18525 machine.go:93] provisionDockerMachine start ...
	I0916 10:35:42.708874   18525 main.go:141] libmachine: (functional-553844) Calling .DriverName
	I0916 10:35:42.709063   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHHostname
	I0916 10:35:42.711297   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:42.711619   18525 main.go:141] libmachine: (functional-553844) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3a:6f", ip: ""} in network mk-functional-553844: {Iface:virbr1 ExpiryTime:2024-09-16 11:33:58 +0000 UTC Type:0 Mac:52:54:00:9d:3a:6f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-553844 Clientid:01:52:54:00:9d:3a:6f}
	I0916 10:35:42.711641   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined IP address 192.168.39.230 and MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:42.711812   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHPort
	I0916 10:35:42.711970   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
	I0916 10:35:42.712095   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
	I0916 10:35:42.712241   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHUsername
	I0916 10:35:42.712367   18525 main.go:141] libmachine: Using SSH client type: native
	I0916 10:35:42.712549   18525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0916 10:35:42.712554   18525 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 10:35:42.822279   18525 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-553844
	
	I0916 10:35:42.822297   18525 main.go:141] libmachine: (functional-553844) Calling .GetMachineName
	I0916 10:35:42.822514   18525 buildroot.go:166] provisioning hostname "functional-553844"
	I0916 10:35:42.822541   18525 main.go:141] libmachine: (functional-553844) Calling .GetMachineName
	I0916 10:35:42.822705   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHHostname
	I0916 10:35:42.825390   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:42.825774   18525 main.go:141] libmachine: (functional-553844) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3a:6f", ip: ""} in network mk-functional-553844: {Iface:virbr1 ExpiryTime:2024-09-16 11:33:58 +0000 UTC Type:0 Mac:52:54:00:9d:3a:6f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-553844 Clientid:01:52:54:00:9d:3a:6f}
	I0916 10:35:42.825794   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined IP address 192.168.39.230 and MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:42.825955   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHPort
	I0916 10:35:42.826114   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
	I0916 10:35:42.826244   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
	I0916 10:35:42.826444   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHUsername
	I0916 10:35:42.826605   18525 main.go:141] libmachine: Using SSH client type: native
	I0916 10:35:42.826756   18525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0916 10:35:42.826762   18525 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-553844 && echo "functional-553844" | sudo tee /etc/hostname
	I0916 10:35:42.947055   18525 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-553844
	
	I0916 10:35:42.947086   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHHostname
	I0916 10:35:42.949554   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:42.949872   18525 main.go:141] libmachine: (functional-553844) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3a:6f", ip: ""} in network mk-functional-553844: {Iface:virbr1 ExpiryTime:2024-09-16 11:33:58 +0000 UTC Type:0 Mac:52:54:00:9d:3a:6f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-553844 Clientid:01:52:54:00:9d:3a:6f}
	I0916 10:35:42.949895   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined IP address 192.168.39.230 and MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:42.949977   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHPort
	I0916 10:35:42.950263   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
	I0916 10:35:42.950397   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
	I0916 10:35:42.950516   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHUsername
	I0916 10:35:42.950660   18525 main.go:141] libmachine: Using SSH client type: native
	I0916 10:35:42.950825   18525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0916 10:35:42.950834   18525 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-553844' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-553844/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-553844' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:35:43.057989   18525 main.go:141] libmachine: SSH cmd err, output: <nil>: 
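	The script above only appends a 127.0.1.1 entry when the hostname is missing; a quick check of the result on the same VM, as a sketch using the minikube CLI:

	  minikube -p functional-553844 ssh -- grep functional-553844 /etc/hosts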
	I0916 10:35:43.058009   18525 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3851/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3851/.minikube}
	I0916 10:35:43.058034   18525 buildroot.go:174] setting up certificates
	I0916 10:35:43.058041   18525 provision.go:84] configureAuth start
	I0916 10:35:43.058048   18525 main.go:141] libmachine: (functional-553844) Calling .GetMachineName
	I0916 10:35:43.058310   18525 main.go:141] libmachine: (functional-553844) Calling .GetIP
	I0916 10:35:43.060530   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:43.060834   18525 main.go:141] libmachine: (functional-553844) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3a:6f", ip: ""} in network mk-functional-553844: {Iface:virbr1 ExpiryTime:2024-09-16 11:33:58 +0000 UTC Type:0 Mac:52:54:00:9d:3a:6f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-553844 Clientid:01:52:54:00:9d:3a:6f}
	I0916 10:35:43.060857   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined IP address 192.168.39.230 and MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:43.060950   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHHostname
	I0916 10:35:43.063120   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:43.063409   18525 main.go:141] libmachine: (functional-553844) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3a:6f", ip: ""} in network mk-functional-553844: {Iface:virbr1 ExpiryTime:2024-09-16 11:33:58 +0000 UTC Type:0 Mac:52:54:00:9d:3a:6f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-553844 Clientid:01:52:54:00:9d:3a:6f}
	I0916 10:35:43.063432   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined IP address 192.168.39.230 and MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:43.063485   18525 provision.go:143] copyHostCerts
	I0916 10:35:43.063549   18525 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem, removing ...
	I0916 10:35:43.063555   18525 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem
	I0916 10:35:43.063615   18525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem (1082 bytes)
	I0916 10:35:43.063703   18525 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem, removing ...
	I0916 10:35:43.063707   18525 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem
	I0916 10:35:43.063728   18525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem (1123 bytes)
	I0916 10:35:43.063790   18525 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem, removing ...
	I0916 10:35:43.063793   18525 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem
	I0916 10:35:43.063811   18525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem (1679 bytes)
	I0916 10:35:43.063906   18525 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem org=jenkins.functional-553844 san=[127.0.0.1 192.168.39.230 functional-553844 localhost minikube]
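	The SAN list logged here is baked into the generated server certificate; one way to inspect it afterwards, as a sketch using the server.pem path from this log:

	  openssl x509 -in /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem -noout -text | grep -A1 'Subject Alternative Name'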
	I0916 10:35:43.318125   18525 provision.go:177] copyRemoteCerts
	I0916 10:35:43.318179   18525 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:35:43.318199   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHHostname
	I0916 10:35:43.320675   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:43.320954   18525 main.go:141] libmachine: (functional-553844) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3a:6f", ip: ""} in network mk-functional-553844: {Iface:virbr1 ExpiryTime:2024-09-16 11:33:58 +0000 UTC Type:0 Mac:52:54:00:9d:3a:6f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-553844 Clientid:01:52:54:00:9d:3a:6f}
	I0916 10:35:43.320979   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined IP address 192.168.39.230 and MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:43.321086   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHPort
	I0916 10:35:43.321278   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
	I0916 10:35:43.321405   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHUsername
	I0916 10:35:43.321526   18525 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/functional-553844/id_rsa Username:docker}
	I0916 10:35:43.408363   18525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0916 10:35:43.433926   18525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 10:35:43.459098   18525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 10:35:43.483570   18525 provision.go:87] duration metric: took 425.518643ms to configureAuth
	I0916 10:35:43.483586   18525 buildroot.go:189] setting minikube options for container-runtime
	I0916 10:35:43.483776   18525 config.go:182] Loaded profile config "functional-553844": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:35:43.483836   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHHostname
	I0916 10:35:43.486393   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:43.486676   18525 main.go:141] libmachine: (functional-553844) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3a:6f", ip: ""} in network mk-functional-553844: {Iface:virbr1 ExpiryTime:2024-09-16 11:33:58 +0000 UTC Type:0 Mac:52:54:00:9d:3a:6f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-553844 Clientid:01:52:54:00:9d:3a:6f}
	I0916 10:35:43.486698   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined IP address 192.168.39.230 and MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:43.486844   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHPort
	I0916 10:35:43.487010   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
	I0916 10:35:43.487138   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
	I0916 10:35:43.487238   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHUsername
	I0916 10:35:43.487355   18525 main.go:141] libmachine: Using SSH client type: native
	I0916 10:35:43.487542   18525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0916 10:35:43.487551   18525 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 10:35:49.077005   18525 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 10:35:49.077018   18525 machine.go:96] duration metric: took 6.368149184s to provisionDockerMachine
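	The insecure-registry option written to /etc/sysconfig/crio.minikube just above can be confirmed on the VM; a minimal sketch:

	  minikube -p functional-553844 ssh -- cat /etc/sysconfig/crio.minikube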
	I0916 10:35:49.077029   18525 start.go:293] postStartSetup for "functional-553844" (driver="kvm2")
	I0916 10:35:49.077041   18525 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:35:49.077060   18525 main.go:141] libmachine: (functional-553844) Calling .DriverName
	I0916 10:35:49.077417   18525 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:35:49.077437   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHHostname
	I0916 10:35:49.080182   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:49.080466   18525 main.go:141] libmachine: (functional-553844) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3a:6f", ip: ""} in network mk-functional-553844: {Iface:virbr1 ExpiryTime:2024-09-16 11:33:58 +0000 UTC Type:0 Mac:52:54:00:9d:3a:6f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-553844 Clientid:01:52:54:00:9d:3a:6f}
	I0916 10:35:49.080480   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined IP address 192.168.39.230 and MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:49.080612   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHPort
	I0916 10:35:49.080806   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
	I0916 10:35:49.080943   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHUsername
	I0916 10:35:49.081100   18525 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/functional-553844/id_rsa Username:docker}
	I0916 10:35:49.164278   18525 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:35:49.168341   18525 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 10:35:49.168356   18525 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3851/.minikube/addons for local assets ...
	I0916 10:35:49.168457   18525 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3851/.minikube/files for local assets ...
	I0916 10:35:49.168550   18525 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem -> 112032.pem in /etc/ssl/certs
	I0916 10:35:49.168630   18525 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/test/nested/copy/11203/hosts -> hosts in /etc/test/nested/copy/11203
	I0916 10:35:49.168671   18525 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/11203
	I0916 10:35:49.178688   18525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem --> /etc/ssl/certs/112032.pem (1708 bytes)
	I0916 10:35:49.203299   18525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/test/nested/copy/11203/hosts --> /etc/test/nested/copy/11203/hosts (40 bytes)
	I0916 10:35:49.227238   18525 start.go:296] duration metric: took 150.19355ms for postStartSetup
	I0916 10:35:49.227270   18525 fix.go:56] duration metric: took 6.5390335s for fixHost
	I0916 10:35:49.227292   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHHostname
	I0916 10:35:49.229721   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:49.230084   18525 main.go:141] libmachine: (functional-553844) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3a:6f", ip: ""} in network mk-functional-553844: {Iface:virbr1 ExpiryTime:2024-09-16 11:33:58 +0000 UTC Type:0 Mac:52:54:00:9d:3a:6f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-553844 Clientid:01:52:54:00:9d:3a:6f}
	I0916 10:35:49.230108   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined IP address 192.168.39.230 and MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:49.230254   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHPort
	I0916 10:35:49.230400   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
	I0916 10:35:49.230525   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
	I0916 10:35:49.230675   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHUsername
	I0916 10:35:49.230824   18525 main.go:141] libmachine: Using SSH client type: native
	I0916 10:35:49.230971   18525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0916 10:35:49.230975   18525 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 10:35:49.337843   18525 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726482949.326826151
	
	I0916 10:35:49.337854   18525 fix.go:216] guest clock: 1726482949.326826151
	I0916 10:35:49.337863   18525 fix.go:229] Guest: 2024-09-16 10:35:49.326826151 +0000 UTC Remote: 2024-09-16 10:35:49.227273795 +0000 UTC m=+6.659405209 (delta=99.552356ms)
	I0916 10:35:49.337905   18525 fix.go:200] guest clock delta is within tolerance: 99.552356ms
	I0916 10:35:49.337909   18525 start.go:83] releasing machines lock for "functional-553844", held for 6.649688194s
	I0916 10:35:49.337930   18525 main.go:141] libmachine: (functional-553844) Calling .DriverName
	I0916 10:35:49.338155   18525 main.go:141] libmachine: (functional-553844) Calling .GetIP
	I0916 10:35:49.340737   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:49.341087   18525 main.go:141] libmachine: (functional-553844) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3a:6f", ip: ""} in network mk-functional-553844: {Iface:virbr1 ExpiryTime:2024-09-16 11:33:58 +0000 UTC Type:0 Mac:52:54:00:9d:3a:6f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-553844 Clientid:01:52:54:00:9d:3a:6f}
	I0916 10:35:49.341111   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined IP address 192.168.39.230 and MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:49.341237   18525 main.go:141] libmachine: (functional-553844) Calling .DriverName
	I0916 10:35:49.341760   18525 main.go:141] libmachine: (functional-553844) Calling .DriverName
	I0916 10:35:49.341890   18525 main.go:141] libmachine: (functional-553844) Calling .DriverName
	I0916 10:35:49.341938   18525 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:35:49.341973   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHHostname
	I0916 10:35:49.342020   18525 ssh_runner.go:195] Run: cat /version.json
	I0916 10:35:49.342027   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHHostname
	I0916 10:35:49.344444   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:49.344803   18525 main.go:141] libmachine: (functional-553844) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3a:6f", ip: ""} in network mk-functional-553844: {Iface:virbr1 ExpiryTime:2024-09-16 11:33:58 +0000 UTC Type:0 Mac:52:54:00:9d:3a:6f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-553844 Clientid:01:52:54:00:9d:3a:6f}
	I0916 10:35:49.344824   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined IP address 192.168.39.230 and MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:49.344842   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:49.344991   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHPort
	I0916 10:35:49.345141   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
	I0916 10:35:49.345260   18525 main.go:141] libmachine: (functional-553844) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3a:6f", ip: ""} in network mk-functional-553844: {Iface:virbr1 ExpiryTime:2024-09-16 11:33:58 +0000 UTC Type:0 Mac:52:54:00:9d:3a:6f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-553844 Clientid:01:52:54:00:9d:3a:6f}
	I0916 10:35:49.345273   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined IP address 192.168.39.230 and MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:49.345292   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHUsername
	I0916 10:35:49.345448   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHPort
	I0916 10:35:49.345461   18525 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/functional-553844/id_rsa Username:docker}
	I0916 10:35:49.345608   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
	I0916 10:35:49.345747   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHUsername
	I0916 10:35:49.345877   18525 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/functional-553844/id_rsa Username:docker}
	I0916 10:35:49.443002   18525 ssh_runner.go:195] Run: systemctl --version
	I0916 10:35:49.449614   18525 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 10:35:49.596269   18525 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0916 10:35:49.602475   18525 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 10:35:49.602526   18525 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:35:49.611756   18525 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0916 10:35:49.611766   18525 start.go:495] detecting cgroup driver to use...
	I0916 10:35:49.611824   18525 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 10:35:49.628855   18525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 10:35:49.642697   18525 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:35:49.642752   18525 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:35:49.656384   18525 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:35:49.669903   18525 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:35:49.802721   18525 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:35:49.941918   18525 docker.go:233] disabling docker service ...
	I0916 10:35:49.941969   18525 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:35:49.958790   18525 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:35:49.973275   18525 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:35:50.101548   18525 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:35:50.229058   18525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:35:50.243779   18525 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:35:50.264191   18525 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 10:35:50.264234   18525 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:35:50.274752   18525 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 10:35:50.274787   18525 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:35:50.285273   18525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:35:50.295681   18525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:35:50.306207   18525 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:35:50.316754   18525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:35:50.326994   18525 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:35:50.338261   18525 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:35:50.348587   18525 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:35:50.358102   18525 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:35:50.367334   18525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:35:50.494296   18525 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 10:35:57.749446   18525 ssh_runner.go:235] Completed: sudo systemctl restart crio: (7.255125663s)
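	The sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with the pause image, the cgroupfs cgroup manager, conmon_cgroup = "pod" and the ip_unprivileged_port_start sysctl; a sketch of verifying that after the restart:

	  minikube -p functional-553844 ssh -- sudo cat /etc/crio/crio.conf.d/02-crio.conf
	  minikube -p functional-553844 ssh -- sudo systemctl is-active crio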
	I0916 10:35:57.749465   18525 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 10:35:57.749513   18525 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 10:35:57.754558   18525 start.go:563] Will wait 60s for crictl version
	I0916 10:35:57.754608   18525 ssh_runner.go:195] Run: which crictl
	I0916 10:35:57.758591   18525 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:35:57.797435   18525 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0916 10:35:57.797514   18525 ssh_runner.go:195] Run: crio --version
	I0916 10:35:57.826212   18525 ssh_runner.go:195] Run: crio --version
	I0916 10:35:57.857475   18525 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0916 10:35:57.858682   18525 main.go:141] libmachine: (functional-553844) Calling .GetIP
	I0916 10:35:57.861189   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:57.861453   18525 main.go:141] libmachine: (functional-553844) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3a:6f", ip: ""} in network mk-functional-553844: {Iface:virbr1 ExpiryTime:2024-09-16 11:33:58 +0000 UTC Type:0 Mac:52:54:00:9d:3a:6f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-553844 Clientid:01:52:54:00:9d:3a:6f}
	I0916 10:35:57.861474   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined IP address 192.168.39.230 and MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:57.861620   18525 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0916 10:35:57.867598   18525 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0916 10:35:57.868983   18525 kubeadm.go:883] updating cluster {Name:functional-553844 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-553844 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 10:35:57.869107   18525 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:35:57.869177   18525 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:35:57.914399   18525 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 10:35:57.914408   18525 crio.go:433] Images already preloaded, skipping extraction
	I0916 10:35:57.914450   18525 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:35:57.949560   18525 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 10:35:57.949570   18525 cache_images.go:84] Images are preloaded, skipping loading
	I0916 10:35:57.949575   18525 kubeadm.go:934] updating node { 192.168.39.230 8441 v1.31.1 crio true true} ...
	I0916 10:35:57.949666   18525 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-553844 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.230
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:functional-553844 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
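	The ExecStart line above is delivered as a systemd drop-in for the kubelet (the 10-kubeadm.conf copied further down); a sketch of inspecting the effective unit on the node:

	  minikube -p functional-553844 ssh -- sudo systemctl cat kubelet
	  minikube -p functional-553844 ssh -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf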
	I0916 10:35:57.949729   18525 ssh_runner.go:195] Run: crio config
	I0916 10:35:57.995982   18525 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0916 10:35:57.996009   18525 cni.go:84] Creating CNI manager for ""
	I0916 10:35:57.996022   18525 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0916 10:35:57.996030   18525 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 10:35:57.996057   18525 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.230 APIServerPort:8441 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-553844 NodeName:functional-553844 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.230"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.230 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 10:35:57.996174   18525 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.230
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-553844"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.230
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.230"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 10:35:57.996229   18525 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:35:58.006808   18525 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:35:58.006895   18525 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 10:35:58.016928   18525 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0916 10:35:58.034395   18525 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:35:58.051467   18525 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2011 bytes)
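	The kubeadm config shown earlier is staged on the node as /var/tmp/minikube/kubeadm.yaml.new; a sketch of reviewing it and, optionally, running kubeadm's own validator against it (this run itself does not invoke the validator):

	  minikube -p functional-553844 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml.new
	  minikube -p functional-553844 ssh -- sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new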
	I0916 10:35:58.068995   18525 ssh_runner.go:195] Run: grep 192.168.39.230	control-plane.minikube.internal$ /etc/hosts
	I0916 10:35:58.072954   18525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:35:58.201848   18525 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:35:58.217243   18525 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844 for IP: 192.168.39.230
	I0916 10:35:58.217256   18525 certs.go:194] generating shared ca certs ...
	I0916 10:35:58.217271   18525 certs.go:226] acquiring lock for ca certs: {Name:mkc5eb18b90c7d501da61b378999fd65b785fd5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:35:58.217440   18525 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key
	I0916 10:35:58.217483   18525 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key
	I0916 10:35:58.217490   18525 certs.go:256] generating profile certs ...
	I0916 10:35:58.217589   18525 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/client.key
	I0916 10:35:58.217652   18525 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/apiserver.key.7b9f73b3
	I0916 10:35:58.217696   18525 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/proxy-client.key
	I0916 10:35:58.217831   18525 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem (1338 bytes)
	W0916 10:35:58.217868   18525 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203_empty.pem, impossibly tiny 0 bytes
	I0916 10:35:58.217877   18525 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:35:58.217903   18525 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem (1082 bytes)
	I0916 10:35:58.217930   18525 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:35:58.217957   18525 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem (1679 bytes)
	I0916 10:35:58.218005   18525 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem (1708 bytes)
	I0916 10:35:58.218755   18525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:35:58.243657   18525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:35:58.267838   18525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:35:58.291555   18525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:35:58.315510   18525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0916 10:35:58.339081   18525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 10:35:58.362662   18525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:35:58.386270   18525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 10:35:58.410573   18525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:35:58.434749   18525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem --> /usr/share/ca-certificates/11203.pem (1338 bytes)
	I0916 10:35:58.459501   18525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem --> /usr/share/ca-certificates/112032.pem (1708 bytes)
	I0916 10:35:58.482757   18525 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 10:35:58.499985   18525 ssh_runner.go:195] Run: openssl version
	I0916 10:35:58.505649   18525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:35:58.516720   18525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:35:58.521314   18525 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:35:58.521366   18525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:35:58.527133   18525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 10:35:58.537092   18525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11203.pem && ln -fs /usr/share/ca-certificates/11203.pem /etc/ssl/certs/11203.pem"
	I0916 10:35:58.548863   18525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11203.pem
	I0916 10:35:58.553739   18525 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11203.pem
	I0916 10:35:58.553789   18525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11203.pem
	I0916 10:35:58.559937   18525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11203.pem /etc/ssl/certs/51391683.0"
	I0916 10:35:58.570077   18525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112032.pem && ln -fs /usr/share/ca-certificates/112032.pem /etc/ssl/certs/112032.pem"
	I0916 10:35:58.581619   18525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112032.pem
	I0916 10:35:58.586334   18525 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112032.pem
	I0916 10:35:58.586385   18525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112032.pem
	I0916 10:35:58.592259   18525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112032.pem /etc/ssl/certs/3ec20f2e.0"
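	The <hash>.0 symlinks created above (b5213941.0, 51391683.0, 3ec20f2e.0) are named after the OpenSSL subject-name hash, which is how TLS stacks look a CA up in /etc/ssl/certs. A rough Go sketch of the same install step, shelling out to openssl for the hash (installCACert is an illustrative helper; paths and error handling are simplified):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // installCACert links certPath into /etc/ssl/certs under both its friendly
    // name and its OpenSSL subject-hash name (<hash>.0), the lookup name most
    // TLS libraries use when scanning the certs directory.
    func installCACert(certPath, friendlyName string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", certPath, err)
    	}
    	hash := strings.TrimSpace(string(out))

    	friendly := filepath.Join("/etc/ssl/certs", friendlyName)
    	hashed := filepath.Join("/etc/ssl/certs", hash+".0")
    	for _, link := range []string{friendly, hashed} {
    		os.Remove(link) // equivalent of ln -fs: replace any existing link
    		if err := os.Symlink(certPath, link); err != nil {
    			return err
    		}
    	}
    	return nil
    }

    func main() {
    	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem", "minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }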
	I0916 10:35:58.602417   18525 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:35:58.607018   18525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0916 10:35:58.612758   18525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0916 10:35:58.618471   18525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0916 10:35:58.623983   18525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0916 10:35:58.629681   18525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0916 10:35:58.635363   18525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
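	Each of the -checkend 86400 runs above asks openssl whether the certificate will still be valid 24 hours from now (the command exits non-zero if it will not be). The same check expressed in Go, as a small illustrative sketch (expiresWithin is a hypothetical helper, not minikube code):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM certificate at path stops being
    // valid before now+window, matching `openssl x509 -checkend <seconds>`.
    func expiresWithin(path string, window time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400*time.Second)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	fmt.Println("expires within 24h:", soon)
    }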
	I0916 10:35:58.640927   18525 kubeadm.go:392] StartCluster: {Name:functional-553844 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.1 ClusterName:functional-553844 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountStr
ing:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:35:58.641024   18525 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 10:35:58.641097   18525 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 10:35:58.678179   18525 cri.go:89] found id: "c9566037419faa5d15b1a25d437bab469eb973d38022593dd1fcea875b372030"
	I0916 10:35:58.678193   18525 cri.go:89] found id: "7b4648b5566f070359fd2b71b0a6be370ac0bf268f558065e56f6c081c9da147"
	I0916 10:35:58.678197   18525 cri.go:89] found id: "a8a2455326fe0510166f5919a7ba808e002c9e7e6f6bac0073ebcdf2617d2c12"
	I0916 10:35:58.678200   18525 cri.go:89] found id: "8addedc5b3b7251bda1891ec03e7b558acf7e48a679719cc2d0eb9af89051324"
	I0916 10:35:58.678203   18525 cri.go:89] found id: "11c7df787d684e9cc5ca8f6cf633ac9055d7ef72c0a76d54b25fd4a3d62f7b02"
	I0916 10:35:58.678206   18525 cri.go:89] found id: "5ef8ee89662fcae36d3705fd7eef6e5e8e8ed2765a0c899ea2692bc5e55770bb"
	I0916 10:35:58.678209   18525 cri.go:89] found id: "dda8bc32e425e06c63a2ccb84bdd071dc515520e554e6e3a5a8b376e9a65c15a"
	I0916 10:35:58.678212   18525 cri.go:89] found id: "3e06948fb7d78a484090a08d9f88a0d72c4998279675de0ea7e60d51401d789c"
	I0916 10:35:58.678214   18525 cri.go:89] found id: "a3fe318aca7e7a437de0e19d52ce02314908224735a91ac782c8f4ee933b9539"
	I0916 10:35:58.678221   18525 cri.go:89] found id: "29f56fdf2e13c8c028dc80b03b1db0ee8da7289244b3368f9a6e8716db213d1e"
	I0916 10:35:58.678223   18525 cri.go:89] found id: "0718da2983026c5d88757cc81f8d9db82763d8eaec64c8089b706fcdc15d2866"
	I0916 10:35:58.678224   18525 cri.go:89] found id: "e2067f72690f60f2753b6b33ceaae6c8647431c4c0a6055c0943dffa6a611621"
	I0916 10:35:58.678226   18525 cri.go:89] found id: "665e5ce6ab7a5a79da4635071094125712865b62fcd581daf2db7fff5bafce8a"
	I0916 10:35:58.678228   18525 cri.go:89] found id: "84edb04959b2d1ddce7d4879036071b77ee39eb9f0e5e90edc6fbb843efd2515"
	I0916 10:35:58.678230   18525 cri.go:89] found id: ""
	I0916 10:35:58.678271   18525 ssh_runner.go:195] Run: sudo runc list -f json
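	The "found id:" entries above come straight from the CRI: minikube asks crictl for every container labelled with the kube-system pod namespace, including exited ones (-a), and gets back bare IDs (--quiet). A minimal Go sketch of issuing that same query (kubeSystemContainerIDs is an illustrative helper; it assumes crictl is on PATH and configured for the node's CRI-O socket):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // kubeSystemContainerIDs returns the IDs of all containers (running or
    // exited) whose pod lives in the kube-system namespace, mirroring the
    // `crictl ps -a --quiet --label ...` invocation in the log above.
    func kubeSystemContainerIDs() ([]string, error) {
    	out, err := exec.Command("crictl", "ps", "-a", "--quiet",
    		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	ids, err := kubeSystemContainerIDs()
    	if err != nil {
    		fmt.Println("crictl failed:", err)
    		return
    	}
    	for _, id := range ids {
    		fmt.Println("found id:", id)
    	}
    }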
	
	
	==> CRI-O <==
	Sep 16 10:36:29 functional-553844 crio[4747]: time="2024-09-16 10:36:29.842367082Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3bb51943-92d5-4bda-9488-df212ce86f17 name=/runtime.v1.RuntimeService/Version
	Sep 16 10:36:29 functional-553844 crio[4747]: time="2024-09-16 10:36:29.842864559Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:40e128caccd10da3c2b236dc916c1ea036d584c77141e639345656d122c4edf5,Verbose:false,}" file="otel-collector/interceptors.go:62" id=8f0c24e2-8827-41ac-a8b3-428f06b81307 name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 16 10:36:29 functional-553844 crio[4747]: time="2024-09-16 10:36:29.843011765Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:40e128caccd10da3c2b236dc916c1ea036d584c77141e639345656d122c4edf5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1726482960873930273,StartedAt:1726482960950666822,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-apiserver:v1.31.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a02ea4105f59739cf4b87fcb1443f22,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/9a02ea4105f59739cf4b87fcb1443f22/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/9a02ea4105f59739cf4b87fcb1443f22/containers/kube-apiserver/389a8f82,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{Contai
nerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-apiserver-functional-553844_9a02ea4105f59739cf4b87fcb1443f22/kube-apiserver/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:256,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=8f0c24e2-8827-41ac-a8b3-428f06b81307 name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 16 10:36:29 functional-553844 crio[4747]: time="2024-09-16 10:36:29.873335614Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8de73e7c-6d8e-42f5-a64f-781ec67fcc03 name=/runtime.v1.RuntimeService/Version
	Sep 16 10:36:29 functional-553844 crio[4747]: time="2024-09-16 10:36:29.873428367Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8de73e7c-6d8e-42f5-a64f-781ec67fcc03 name=/runtime.v1.RuntimeService/Version
	Sep 16 10:36:29 functional-553844 crio[4747]: time="2024-09-16 10:36:29.877593702Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b48cb819-4594-4530-b72c-85bf0cb1b2a5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:36:29 functional-553844 crio[4747]: time="2024-09-16 10:36:29.878272207Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482989878246198,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157170,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b48cb819-4594-4530-b72c-85bf0cb1b2a5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:36:29 functional-553844 crio[4747]: time="2024-09-16 10:36:29.879230198Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6fe1cd8c-d533-40f6-879a-afdc419739e5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:36:29 functional-553844 crio[4747]: time="2024-09-16 10:36:29.879285318Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6fe1cd8c-d533-40f6-879a-afdc419739e5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:36:29 functional-553844 crio[4747]: time="2024-09-16 10:36:29.879555283Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:11b04a7db79238496dd7e10a87ed4228200fc6950497535ac76465928dced22a,PodSandboxId:42c99506917bdb547de481b6b2b3da32439a0a4396f956f8bb3b1dcb33e0d1e9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726482964931520126,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ntnpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0dcfd13-b1bc-45ef-9800-c98d2063bd43,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6cef4575c2c36cab82d9941cc254a3c3977cd977294bd11ce7e392d1ed1ba87,PodSandboxId:b5b2cd43518619d818dfcf93e0e524dd8b0e680615d574fe46c92af79a3e1a44,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726482964645653251,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8d5zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 7709f753-5ea7-43c8-9573-107c8507e92b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:410bd23d1eb3a430c5d377563c819cc6316d348e66c37cda9c98f030584522d1,PodSandboxId:66c3c1fc355f337c0301a8a61ebd1b9a264940d27de9e5ab8ce3e6d9ff23f695,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726482964572647163,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41
228d6-b7ff-4315-b9c5-05b5cc4d0acd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:161c7c3a6dbc9f43b2edc204c828f8b7b5673da629beae63db044d512333e2ff,PodSandboxId:1cf845fd98fb9cec148aebce7960a866cf0aa9de0bbccc2479d3f8356e0402a1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726482960901308832,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b392e106920b290edb060cbb3942770e,},Annotat
ions:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:281ad6489fa866a8728b85bb6026eaba03f414f1541e54e8bb20b881d74c5986,PodSandboxId:30d387489b797f7f610fee5a80ba390e392d599bf0fcae9adff2ab82e5282aff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726482960907328332,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e9406d783b81f1f83bb9b03dd50757a,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9f67c6f5bac2b4d892ff7cea8997f9b3637e462d67156858bf6b4bc4872dbb8,PodSandboxId:7ff3b4db4c3a1ec5a815fbd8fb33ce885ada6ab25ca592e100af16decad6e364,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726482960864170254,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ba1ce2146f556353256cee766fb22aa,},Annota
tions:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40e128caccd10da3c2b236dc916c1ea036d584c77141e639345656d122c4edf5,PodSandboxId:4f30e9290df9fed7dc156600cc22a14550577fc179ba762933d7ddb98b54f18d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726482960795838086,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a02ea4105f59739cf4b87fcb1443f22,},Annotations:map[str
ing]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9566037419faa5d15b1a25d437bab469eb973d38022593dd1fcea875b372030,PodSandboxId:224c8313d2a4b81f7902acab9a95c4674bb885601e7d6c432c194d4704f68448,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726482907910102968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e9406d783b81f1f83bb9b03dd50757a,},Annotations:map[string]string{io.
kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b4648b5566f070359fd2b71b0a6be370ac0bf268f558065e56f6c081c9da147,PodSandboxId:786e02c9f268ffa4631ec0765f6e369ee8084dbb9f6f0ceb206328ad1483a95b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726482907861534544,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ba1ce2146f556353256cee766fb22aa,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8a2455326fe0510166f5919a7ba808e002c9e7e6f6bac0073ebcdf2617d2c12,PodSandboxId:f630bd7b31a9986a64781d12465cbfb94677fffd47ee676d73cb45631f7bb0b1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726482907858407120,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cf351cdb4e05fb19a16881fc8f9a8bc,},Annotations:map[string]string{io.k
ubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ef8ee89662fcae36d3705fd7eef6e5e8e8ed2765a0c899ea2692bc5e55770bb,PodSandboxId:795a8e1b509b3194f41662edcf7b963b352e10b9d3a0b2b7bc09bb8c879e6c3c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726482895162476253,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8d5zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7709f753-5ea7-43c8-9573-107c8507e92b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59
,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c7df787d684e9cc5ca8f6cf633ac9055d7ef72c0a76d54b25fd4a3d62f7b02,PodSandboxId:f234b24619f341b5993ad8541868405403fc798ed93fed40f3108616a4659944,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726482895179848533,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41228d6-b7ff-4315-b9c5-05b5cc4d0acd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8addedc5b3b7251bda1891ec03e7b558acf7e48a679719cc2d0eb9af89051324,PodSandboxId:5de6db3341a3523f45534d3d25f38a36a50d8a1f094c79ff3c7afffbde2686bb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726482895774247566,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ntnpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0dcfd13-b1bc-45ef-9800-c98d2063bd43,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"
dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dda8bc32e425e06c63a2ccb84bdd071dc515520e554e6e3a5a8b376e9a65c15a,PodSandboxId:b212b903ed97c57b27e7215769435327df006237ec57bc4233e178bfb5d746f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726482895106900933,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553844,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: b392e106920b290edb060cbb3942770e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6fe1cd8c-d533-40f6-879a-afdc419739e5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:36:29 functional-553844 crio[4747]: time="2024-09-16 10:36:29.899963504Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bbc5bde9-42d9-44df-8cee-a0261630d314 name=/runtime.v1.RuntimeService/Version
	Sep 16 10:36:29 functional-553844 crio[4747]: time="2024-09-16 10:36:29.900252183Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bbc5bde9-42d9-44df-8cee-a0261630d314 name=/runtime.v1.RuntimeService/Version
	Sep 16 10:36:29 functional-553844 crio[4747]: time="2024-09-16 10:36:29.901152302Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:a8a2455326fe0510166f5919a7ba808e002c9e7e6f6bac0073ebcdf2617d2c12,Verbose:false,}" file="otel-collector/interceptors.go:62" id=0986f674-ec20-4df4-b8f2-6ab5beb0b921 name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 16 10:36:29 functional-553844 crio[4747]: time="2024-09-16 10:36:29.901287681Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:a8a2455326fe0510166f5919a7ba808e002c9e7e6f6bac0073ebcdf2617d2c12,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},State:CONTAINER_EXITED,CreatedAt:1726482907929868195,StartedAt:1726482908015190193,FinishedAt:1726482943683101195,ExitCode:1,Image:&ImageSpec{Image:registry.k8s.io/kube-apiserver:v1.31.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Reason:Error,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cf351cdb4e05fb19a16881fc8f9a8bc,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMess
agePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{},LogPath:/var/log/pods/kube-system_kube-apiserver-functional-553844_0cf351cdb4e05fb19a16881fc8f9a8bc/kube-apiserver/2.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:256,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=0986f674-ec20-4df4-b8f2-6ab5beb0b921 name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 16 10:36:29 functional-553844 crio[4747]: time="2024-09-16 10:36:29.929537533Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=12855889-24d5-4554-b5ca-9742ea92c933 name=/runtime.v1.RuntimeService/Version
	Sep 16 10:36:29 functional-553844 crio[4747]: time="2024-09-16 10:36:29.929634662Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=12855889-24d5-4554-b5ca-9742ea92c933 name=/runtime.v1.RuntimeService/Version
	Sep 16 10:36:29 functional-553844 crio[4747]: time="2024-09-16 10:36:29.931261953Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c6e957e4-0f65-4ef7-97e7-6695b90fba5c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:36:29 functional-553844 crio[4747]: time="2024-09-16 10:36:29.931741522Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482989931711701,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157170,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c6e957e4-0f65-4ef7-97e7-6695b90fba5c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:36:29 functional-553844 crio[4747]: time="2024-09-16 10:36:29.932290578Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=10c97af1-97d9-479d-b064-514b5df8215e name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:36:29 functional-553844 crio[4747]: time="2024-09-16 10:36:29.932343806Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=10c97af1-97d9-479d-b064-514b5df8215e name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:36:29 functional-553844 crio[4747]: time="2024-09-16 10:36:29.932626523Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:11b04a7db79238496dd7e10a87ed4228200fc6950497535ac76465928dced22a,PodSandboxId:42c99506917bdb547de481b6b2b3da32439a0a4396f956f8bb3b1dcb33e0d1e9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726482964931520126,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ntnpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0dcfd13-b1bc-45ef-9800-c98d2063bd43,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6cef4575c2c36cab82d9941cc254a3c3977cd977294bd11ce7e392d1ed1ba87,PodSandboxId:b5b2cd43518619d818dfcf93e0e524dd8b0e680615d574fe46c92af79a3e1a44,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726482964645653251,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8d5zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 7709f753-5ea7-43c8-9573-107c8507e92b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:410bd23d1eb3a430c5d377563c819cc6316d348e66c37cda9c98f030584522d1,PodSandboxId:66c3c1fc355f337c0301a8a61ebd1b9a264940d27de9e5ab8ce3e6d9ff23f695,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726482964572647163,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41
228d6-b7ff-4315-b9c5-05b5cc4d0acd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:161c7c3a6dbc9f43b2edc204c828f8b7b5673da629beae63db044d512333e2ff,PodSandboxId:1cf845fd98fb9cec148aebce7960a866cf0aa9de0bbccc2479d3f8356e0402a1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726482960901308832,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b392e106920b290edb060cbb3942770e,},Annotat
ions:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:281ad6489fa866a8728b85bb6026eaba03f414f1541e54e8bb20b881d74c5986,PodSandboxId:30d387489b797f7f610fee5a80ba390e392d599bf0fcae9adff2ab82e5282aff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726482960907328332,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e9406d783b81f1f83bb9b03dd50757a,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9f67c6f5bac2b4d892ff7cea8997f9b3637e462d67156858bf6b4bc4872dbb8,PodSandboxId:7ff3b4db4c3a1ec5a815fbd8fb33ce885ada6ab25ca592e100af16decad6e364,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726482960864170254,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ba1ce2146f556353256cee766fb22aa,},Annota
tions:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40e128caccd10da3c2b236dc916c1ea036d584c77141e639345656d122c4edf5,PodSandboxId:4f30e9290df9fed7dc156600cc22a14550577fc179ba762933d7ddb98b54f18d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726482960795838086,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a02ea4105f59739cf4b87fcb1443f22,},Annotations:map[str
ing]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9566037419faa5d15b1a25d437bab469eb973d38022593dd1fcea875b372030,PodSandboxId:224c8313d2a4b81f7902acab9a95c4674bb885601e7d6c432c194d4704f68448,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726482907910102968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e9406d783b81f1f83bb9b03dd50757a,},Annotations:map[string]string{io.
kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b4648b5566f070359fd2b71b0a6be370ac0bf268f558065e56f6c081c9da147,PodSandboxId:786e02c9f268ffa4631ec0765f6e369ee8084dbb9f6f0ceb206328ad1483a95b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726482907861534544,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ba1ce2146f556353256cee766fb22aa,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8a2455326fe0510166f5919a7ba808e002c9e7e6f6bac0073ebcdf2617d2c12,PodSandboxId:f630bd7b31a9986a64781d12465cbfb94677fffd47ee676d73cb45631f7bb0b1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726482907858407120,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cf351cdb4e05fb19a16881fc8f9a8bc,},Annotations:map[string]string{io.k
ubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ef8ee89662fcae36d3705fd7eef6e5e8e8ed2765a0c899ea2692bc5e55770bb,PodSandboxId:795a8e1b509b3194f41662edcf7b963b352e10b9d3a0b2b7bc09bb8c879e6c3c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726482895162476253,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8d5zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7709f753-5ea7-43c8-9573-107c8507e92b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59
,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c7df787d684e9cc5ca8f6cf633ac9055d7ef72c0a76d54b25fd4a3d62f7b02,PodSandboxId:f234b24619f341b5993ad8541868405403fc798ed93fed40f3108616a4659944,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726482895179848533,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41228d6-b7ff-4315-b9c5-05b5cc4d0acd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8addedc5b3b7251bda1891ec03e7b558acf7e48a679719cc2d0eb9af89051324,PodSandboxId:5de6db3341a3523f45534d3d25f38a36a50d8a1f094c79ff3c7afffbde2686bb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726482895774247566,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ntnpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0dcfd13-b1bc-45ef-9800-c98d2063bd43,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"
dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dda8bc32e425e06c63a2ccb84bdd071dc515520e554e6e3a5a8b376e9a65c15a,PodSandboxId:b212b903ed97c57b27e7215769435327df006237ec57bc4233e178bfb5d746f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726482895106900933,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553844,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: b392e106920b290edb060cbb3942770e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=10c97af1-97d9-479d-b064-514b5df8215e name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:36:29 functional-553844 crio[4747]: time="2024-09-16 10:36:29.955962626Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ba9f13dc-c195-4ff4-9d79-6feba3c0a8ed name=/runtime.v1.RuntimeService/Version
	Sep 16 10:36:29 functional-553844 crio[4747]: time="2024-09-16 10:36:29.956074179Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ba9f13dc-c195-4ff4-9d79-6feba3c0a8ed name=/runtime.v1.RuntimeService/Version
	Sep 16 10:36:29 functional-553844 crio[4747]: time="2024-09-16 10:36:29.956674378Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:7b4648b5566f070359fd2b71b0a6be370ac0bf268f558065e56f6c081c9da147,Verbose:false,}" file="otel-collector/interceptors.go:62" id=a223ce2f-6b6e-4c4e-9a4c-a794ecbc8ad5 name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 16 10:36:29 functional-553844 crio[4747]: time="2024-09-16 10:36:29.956794559Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:7b4648b5566f070359fd2b71b0a6be370ac0bf268f558065e56f6c081c9da147,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},State:CONTAINER_EXITED,CreatedAt:1726482907936178670,StartedAt:1726482908028126333,FinishedAt:1726482943671100837,ExitCode:2,Image:&ImageSpec{Image:registry.k8s.io/kube-controller-manager:v1.31.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Reason:Error,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ba1ce2146f556353256cee766fb22aa,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{},LogPath:/var/log/pods/kube-system_kube-controller-manager-functional-553844_0ba1ce2146f556353256cee766fb22aa/kube-controller-manager/2.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:204,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=a223ce2f-6b6e-4c4e-9a4c-a794ecbc8ad5 name=/runtime.v1.RuntimeService/ContainerStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	11b04a7db7923       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   25 seconds ago       Running             coredns                   2                   42c99506917bd       coredns-7c65d6cfc9-ntnpc
	f6cef4575c2c3       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   25 seconds ago       Running             kube-proxy                2                   b5b2cd4351861       kube-proxy-8d5zp
	410bd23d1eb3a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   25 seconds ago       Running             storage-provisioner       2                   66c3c1fc355f3       storage-provisioner
	281ad6489fa86       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   29 seconds ago       Running             kube-scheduler            3                   30d387489b797       kube-scheduler-functional-553844
	161c7c3a6dbc9       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   29 seconds ago       Running             etcd                      2                   1cf845fd98fb9       etcd-functional-553844
	c9f67c6f5bac2       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   29 seconds ago       Running             kube-controller-manager   3                   7ff3b4db4c3a1       kube-controller-manager-functional-553844
	40e128caccd10       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   29 seconds ago       Running             kube-apiserver            0                   4f30e9290df9f       kube-apiserver-functional-553844
	c9566037419fa       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   About a minute ago   Exited              kube-scheduler            2                   224c8313d2a4b       kube-scheduler-functional-553844
	7b4648b5566f0       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   About a minute ago   Exited              kube-controller-manager   2                   786e02c9f268f       kube-controller-manager-functional-553844
	a8a2455326fe0       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   About a minute ago   Exited              kube-apiserver            2                   f630bd7b31a99       kube-apiserver-functional-553844
	8addedc5b3b72       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   About a minute ago   Exited              coredns                   1                   5de6db3341a35       coredns-7c65d6cfc9-ntnpc
	11c7df787d684       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   About a minute ago   Exited              storage-provisioner       1                   f234b24619f34       storage-provisioner
	5ef8ee89662fc       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   About a minute ago   Exited              kube-proxy                1                   795a8e1b509b3       kube-proxy-8d5zp
	dda8bc32e425e       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   About a minute ago   Exited              etcd                      1                   b212b903ed97c       etcd-functional-553844
	
	
	==> coredns [11b04a7db79238496dd7e10a87ed4228200fc6950497535ac76465928dced22a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:34318 - 64894 "HINFO IN 1843759644485451532.7278217676100105798. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.028340041s
	
	
	==> coredns [8addedc5b3b7251bda1891ec03e7b558acf7e48a679719cc2d0eb9af89051324] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:49303 - 36766 "HINFO IN 7792431763943854020.5109512536554140100. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.028767023s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-553844
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-553844
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=functional-553844
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_34_24_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:34:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-553844
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:36:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:36:03 +0000   Mon, 16 Sep 2024 10:34:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:36:03 +0000   Mon, 16 Sep 2024 10:34:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:36:03 +0000   Mon, 16 Sep 2024 10:34:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:36:03 +0000   Mon, 16 Sep 2024 10:34:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.230
	  Hostname:    functional-553844
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 e02954b5bf404845959584edf15b4c70
	  System UUID:                e02954b5-bf40-4845-9595-84edf15b4c70
	  Boot ID:                    f32c4525-4b20-48f0-8997-63a4d85e0a22
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-ntnpc                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m2s
	  kube-system                 etcd-functional-553844                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m7s
	  kube-system                 kube-apiserver-functional-553844             250m (12%)    0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-controller-manager-functional-553844    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 kube-proxy-8d5zp                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-scheduler-functional-553844             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 2m                 kube-proxy       
	  Normal  Starting                 25s                kube-proxy       
	  Normal  Starting                 91s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m7s               kubelet          Node functional-553844 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  2m7s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    2m7s               kubelet          Node functional-553844 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m7s               kubelet          Node functional-553844 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m7s               kubelet          Starting kubelet.
	  Normal  NodeReady                2m6s               kubelet          Node functional-553844 status is now: NodeReady
	  Normal  RegisteredNode           2m3s               node-controller  Node functional-553844 event: Registered Node functional-553844 in Controller
	  Normal  NodeHasSufficientMemory  83s (x8 over 83s)  kubelet          Node functional-553844 status is now: NodeHasSufficientMemory
	  Normal  Starting                 83s                kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    83s (x8 over 83s)  kubelet          Node functional-553844 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     83s (x7 over 83s)  kubelet          Node functional-553844 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  83s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           76s                node-controller  Node functional-553844 event: Registered Node functional-553844 in Controller
	  Normal  Starting                 30s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s (x8 over 30s)  kubelet          Node functional-553844 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s (x8 over 30s)  kubelet          Node functional-553844 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s (x7 over 30s)  kubelet          Node functional-553844 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  30s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           23s                node-controller  Node functional-553844 event: Registered Node functional-553844 in Controller
	
	
	==> dmesg <==
	[  +0.603762] kauditd_printk_skb: 46 callbacks suppressed
	[ +16.520372] systemd-fstab-generator[2167]: Ignoring "noauto" option for root device
	[  +0.078621] kauditd_printk_skb: 60 callbacks suppressed
	[  +0.049083] systemd-fstab-generator[2179]: Ignoring "noauto" option for root device
	[  +0.190042] systemd-fstab-generator[2193]: Ignoring "noauto" option for root device
	[  +0.140022] systemd-fstab-generator[2205]: Ignoring "noauto" option for root device
	[  +0.285394] systemd-fstab-generator[2233]: Ignoring "noauto" option for root device
	[  +8.132216] systemd-fstab-generator[2349]: Ignoring "noauto" option for root device
	[  +0.075744] kauditd_printk_skb: 100 callbacks suppressed
	[Sep16 10:35] systemd-fstab-generator[3196]: Ignoring "noauto" option for root device
	[  +0.082290] kauditd_printk_skb: 96 callbacks suppressed
	[  +9.215887] kauditd_printk_skb: 32 callbacks suppressed
	[ +11.912179] systemd-fstab-generator[3473]: Ignoring "noauto" option for root device
	[ +21.316095] systemd-fstab-generator[4674]: Ignoring "noauto" option for root device
	[  +0.074178] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.066789] systemd-fstab-generator[4686]: Ignoring "noauto" option for root device
	[  +0.159163] systemd-fstab-generator[4700]: Ignoring "noauto" option for root device
	[  +0.128627] systemd-fstab-generator[4712]: Ignoring "noauto" option for root device
	[  +0.261837] systemd-fstab-generator[4740]: Ignoring "noauto" option for root device
	[  +7.709349] systemd-fstab-generator[4854]: Ignoring "noauto" option for root device
	[  +0.074913] kauditd_printk_skb: 100 callbacks suppressed
	[  +1.702685] systemd-fstab-generator[4977]: Ignoring "noauto" option for root device
	[Sep16 10:36] kauditd_printk_skb: 85 callbacks suppressed
	[  +5.334379] kauditd_printk_skb: 39 callbacks suppressed
	[  +9.139453] systemd-fstab-generator[5796]: Ignoring "noauto" option for root device
	
	
	==> etcd [161c7c3a6dbc9f43b2edc204c828f8b7b5673da629beae63db044d512333e2ff] <==
	{"level":"info","ts":"2024-09-16T10:36:01.399752Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:36:01.404521Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T10:36:01.412218Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"f4acae94ef986412","initial-advertise-peer-urls":["https://192.168.39.230:2380"],"listen-peer-urls":["https://192.168.39.230:2380"],"advertise-client-urls":["https://192.168.39.230:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.230:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T10:36:01.412273Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T10:36:01.412554Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.230:2380"}
	{"level":"info","ts":"2024-09-16T10:36:01.412584Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.230:2380"}
	{"level":"info","ts":"2024-09-16T10:36:01.415172Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T10:36:01.415237Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T10:36:01.415247Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T10:36:02.339007Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-16T10:36:02.339191Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-16T10:36:02.339252Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 received MsgPreVoteResp from f4acae94ef986412 at term 3"}
	{"level":"info","ts":"2024-09-16T10:36:02.339289Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 became candidate at term 4"}
	{"level":"info","ts":"2024-09-16T10:36:02.339313Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 received MsgVoteResp from f4acae94ef986412 at term 4"}
	{"level":"info","ts":"2024-09-16T10:36:02.339348Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 became leader at term 4"}
	{"level":"info","ts":"2024-09-16T10:36:02.339373Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f4acae94ef986412 elected leader f4acae94ef986412 at term 4"}
	{"level":"info","ts":"2024-09-16T10:36:02.345885Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"f4acae94ef986412","local-member-attributes":"{Name:functional-553844 ClientURLs:[https://192.168.39.230:2379]}","request-path":"/0/members/f4acae94ef986412/attributes","cluster-id":"b0aea99135fe63d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:36:02.345893Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:36:02.346138Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:36:02.346171Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:36:02.345925Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:36:02.347252Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:36:02.347252Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:36:02.348114Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T10:36:02.348659Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.230:2379"}
	
	
	==> etcd [dda8bc32e425e06c63a2ccb84bdd071dc515520e554e6e3a5a8b376e9a65c15a] <==
	{"level":"info","ts":"2024-09-16T10:34:56.955132Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-16T10:34:56.955163Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 received MsgPreVoteResp from f4acae94ef986412 at term 2"}
	{"level":"info","ts":"2024-09-16T10:34:56.955177Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 became candidate at term 3"}
	{"level":"info","ts":"2024-09-16T10:34:56.955183Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 received MsgVoteResp from f4acae94ef986412 at term 3"}
	{"level":"info","ts":"2024-09-16T10:34:56.955191Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 became leader at term 3"}
	{"level":"info","ts":"2024-09-16T10:34:56.955203Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f4acae94ef986412 elected leader f4acae94ef986412 at term 3"}
	{"level":"info","ts":"2024-09-16T10:34:56.959113Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"f4acae94ef986412","local-member-attributes":"{Name:functional-553844 ClientURLs:[https://192.168.39.230:2379]}","request-path":"/0/members/f4acae94ef986412/attributes","cluster-id":"b0aea99135fe63d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:34:56.959223Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:34:56.959352Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:34:56.959702Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:34:56.959718Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:34:56.960394Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:34:56.960508Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:34:56.961360Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T10:34:56.961615Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.230:2379"}
	{"level":"info","ts":"2024-09-16T10:35:43.615417Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-16T10:35:43.615457Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-553844","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.230:2380"],"advertise-client-urls":["https://192.168.39.230:2379"]}
	{"level":"warn","ts":"2024-09-16T10:35:43.615668Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:35:43.615755Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:35:43.715379Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.230:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:35:43.715441Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.230:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-16T10:35:43.716847Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"f4acae94ef986412","current-leader-member-id":"f4acae94ef986412"}
	{"level":"info","ts":"2024-09-16T10:35:43.720365Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.230:2380"}
	{"level":"info","ts":"2024-09-16T10:35:43.720475Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.230:2380"}
	{"level":"info","ts":"2024-09-16T10:35:43.720485Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-553844","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.230:2380"],"advertise-client-urls":["https://192.168.39.230:2379"]}
	
	
	==> kernel <==
	 10:36:30 up 2 min,  0 users,  load average: 0.71, 0.27, 0.10
	Linux functional-553844 5.10.207 #1 SMP Sun Sep 15 20:39:46 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [40e128caccd10da3c2b236dc916c1ea036d584c77141e639345656d122c4edf5] <==
	I0916 10:36:03.700643       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0916 10:36:03.700962       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0916 10:36:03.702154       1 aggregator.go:171] initial CRD sync complete...
	I0916 10:36:03.702186       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 10:36:03.702192       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 10:36:03.702197       1 cache.go:39] Caches are synced for autoregister controller
	I0916 10:36:03.704489       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0916 10:36:03.704920       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0916 10:36:03.704998       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0916 10:36:03.705227       1 shared_informer.go:320] Caches are synced for configmaps
	I0916 10:36:03.705335       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0916 10:36:03.705520       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0916 10:36:03.709308       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0916 10:36:03.709342       1 policy_source.go:224] refreshing policies
	I0916 10:36:03.714744       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 10:36:03.724995       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0916 10:36:03.733976       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0916 10:36:04.601449       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0916 10:36:05.413610       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 10:36:05.430933       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 10:36:05.470801       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 10:36:05.494981       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 10:36:05.501594       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 10:36:07.306638       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 10:36:07.353251       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [a8a2455326fe0510166f5919a7ba808e002c9e7e6f6bac0073ebcdf2617d2c12] <==
	I0916 10:35:10.821388       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0916 10:35:10.821418       1 policy_source.go:224] refreshing policies
	I0916 10:35:10.848027       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0916 10:35:10.848431       1 aggregator.go:171] initial CRD sync complete...
	I0916 10:35:10.848456       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 10:35:10.848514       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 10:35:10.848521       1 cache.go:39] Caches are synced for autoregister controller
	I0916 10:35:10.891021       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0916 10:35:10.891238       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0916 10:35:10.893720       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0916 10:35:10.894833       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0916 10:35:10.894861       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0916 10:35:10.895008       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0916 10:35:10.912774       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0916 10:35:10.913152       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 10:35:10.920344       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 10:35:11.693112       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0916 10:35:11.908543       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.230]
	I0916 10:35:11.914737       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 10:35:12.098488       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 10:35:12.108702       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 10:35:12.144954       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 10:35:12.176210       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 10:35:12.183000       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 10:35:43.644862       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	
	
	==> kube-controller-manager [7b4648b5566f070359fd2b71b0a6be370ac0bf268f558065e56f6c081c9da147] <==
	I0916 10:35:14.120843       1 shared_informer.go:320] Caches are synced for HPA
	I0916 10:35:14.121152       1 shared_informer.go:320] Caches are synced for TTL
	I0916 10:35:14.122526       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0916 10:35:14.122616       1 shared_informer.go:320] Caches are synced for ephemeral
	I0916 10:35:14.122690       1 shared_informer.go:320] Caches are synced for taint
	I0916 10:35:14.122803       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0916 10:35:14.123280       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-553844"
	I0916 10:35:14.124941       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0916 10:35:14.144150       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0916 10:35:14.146147       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0916 10:35:14.148698       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0916 10:35:14.153801       1 shared_informer.go:320] Caches are synced for PV protection
	I0916 10:35:14.209749       1 shared_informer.go:320] Caches are synced for persistent volume
	I0916 10:35:14.242927       1 shared_informer.go:320] Caches are synced for attach detach
	I0916 10:35:14.298281       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:35:14.321144       1 shared_informer.go:320] Caches are synced for stateful set
	I0916 10:35:14.321212       1 shared_informer.go:320] Caches are synced for daemon sets
	I0916 10:35:14.326094       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:35:14.534087       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="385.245988ms"
	I0916 10:35:14.534305       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="82.383µs"
	I0916 10:35:14.753631       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:35:14.816601       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:35:14.816647       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0916 10:35:17.621436       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="88.997µs"
	I0916 10:35:41.634518       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-553844"
	
	
	==> kube-controller-manager [c9f67c6f5bac2b4d892ff7cea8997f9b3637e462d67156858bf6b4bc4872dbb8] <==
	I0916 10:36:07.006747       1 shared_informer.go:320] Caches are synced for deployment
	I0916 10:36:07.009845       1 shared_informer.go:320] Caches are synced for node
	I0916 10:36:07.009955       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I0916 10:36:07.010006       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0916 10:36:07.010065       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0916 10:36:07.010073       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0916 10:36:07.010176       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-553844"
	I0916 10:36:07.017945       1 shared_informer.go:320] Caches are synced for namespace
	I0916 10:36:07.018019       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0916 10:36:07.021511       1 shared_informer.go:320] Caches are synced for taint
	I0916 10:36:07.021586       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0916 10:36:07.021664       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-553844"
	I0916 10:36:07.021710       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0916 10:36:07.120592       1 shared_informer.go:320] Caches are synced for cronjob
	I0916 10:36:07.158564       1 shared_informer.go:320] Caches are synced for stateful set
	I0916 10:36:07.199273       1 shared_informer.go:320] Caches are synced for disruption
	I0916 10:36:07.211433       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:36:07.256200       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:36:07.260949       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="260.629259ms"
	I0916 10:36:07.261107       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="93.998µs"
	I0916 10:36:07.627278       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:36:07.687001       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:36:07.687093       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0916 10:36:09.540766       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="16.500721ms"
	I0916 10:36:09.541443       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="53.335µs"
	
	
	==> kube-proxy [5ef8ee89662fcae36d3705fd7eef6e5e8e8ed2765a0c899ea2692bc5e55770bb] <==
	W0916 10:34:58.431668       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:34:58.431713       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:34:58.431778       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:34:58.431838       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:34:59.284989       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:34:59.285188       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:34:59.332364       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-553844&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:34:59.332464       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-553844&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:34:59.470296       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:34:59.470425       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:35:01.798494       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-553844&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:35:01.798626       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-553844&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:35:01.949792       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:35:01.949869       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:35:02.221487       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:35:02.221565       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:35:06.652928       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:35:06.652990       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:35:07.272641       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-553844&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:35:07.272703       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-553844&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:35:07.363931       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:35:07.363993       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	I0916 10:35:14.930499       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:35:15.331242       1 shared_informer.go:320] Caches are synced for node config
	I0916 10:35:16.430835       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [f6cef4575c2c36cab82d9941cc254a3c3977cd977294bd11ce7e392d1ed1ba87] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0916 10:36:05.087142       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0916 10:36:05.094687       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.230"]
	E0916 10:36:05.094768       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:36:05.128908       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0916 10:36:05.128955       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0916 10:36:05.128978       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:36:05.131583       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:36:05.131810       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:36:05.131834       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:36:05.133708       1 config.go:199] "Starting service config controller"
	I0916 10:36:05.133764       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:36:05.133809       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:36:05.133827       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:36:05.134323       1 config.go:328] "Starting node config controller"
	I0916 10:36:05.134353       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:36:05.234169       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:36:05.234184       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:36:05.234413       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [281ad6489fa866a8728b85bb6026eaba03f414f1541e54e8bb20b881d74c5986] <==
	I0916 10:36:01.918697       1 serving.go:386] Generated self-signed cert in-memory
	W0916 10:36:03.635711       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0916 10:36:03.637927       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0916 10:36:03.638183       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 10:36:03.638223       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 10:36:03.699405       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0916 10:36:03.699443       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:36:03.708723       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0916 10:36:03.708883       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0916 10:36:03.708916       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 10:36:03.725362       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0916 10:36:03.809763       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [c9566037419faa5d15b1a25d437bab469eb973d38022593dd1fcea875b372030] <==
	I0916 10:35:09.773229       1 serving.go:386] Generated self-signed cert in-memory
	W0916 10:35:10.768440       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0916 10:35:10.768857       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0916 10:35:10.768917       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 10:35:10.768943       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 10:35:10.817479       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0916 10:35:10.817581       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:35:10.824338       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0916 10:35:10.824417       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 10:35:10.825100       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0916 10:35:10.825460       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0916 10:35:10.925324       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 10:35:43.621150       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0916 10:35:43.621340       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0916 10:35:43.621677       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0916 10:35:43.622018       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 16 10:36:00 functional-553844 kubelet[4984]: W0916 10:36:00.910601    4984 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-553844&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	Sep 16 10:36:00 functional-553844 kubelet[4984]: E0916 10:36:00.910663    4984 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-553844&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	Sep 16 10:36:01 functional-553844 kubelet[4984]: I0916 10:36:01.687815    4984 kubelet_node_status.go:72] "Attempting to register node" node="functional-553844"
	Sep 16 10:36:03 functional-553844 kubelet[4984]: I0916 10:36:03.749683    4984 kubelet_node_status.go:111] "Node was previously registered" node="functional-553844"
	Sep 16 10:36:03 functional-553844 kubelet[4984]: I0916 10:36:03.750196    4984 kubelet_node_status.go:75] "Successfully registered node" node="functional-553844"
	Sep 16 10:36:03 functional-553844 kubelet[4984]: E0916 10:36:03.750257    4984 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"functional-553844\": node \"functional-553844\" not found"
	Sep 16 10:36:03 functional-553844 kubelet[4984]: I0916 10:36:03.752874    4984 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 16 10:36:03 functional-553844 kubelet[4984]: I0916 10:36:03.753933    4984 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 16 10:36:03 functional-553844 kubelet[4984]: E0916 10:36:03.767512    4984 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"functional-553844\" not found"
	Sep 16 10:36:04 functional-553844 kubelet[4984]: I0916 10:36:04.085150    4984 apiserver.go:52] "Watching apiserver"
	Sep 16 10:36:04 functional-553844 kubelet[4984]: I0916 10:36:04.091164    4984 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-apiserver-functional-553844" podUID="7f3b5ce9-dbc7-45d3-8a46-1d51af0f5cac"
	Sep 16 10:36:04 functional-553844 kubelet[4984]: I0916 10:36:04.105623    4984 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 16 10:36:04 functional-553844 kubelet[4984]: I0916 10:36:04.124250    4984 kubelet.go:1900] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-functional-553844"
	Sep 16 10:36:04 functional-553844 kubelet[4984]: I0916 10:36:04.151243    4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7709f753-5ea7-43c8-9573-107c8507e92b-lib-modules\") pod \"kube-proxy-8d5zp\" (UID: \"7709f753-5ea7-43c8-9573-107c8507e92b\") " pod="kube-system/kube-proxy-8d5zp"
	Sep 16 10:36:04 functional-553844 kubelet[4984]: I0916 10:36:04.151300    4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f41228d6-b7ff-4315-b9c5-05b5cc4d0acd-tmp\") pod \"storage-provisioner\" (UID: \"f41228d6-b7ff-4315-b9c5-05b5cc4d0acd\") " pod="kube-system/storage-provisioner"
	Sep 16 10:36:04 functional-553844 kubelet[4984]: I0916 10:36:04.151318    4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7709f753-5ea7-43c8-9573-107c8507e92b-xtables-lock\") pod \"kube-proxy-8d5zp\" (UID: \"7709f753-5ea7-43c8-9573-107c8507e92b\") " pod="kube-system/kube-proxy-8d5zp"
	Sep 16 10:36:04 functional-553844 kubelet[4984]: I0916 10:36:04.189195    4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0cf351cdb4e05fb19a16881fc8f9a8bc" path="/var/lib/kubelet/pods/0cf351cdb4e05fb19a16881fc8f9a8bc/volumes"
	Sep 16 10:36:04 functional-553844 kubelet[4984]: I0916 10:36:04.191552    4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-functional-553844" podStartSLOduration=0.19153653 podStartE2EDuration="191.53653ms" podCreationTimestamp="2024-09-16 10:36:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 10:36:04.191347015 +0000 UTC m=+4.208440213" watchObservedRunningTime="2024-09-16 10:36:04.19153653 +0000 UTC m=+4.208629709"
	Sep 16 10:36:09 functional-553844 kubelet[4984]: I0916 10:36:09.508237    4984 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 16 10:36:10 functional-553844 kubelet[4984]: E0916 10:36:10.177303    4984 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482970176966980,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157170,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:36:10 functional-553844 kubelet[4984]: E0916 10:36:10.177327    4984 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482970176966980,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157170,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:36:20 functional-553844 kubelet[4984]: E0916 10:36:20.178991    4984 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482980178689452,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157170,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:36:20 functional-553844 kubelet[4984]: E0916 10:36:20.179091    4984 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482980178689452,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157170,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:36:30 functional-553844 kubelet[4984]: E0916 10:36:30.181981    4984 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482990181444413,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157170,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:36:30 functional-553844 kubelet[4984]: E0916 10:36:30.182008    4984 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482990181444413,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157170,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [11c7df787d684e9cc5ca8f6cf633ac9055d7ef72c0a76d54b25fd4a3d62f7b02] <==
	I0916 10:34:56.077531       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 10:34:58.308783       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 10:34:58.325776       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	E0916 10:34:59.385726       1 leaderelection.go:361] Failed to update lock: Put "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0916 10:35:02.837859       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0916 10:35:07.096688       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	I0916 10:35:10.935925       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 10:35:10.936824       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-553844_6476f869-e006-4732-b59f-a625eeed2789!
	I0916 10:35:10.936273       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e694f165-5b09-46c9-81ea-a2730989eaff", APIVersion:"v1", ResourceVersion:"401", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-553844_6476f869-e006-4732-b59f-a625eeed2789 became leader
	I0916 10:35:11.037327       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-553844_6476f869-e006-4732-b59f-a625eeed2789!
	
	
	==> storage-provisioner [410bd23d1eb3a430c5d377563c819cc6316d348e66c37cda9c98f030584522d1] <==
	I0916 10:36:04.804572       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 10:36:04.881510       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 10:36:04.902536       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 10:36:22.325954       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 10:36:22.326349       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-553844_3b4a5fb5-0feb-4787-a4fd-6f23adb25700!
	I0916 10:36:22.327877       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e694f165-5b09-46c9-81ea-a2730989eaff", APIVersion:"v1", ResourceVersion:"583", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-553844_3b4a5fb5-0feb-4787-a4fd-6f23adb25700 became leader
	I0916 10:36:22.428646       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-553844_3b4a5fb5-0feb-4787-a4fd-6f23adb25700!
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0916 10:36:29.297833   19815 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19651-3851/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-553844 -n functional-553844
helpers_test.go:261: (dbg) Run:  kubectl --context functional-553844 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context functional-553844 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (495.236µs)
helpers_test.go:263: kubectl --context functional-553844 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (2.46s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (103.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [f41228d6-b7ff-4315-b9c5-05b5cc4d0acd] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.005048429s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-553844 get storageclass -o=json
functional_test_pvc_test.go:49: (dbg) Non-zero exit: kubectl --context functional-553844 get storageclass -o=json: fork/exec /usr/local/bin/kubectl: exec format error (505.285µs)
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-553844 get storageclass -o=json
functional_test_pvc_test.go:49: (dbg) Non-zero exit: kubectl --context functional-553844 get storageclass -o=json: fork/exec /usr/local/bin/kubectl: exec format error (500.197µs)
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-553844 get storageclass -o=json
functional_test_pvc_test.go:49: (dbg) Non-zero exit: kubectl --context functional-553844 get storageclass -o=json: fork/exec /usr/local/bin/kubectl: exec format error (513.795µs)
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-553844 get storageclass -o=json
functional_test_pvc_test.go:49: (dbg) Non-zero exit: kubectl --context functional-553844 get storageclass -o=json: fork/exec /usr/local/bin/kubectl: exec format error (566.156µs)
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-553844 get storageclass -o=json
functional_test_pvc_test.go:49: (dbg) Non-zero exit: kubectl --context functional-553844 get storageclass -o=json: fork/exec /usr/local/bin/kubectl: exec format error (481.382µs)
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-553844 get storageclass -o=json
functional_test_pvc_test.go:49: (dbg) Non-zero exit: kubectl --context functional-553844 get storageclass -o=json: fork/exec /usr/local/bin/kubectl: exec format error (557.399µs)
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-553844 get storageclass -o=json
functional_test_pvc_test.go:49: (dbg) Non-zero exit: kubectl --context functional-553844 get storageclass -o=json: fork/exec /usr/local/bin/kubectl: exec format error (486.295µs)
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-553844 get storageclass -o=json
functional_test_pvc_test.go:49: (dbg) Non-zero exit: kubectl --context functional-553844 get storageclass -o=json: fork/exec /usr/local/bin/kubectl: exec format error (522.911µs)
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-553844 get storageclass -o=json
functional_test_pvc_test.go:49: (dbg) Non-zero exit: kubectl --context functional-553844 get storageclass -o=json: fork/exec /usr/local/bin/kubectl: exec format error (573.518µs)
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-553844 get storageclass -o=json
functional_test_pvc_test.go:49: (dbg) Non-zero exit: kubectl --context functional-553844 get storageclass -o=json: fork/exec /usr/local/bin/kubectl: exec format error (543.729µs)
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-553844 get storageclass -o=json
functional_test_pvc_test.go:49: (dbg) Non-zero exit: kubectl --context functional-553844 get storageclass -o=json: fork/exec /usr/local/bin/kubectl: exec format error (480.937µs)
E0916 10:37:52.681868   11203 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/client.crt: no such file or directory" logger="UnhandledError"
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-553844 get storageclass -o=json
functional_test_pvc_test.go:49: (dbg) Non-zero exit: kubectl --context functional-553844 get storageclass -o=json: fork/exec /usr/local/bin/kubectl: exec format error (499.733µs)
functional_test_pvc_test.go:65: failed to check for storage class: fork/exec /usr/local/bin/kubectl: exec format error
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-553844 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:69: (dbg) Non-zero exit: kubectl --context functional-553844 apply -f testdata/storage-provisioner/pvc.yaml: fork/exec /usr/local/bin/kubectl: exec format error (377.171µs)
functional_test_pvc_test.go:71: kubectl apply pvc.yaml failed: args "kubectl --context functional-553844 apply -f testdata/storage-provisioner/pvc.yaml": fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-553844 -n functional-553844
helpers_test.go:244: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-553844 logs -n 25: (1.454441534s)
helpers_test.go:252: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	|----------------|-------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                  Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|-------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| image          | functional-553844 image load --daemon                                   | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|                | kicbase/echo-server:functional-553844                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| ssh            | functional-553844 ssh sudo cat                                          | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|                | /usr/share/ca-certificates/112032.pem                                   |                   |         |         |                     |                     |
	| start          | -p functional-553844                                                    | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC |                     |
	|                | --dry-run --memory                                                      |                   |         |         |                     |                     |
	|                | 250MB --alsologtostderr                                                 |                   |         |         |                     |                     |
	|                | --driver=kvm2                                                           |                   |         |         |                     |                     |
	|                | --container-runtime=crio                                                |                   |         |         |                     |                     |
	| ssh            | functional-553844 ssh sudo cat                                          | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|                | /etc/test/nested/copy/11203/hosts                                       |                   |         |         |                     |                     |
	| ssh            | functional-553844 ssh sudo cat                                          | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|                | /etc/ssl/certs/3ec20f2e.0                                               |                   |         |         |                     |                     |
	| dashboard      | --url --port 36195                                                      | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC |                     |
	|                | -p functional-553844                                                    |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                  |                   |         |         |                     |                     |
	| image          | functional-553844 image ls                                              | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	| image          | functional-553844 image load --daemon                                   | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|                | kicbase/echo-server:functional-553844                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| image          | functional-553844 image ls                                              | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	| image          | functional-553844 image save kicbase/echo-server:functional-553844      | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|                | /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| image          | functional-553844 image rm                                              | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|                | kicbase/echo-server:functional-553844                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| image          | functional-553844 image ls                                              | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	| image          | functional-553844 image load                                            | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|                | /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| update-context | functional-553844                                                       | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|                | update-context                                                          |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                  |                   |         |         |                     |                     |
	| update-context | functional-553844                                                       | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|                | update-context                                                          |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                  |                   |         |         |                     |                     |
	| update-context | functional-553844                                                       | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|                | update-context                                                          |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                  |                   |         |         |                     |                     |
	| image          | functional-553844 image ls                                              | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	| image          | functional-553844 image save --daemon                                   | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|                | kicbase/echo-server:functional-553844                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| image          | functional-553844                                                       | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|                | image ls --format short                                                 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| image          | functional-553844                                                       | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|                | image ls --format json                                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| image          | functional-553844                                                       | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|                | image ls --format yaml                                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| ssh            | functional-553844 ssh pgrep                                             | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC |                     |
	|                | buildkitd                                                               |                   |         |         |                     |                     |
	| image          | functional-553844                                                       | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|                | image ls --format table                                                 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| image          | functional-553844 image build -t                                        | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|                | localhost/my-image:functional-553844                                    |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-553844 image ls                                              | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|----------------|-------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:36:34
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:36:34.139611   20738 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:36:34.139721   20738 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:36:34.139734   20738 out.go:358] Setting ErrFile to fd 2...
	I0916 10:36:34.139739   20738 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:36:34.140025   20738 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3851/.minikube/bin
	I0916 10:36:34.140533   20738 out.go:352] Setting JSON to false
	I0916 10:36:34.141585   20738 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1144,"bootTime":1726481850,"procs":331,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:36:34.141651   20738 start.go:139] virtualization: kvm guest
	I0916 10:36:34.143781   20738 out.go:177] * [functional-553844] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:36:34.145184   20738 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:36:34.145216   20738 notify.go:220] Checking for updates...
	I0916 10:36:34.147692   20738 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:36:34.148817   20738 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3851/kubeconfig
	I0916 10:36:34.150295   20738 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 10:36:34.151528   20738 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:36:34.152665   20738 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:36:34.154384   20738 config.go:182] Loaded profile config "functional-553844": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:36:34.155020   20738 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:36:34.155078   20738 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:36:34.171342   20738 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33011
	I0916 10:36:34.172227   20738 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:36:34.172727   20738 main.go:141] libmachine: Using API Version  1
	I0916 10:36:34.172781   20738 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:36:34.173244   20738 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:36:34.173423   20738 main.go:141] libmachine: (functional-553844) Calling .DriverName
	I0916 10:36:34.173652   20738 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:36:34.173932   20738 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:36:34.173966   20738 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:36:34.190306   20738 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42651
	I0916 10:36:34.190589   20738 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:36:34.191111   20738 main.go:141] libmachine: Using API Version  1
	I0916 10:36:34.191136   20738 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:36:34.191610   20738 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:36:34.191807   20738 main.go:141] libmachine: (functional-553844) Calling .DriverName
	I0916 10:36:34.226275   20738 out.go:177] * Using the kvm2 driver based on the existing profile
	I0916 10:36:34.227529   20738 start.go:297] selected driver: kvm2
	I0916 10:36:34.227545   20738 start.go:901] validating driver "kvm2" against &{Name:functional-553844 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:functional-553844 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:36:34.227685   20738 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:36:34.229775   20738 out.go:201] 
	W0916 10:36:34.231185   20738 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: The requested memory allocation of 250MiB is less than the usable minimum of 1800MB
	I0916 10:36:34.232371   20738 out.go:201] 
	
	
	==> CRI-O <==
	Sep 16 10:38:09 functional-553844 crio[4747]: time="2024-09-16 10:38:09.997408679Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483089997381378,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:201078,},InodesUsed:&UInt64Value{Value:103,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=748dd1ab-5c3e-47ee-9b23-5134df811bdd name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:38:09 functional-553844 crio[4747]: time="2024-09-16 10:38:09.997918435Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=53c9237e-e8c7-4794-ab96-bacaafbc5b35 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:38:09 functional-553844 crio[4747]: time="2024-09-16 10:38:09.997993805Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=53c9237e-e8c7-4794-ab96-bacaafbc5b35 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:38:09 functional-553844 crio[4747]: time="2024-09-16 10:38:09.998329999Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b13598b3e2d2933deb31266d0baa8253508898420512d31afa1daf24a537bca6,PodSandboxId:49d7ea6ee0cf8141509931ed2f97bdadb13475cfb5cd484b2c670e89b5105b6d,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1726483002958369483,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-695b96c756-ss2vr,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 9734fcc0-f3e2-4044-b5f0-5cbe19fdf261,},Annotations:map[string]string{io.kubernetes.container.has
h: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11b04a7db79238496dd7e10a87ed4228200fc6950497535ac76465928dced22a,PodSandboxId:42c99506917bdb547de481b6b2b3da32439a0a4396f956f8bb3b1dcb33e0d1e9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726482964931520126,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ntnpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0dcfd13-b1bc-45ef-9800-c98d2063bd43,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6cef4575c2c36cab82d9941cc254a3c3977cd977294bd11ce7e392d1ed1ba87,PodSandboxId:b5b2cd43518619d818dfcf93e0e524dd8b0e680615d574fe46c92af79a3e1a44,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726482964645653251,Labels:map[string]string{i
o.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8d5zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7709f753-5ea7-43c8-9573-107c8507e92b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:410bd23d1eb3a430c5d377563c819cc6316d348e66c37cda9c98f030584522d1,PodSandboxId:66c3c1fc355f337c0301a8a61ebd1b9a264940d27de9e5ab8ce3e6d9ff23f695,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726482964572647163,Labels:map[string]string{io.kubernetes.container.n
ame: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41228d6-b7ff-4315-b9c5-05b5cc4d0acd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:161c7c3a6dbc9f43b2edc204c828f8b7b5673da629beae63db044d512333e2ff,PodSandboxId:1cf845fd98fb9cec148aebce7960a866cf0aa9de0bbccc2479d3f8356e0402a1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726482960901308832,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod
.name: etcd-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b392e106920b290edb060cbb3942770e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:281ad6489fa866a8728b85bb6026eaba03f414f1541e54e8bb20b881d74c5986,PodSandboxId:30d387489b797f7f610fee5a80ba390e392d599bf0fcae9adff2ab82e5282aff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726482960907328332,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-f
unctional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e9406d783b81f1f83bb9b03dd50757a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9f67c6f5bac2b4d892ff7cea8997f9b3637e462d67156858bf6b4bc4872dbb8,PodSandboxId:7ff3b4db4c3a1ec5a815fbd8fb33ce885ada6ab25ca592e100af16decad6e364,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726482960864170254,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-contro
ller-manager-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ba1ce2146f556353256cee766fb22aa,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40e128caccd10da3c2b236dc916c1ea036d584c77141e639345656d122c4edf5,PodSandboxId:4f30e9290df9fed7dc156600cc22a14550577fc179ba762933d7ddb98b54f18d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726482960795838086,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-
functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a02ea4105f59739cf4b87fcb1443f22,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9566037419faa5d15b1a25d437bab469eb973d38022593dd1fcea875b372030,PodSandboxId:224c8313d2a4b81f7902acab9a95c4674bb885601e7d6c432c194d4704f68448,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726482907910102968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-553
844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e9406d783b81f1f83bb9b03dd50757a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b4648b5566f070359fd2b71b0a6be370ac0bf268f558065e56f6c081c9da147,PodSandboxId:786e02c9f268ffa4631ec0765f6e369ee8084dbb9f6f0ceb206328ad1483a95b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726482907861534544,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-f
unctional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ba1ce2146f556353256cee766fb22aa,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ef8ee89662fcae36d3705fd7eef6e5e8e8ed2765a0c899ea2692bc5e55770bb,PodSandboxId:795a8e1b509b3194f41662edcf7b963b352e10b9d3a0b2b7bc09bb8c879e6c3c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726482895162476253,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8d5zp,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 7709f753-5ea7-43c8-9573-107c8507e92b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c7df787d684e9cc5ca8f6cf633ac9055d7ef72c0a76d54b25fd4a3d62f7b02,PodSandboxId:f234b24619f341b5993ad8541868405403fc798ed93fed40f3108616a4659944,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726482895179848533,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: f41228d6-b7ff-4315-b9c5-05b5cc4d0acd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8addedc5b3b7251bda1891ec03e7b558acf7e48a679719cc2d0eb9af89051324,PodSandboxId:5de6db3341a3523f45534d3d25f38a36a50d8a1f094c79ff3c7afffbde2686bb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726482895774247566,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ntnpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0
dcfd13-b1bc-45ef-9800-c98d2063bd43,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dda8bc32e425e06c63a2ccb84bdd071dc515520e554e6e3a5a8b376e9a65c15a,PodSandboxId:b212b903ed97c57b27e7215769435327df006237ec57bc4233e178bfb5d746f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726482
895106900933,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b392e106920b290edb060cbb3942770e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=53c9237e-e8c7-4794-ab96-bacaafbc5b35 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:38:10 functional-553844 crio[4747]: time="2024-09-16 10:38:10.042672140Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=89196834-9c6c-4ce7-8e46-fcb2437987d1 name=/runtime.v1.RuntimeService/Version
	Sep 16 10:38:10 functional-553844 crio[4747]: time="2024-09-16 10:38:10.042766320Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=89196834-9c6c-4ce7-8e46-fcb2437987d1 name=/runtime.v1.RuntimeService/Version
	Sep 16 10:38:10 functional-553844 crio[4747]: time="2024-09-16 10:38:10.048366425Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f6f2df04-724b-4da0-b336-e1f9d250ac87 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:38:10 functional-553844 crio[4747]: time="2024-09-16 10:38:10.049008029Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483090048981051,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:201078,},InodesUsed:&UInt64Value{Value:103,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f6f2df04-724b-4da0-b336-e1f9d250ac87 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:38:10 functional-553844 crio[4747]: time="2024-09-16 10:38:10.049846740Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=336530b9-9f1a-4796-9e81-3b83c025b1ba name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:38:10 functional-553844 crio[4747]: time="2024-09-16 10:38:10.049933370Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=336530b9-9f1a-4796-9e81-3b83c025b1ba name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:38:10 functional-553844 crio[4747]: time="2024-09-16 10:38:10.050829265Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b13598b3e2d2933deb31266d0baa8253508898420512d31afa1daf24a537bca6,PodSandboxId:49d7ea6ee0cf8141509931ed2f97bdadb13475cfb5cd484b2c670e89b5105b6d,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1726483002958369483,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-695b96c756-ss2vr,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 9734fcc0-f3e2-4044-b5f0-5cbe19fdf261,},Annotations:map[string]string{io.kubernetes.container.has
h: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11b04a7db79238496dd7e10a87ed4228200fc6950497535ac76465928dced22a,PodSandboxId:42c99506917bdb547de481b6b2b3da32439a0a4396f956f8bb3b1dcb33e0d1e9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726482964931520126,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ntnpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0dcfd13-b1bc-45ef-9800-c98d2063bd43,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6cef4575c2c36cab82d9941cc254a3c3977cd977294bd11ce7e392d1ed1ba87,PodSandboxId:b5b2cd43518619d818dfcf93e0e524dd8b0e680615d574fe46c92af79a3e1a44,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726482964645653251,Labels:map[string]string{i
o.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8d5zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7709f753-5ea7-43c8-9573-107c8507e92b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:410bd23d1eb3a430c5d377563c819cc6316d348e66c37cda9c98f030584522d1,PodSandboxId:66c3c1fc355f337c0301a8a61ebd1b9a264940d27de9e5ab8ce3e6d9ff23f695,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726482964572647163,Labels:map[string]string{io.kubernetes.container.n
ame: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41228d6-b7ff-4315-b9c5-05b5cc4d0acd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:161c7c3a6dbc9f43b2edc204c828f8b7b5673da629beae63db044d512333e2ff,PodSandboxId:1cf845fd98fb9cec148aebce7960a866cf0aa9de0bbccc2479d3f8356e0402a1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726482960901308832,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod
.name: etcd-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b392e106920b290edb060cbb3942770e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:281ad6489fa866a8728b85bb6026eaba03f414f1541e54e8bb20b881d74c5986,PodSandboxId:30d387489b797f7f610fee5a80ba390e392d599bf0fcae9adff2ab82e5282aff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726482960907328332,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-f
unctional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e9406d783b81f1f83bb9b03dd50757a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9f67c6f5bac2b4d892ff7cea8997f9b3637e462d67156858bf6b4bc4872dbb8,PodSandboxId:7ff3b4db4c3a1ec5a815fbd8fb33ce885ada6ab25ca592e100af16decad6e364,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726482960864170254,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-contro
ller-manager-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ba1ce2146f556353256cee766fb22aa,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40e128caccd10da3c2b236dc916c1ea036d584c77141e639345656d122c4edf5,PodSandboxId:4f30e9290df9fed7dc156600cc22a14550577fc179ba762933d7ddb98b54f18d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726482960795838086,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-
functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a02ea4105f59739cf4b87fcb1443f22,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9566037419faa5d15b1a25d437bab469eb973d38022593dd1fcea875b372030,PodSandboxId:224c8313d2a4b81f7902acab9a95c4674bb885601e7d6c432c194d4704f68448,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726482907910102968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-553
844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e9406d783b81f1f83bb9b03dd50757a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b4648b5566f070359fd2b71b0a6be370ac0bf268f558065e56f6c081c9da147,PodSandboxId:786e02c9f268ffa4631ec0765f6e369ee8084dbb9f6f0ceb206328ad1483a95b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726482907861534544,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-f
unctional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ba1ce2146f556353256cee766fb22aa,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ef8ee89662fcae36d3705fd7eef6e5e8e8ed2765a0c899ea2692bc5e55770bb,PodSandboxId:795a8e1b509b3194f41662edcf7b963b352e10b9d3a0b2b7bc09bb8c879e6c3c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726482895162476253,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8d5zp,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 7709f753-5ea7-43c8-9573-107c8507e92b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c7df787d684e9cc5ca8f6cf633ac9055d7ef72c0a76d54b25fd4a3d62f7b02,PodSandboxId:f234b24619f341b5993ad8541868405403fc798ed93fed40f3108616a4659944,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726482895179848533,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: f41228d6-b7ff-4315-b9c5-05b5cc4d0acd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8addedc5b3b7251bda1891ec03e7b558acf7e48a679719cc2d0eb9af89051324,PodSandboxId:5de6db3341a3523f45534d3d25f38a36a50d8a1f094c79ff3c7afffbde2686bb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726482895774247566,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ntnpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0
dcfd13-b1bc-45ef-9800-c98d2063bd43,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dda8bc32e425e06c63a2ccb84bdd071dc515520e554e6e3a5a8b376e9a65c15a,PodSandboxId:b212b903ed97c57b27e7215769435327df006237ec57bc4233e178bfb5d746f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726482
895106900933,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b392e106920b290edb060cbb3942770e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=336530b9-9f1a-4796-9e81-3b83c025b1ba name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:38:10 functional-553844 crio[4747]: time="2024-09-16 10:38:10.086670164Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0002425a-0716-4d74-bb06-43cc41637beb name=/runtime.v1.RuntimeService/Version
	Sep 16 10:38:10 functional-553844 crio[4747]: time="2024-09-16 10:38:10.086745807Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0002425a-0716-4d74-bb06-43cc41637beb name=/runtime.v1.RuntimeService/Version
	Sep 16 10:38:10 functional-553844 crio[4747]: time="2024-09-16 10:38:10.087953960Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=da79c983-ebd9-46f2-9996-e60c080c3451 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:38:10 functional-553844 crio[4747]: time="2024-09-16 10:38:10.088620270Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483090088568256,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:201078,},InodesUsed:&UInt64Value{Value:103,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=da79c983-ebd9-46f2-9996-e60c080c3451 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:38:10 functional-553844 crio[4747]: time="2024-09-16 10:38:10.089179505Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=821f1da2-dedf-4852-99dc-98531103e5c2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:38:10 functional-553844 crio[4747]: time="2024-09-16 10:38:10.089234168Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=821f1da2-dedf-4852-99dc-98531103e5c2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:38:10 functional-553844 crio[4747]: time="2024-09-16 10:38:10.089511724Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b13598b3e2d2933deb31266d0baa8253508898420512d31afa1daf24a537bca6,PodSandboxId:49d7ea6ee0cf8141509931ed2f97bdadb13475cfb5cd484b2c670e89b5105b6d,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1726483002958369483,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-695b96c756-ss2vr,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 9734fcc0-f3e2-4044-b5f0-5cbe19fdf261,},Annotations:map[string]string{io.kubernetes.container.has
h: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11b04a7db79238496dd7e10a87ed4228200fc6950497535ac76465928dced22a,PodSandboxId:42c99506917bdb547de481b6b2b3da32439a0a4396f956f8bb3b1dcb33e0d1e9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726482964931520126,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ntnpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0dcfd13-b1bc-45ef-9800-c98d2063bd43,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6cef4575c2c36cab82d9941cc254a3c3977cd977294bd11ce7e392d1ed1ba87,PodSandboxId:b5b2cd43518619d818dfcf93e0e524dd8b0e680615d574fe46c92af79a3e1a44,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726482964645653251,Labels:map[string]string{i
o.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8d5zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7709f753-5ea7-43c8-9573-107c8507e92b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:410bd23d1eb3a430c5d377563c819cc6316d348e66c37cda9c98f030584522d1,PodSandboxId:66c3c1fc355f337c0301a8a61ebd1b9a264940d27de9e5ab8ce3e6d9ff23f695,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726482964572647163,Labels:map[string]string{io.kubernetes.container.n
ame: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41228d6-b7ff-4315-b9c5-05b5cc4d0acd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:161c7c3a6dbc9f43b2edc204c828f8b7b5673da629beae63db044d512333e2ff,PodSandboxId:1cf845fd98fb9cec148aebce7960a866cf0aa9de0bbccc2479d3f8356e0402a1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726482960901308832,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod
.name: etcd-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b392e106920b290edb060cbb3942770e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:281ad6489fa866a8728b85bb6026eaba03f414f1541e54e8bb20b881d74c5986,PodSandboxId:30d387489b797f7f610fee5a80ba390e392d599bf0fcae9adff2ab82e5282aff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726482960907328332,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-f
unctional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e9406d783b81f1f83bb9b03dd50757a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9f67c6f5bac2b4d892ff7cea8997f9b3637e462d67156858bf6b4bc4872dbb8,PodSandboxId:7ff3b4db4c3a1ec5a815fbd8fb33ce885ada6ab25ca592e100af16decad6e364,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726482960864170254,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-contro
ller-manager-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ba1ce2146f556353256cee766fb22aa,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40e128caccd10da3c2b236dc916c1ea036d584c77141e639345656d122c4edf5,PodSandboxId:4f30e9290df9fed7dc156600cc22a14550577fc179ba762933d7ddb98b54f18d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726482960795838086,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-
functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a02ea4105f59739cf4b87fcb1443f22,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9566037419faa5d15b1a25d437bab469eb973d38022593dd1fcea875b372030,PodSandboxId:224c8313d2a4b81f7902acab9a95c4674bb885601e7d6c432c194d4704f68448,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726482907910102968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-553
844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e9406d783b81f1f83bb9b03dd50757a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b4648b5566f070359fd2b71b0a6be370ac0bf268f558065e56f6c081c9da147,PodSandboxId:786e02c9f268ffa4631ec0765f6e369ee8084dbb9f6f0ceb206328ad1483a95b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726482907861534544,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-f
unctional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ba1ce2146f556353256cee766fb22aa,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ef8ee89662fcae36d3705fd7eef6e5e8e8ed2765a0c899ea2692bc5e55770bb,PodSandboxId:795a8e1b509b3194f41662edcf7b963b352e10b9d3a0b2b7bc09bb8c879e6c3c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726482895162476253,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8d5zp,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 7709f753-5ea7-43c8-9573-107c8507e92b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c7df787d684e9cc5ca8f6cf633ac9055d7ef72c0a76d54b25fd4a3d62f7b02,PodSandboxId:f234b24619f341b5993ad8541868405403fc798ed93fed40f3108616a4659944,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726482895179848533,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: f41228d6-b7ff-4315-b9c5-05b5cc4d0acd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8addedc5b3b7251bda1891ec03e7b558acf7e48a679719cc2d0eb9af89051324,PodSandboxId:5de6db3341a3523f45534d3d25f38a36a50d8a1f094c79ff3c7afffbde2686bb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726482895774247566,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ntnpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0
dcfd13-b1bc-45ef-9800-c98d2063bd43,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dda8bc32e425e06c63a2ccb84bdd071dc515520e554e6e3a5a8b376e9a65c15a,PodSandboxId:b212b903ed97c57b27e7215769435327df006237ec57bc4233e178bfb5d746f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726482
895106900933,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b392e106920b290edb060cbb3942770e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=821f1da2-dedf-4852-99dc-98531103e5c2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:38:10 functional-553844 crio[4747]: time="2024-09-16 10:38:10.126769922Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9684f9e8-f003-476a-88bc-7b5aaacef32d name=/runtime.v1.RuntimeService/Version
	Sep 16 10:38:10 functional-553844 crio[4747]: time="2024-09-16 10:38:10.126865001Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9684f9e8-f003-476a-88bc-7b5aaacef32d name=/runtime.v1.RuntimeService/Version
	Sep 16 10:38:10 functional-553844 crio[4747]: time="2024-09-16 10:38:10.128175188Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e61c5212-a077-4ea6-84a4-81eacdfdab30 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:38:10 functional-553844 crio[4747]: time="2024-09-16 10:38:10.128786367Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483090128764761,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:201078,},InodesUsed:&UInt64Value{Value:103,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e61c5212-a077-4ea6-84a4-81eacdfdab30 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:38:10 functional-553844 crio[4747]: time="2024-09-16 10:38:10.129271377Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f4746af5-46af-4fc0-b746-4d41fd849a28 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:38:10 functional-553844 crio[4747]: time="2024-09-16 10:38:10.129325669Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f4746af5-46af-4fc0-b746-4d41fd849a28 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:38:10 functional-553844 crio[4747]: time="2024-09-16 10:38:10.129649349Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b13598b3e2d2933deb31266d0baa8253508898420512d31afa1daf24a537bca6,PodSandboxId:49d7ea6ee0cf8141509931ed2f97bdadb13475cfb5cd484b2c670e89b5105b6d,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1726483002958369483,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-695b96c756-ss2vr,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 9734fcc0-f3e2-4044-b5f0-5cbe19fdf261,},Annotations:map[string]string{io.kubernetes.container.has
h: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11b04a7db79238496dd7e10a87ed4228200fc6950497535ac76465928dced22a,PodSandboxId:42c99506917bdb547de481b6b2b3da32439a0a4396f956f8bb3b1dcb33e0d1e9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726482964931520126,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ntnpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0dcfd13-b1bc-45ef-9800-c98d2063bd43,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6cef4575c2c36cab82d9941cc254a3c3977cd977294bd11ce7e392d1ed1ba87,PodSandboxId:b5b2cd43518619d818dfcf93e0e524dd8b0e680615d574fe46c92af79a3e1a44,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726482964645653251,Labels:map[string]string{i
o.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8d5zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7709f753-5ea7-43c8-9573-107c8507e92b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:410bd23d1eb3a430c5d377563c819cc6316d348e66c37cda9c98f030584522d1,PodSandboxId:66c3c1fc355f337c0301a8a61ebd1b9a264940d27de9e5ab8ce3e6d9ff23f695,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726482964572647163,Labels:map[string]string{io.kubernetes.container.n
ame: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41228d6-b7ff-4315-b9c5-05b5cc4d0acd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:161c7c3a6dbc9f43b2edc204c828f8b7b5673da629beae63db044d512333e2ff,PodSandboxId:1cf845fd98fb9cec148aebce7960a866cf0aa9de0bbccc2479d3f8356e0402a1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726482960901308832,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod
.name: etcd-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b392e106920b290edb060cbb3942770e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:281ad6489fa866a8728b85bb6026eaba03f414f1541e54e8bb20b881d74c5986,PodSandboxId:30d387489b797f7f610fee5a80ba390e392d599bf0fcae9adff2ab82e5282aff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726482960907328332,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-f
unctional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e9406d783b81f1f83bb9b03dd50757a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9f67c6f5bac2b4d892ff7cea8997f9b3637e462d67156858bf6b4bc4872dbb8,PodSandboxId:7ff3b4db4c3a1ec5a815fbd8fb33ce885ada6ab25ca592e100af16decad6e364,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726482960864170254,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-contro
ller-manager-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ba1ce2146f556353256cee766fb22aa,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40e128caccd10da3c2b236dc916c1ea036d584c77141e639345656d122c4edf5,PodSandboxId:4f30e9290df9fed7dc156600cc22a14550577fc179ba762933d7ddb98b54f18d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726482960795838086,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-
functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a02ea4105f59739cf4b87fcb1443f22,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9566037419faa5d15b1a25d437bab469eb973d38022593dd1fcea875b372030,PodSandboxId:224c8313d2a4b81f7902acab9a95c4674bb885601e7d6c432c194d4704f68448,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726482907910102968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-553
844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e9406d783b81f1f83bb9b03dd50757a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b4648b5566f070359fd2b71b0a6be370ac0bf268f558065e56f6c081c9da147,PodSandboxId:786e02c9f268ffa4631ec0765f6e369ee8084dbb9f6f0ceb206328ad1483a95b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726482907861534544,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-f
unctional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ba1ce2146f556353256cee766fb22aa,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ef8ee89662fcae36d3705fd7eef6e5e8e8ed2765a0c899ea2692bc5e55770bb,PodSandboxId:795a8e1b509b3194f41662edcf7b963b352e10b9d3a0b2b7bc09bb8c879e6c3c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726482895162476253,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8d5zp,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 7709f753-5ea7-43c8-9573-107c8507e92b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c7df787d684e9cc5ca8f6cf633ac9055d7ef72c0a76d54b25fd4a3d62f7b02,PodSandboxId:f234b24619f341b5993ad8541868405403fc798ed93fed40f3108616a4659944,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726482895179848533,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: f41228d6-b7ff-4315-b9c5-05b5cc4d0acd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8addedc5b3b7251bda1891ec03e7b558acf7e48a679719cc2d0eb9af89051324,PodSandboxId:5de6db3341a3523f45534d3d25f38a36a50d8a1f094c79ff3c7afffbde2686bb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726482895774247566,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ntnpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0
dcfd13-b1bc-45ef-9800-c98d2063bd43,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dda8bc32e425e06c63a2ccb84bdd071dc515520e554e6e3a5a8b376e9a65c15a,PodSandboxId:b212b903ed97c57b27e7215769435327df006237ec57bc4233e178bfb5d746f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726482
895106900933,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b392e106920b290edb060cbb3942770e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f4746af5-46af-4fc0-b746-4d41fd849a28 name=/runtime.v1.RuntimeService/ListContainers
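
The repeated Version, ImageFsInfo, and ListContainers requests above are routine read-only CRI queries against CRI-O (the kubelet and the status collection below issue them continuously), so this debug log by itself does not point at a failure. The same endpoints can be queried by hand from the guest; a minimal sketch, assuming crictl on the minikube node is already pointed at unix:///var/run/crio/crio.sock (the socket named in the node annotations further down):

  $ minikube ssh -p functional-553844
  $ sudo crictl version       # same data as the Version RPC
  $ sudo crictl imagefsinfo   # same data as the ImageFsInfo RPC
  $ sudo crictl ps -a         # same data as an unfiltered ListContainers RPC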
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	b13598b3e2d29       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93   About a minute ago   Running             kubernetes-dashboard      0                   49d7ea6ee0cf8       kubernetes-dashboard-695b96c756-ss2vr
	11b04a7db7923       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                           2 minutes ago        Running             coredns                   2                   42c99506917bd       coredns-7c65d6cfc9-ntnpc
	f6cef4575c2c3       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                           2 minutes ago        Running             kube-proxy                2                   b5b2cd4351861       kube-proxy-8d5zp
	410bd23d1eb3a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           2 minutes ago        Running             storage-provisioner       2                   66c3c1fc355f3       storage-provisioner
	281ad6489fa86       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                           2 minutes ago        Running             kube-scheduler            3                   30d387489b797       kube-scheduler-functional-553844
	161c7c3a6dbc9       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                           2 minutes ago        Running             etcd                      2                   1cf845fd98fb9       etcd-functional-553844
	c9f67c6f5bac2       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                           2 minutes ago        Running             kube-controller-manager   3                   7ff3b4db4c3a1       kube-controller-manager-functional-553844
	40e128caccd10       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                           2 minutes ago        Running             kube-apiserver            0                   4f30e9290df9f       kube-apiserver-functional-553844
	c9566037419fa       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                           3 minutes ago        Exited              kube-scheduler            2                   224c8313d2a4b       kube-scheduler-functional-553844
	7b4648b5566f0       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                           3 minutes ago        Exited              kube-controller-manager   2                   786e02c9f268f       kube-controller-manager-functional-553844
	8addedc5b3b72       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                           3 minutes ago        Exited              coredns                   1                   5de6db3341a35       coredns-7c65d6cfc9-ntnpc
	11c7df787d684       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           3 minutes ago        Exited              storage-provisioner       1                   f234b24619f34       storage-provisioner
	5ef8ee89662fc       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                           3 minutes ago        Exited              kube-proxy                1                   795a8e1b509b3       kube-proxy-8d5zp
	dda8bc32e425e       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                           3 minutes ago        Exited              etcd                      1                   b212b903ed97c       etcd-functional-553844
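
The status table reflects a restarted control plane: each component has an Exited earlier attempt next to a Running replacement, and kube-apiserver is attempt 0 of a fresh container. To drill into any single entry, a sketch using IDs taken from the table (crictl resolves ID prefixes):

  $ sudo crictl ps -a --name kube-apiserver   # narrow the listing to one component
  $ sudo crictl logs 40e128caccd10            # stdout/stderr of the running kube-apiserver
  $ sudo crictl inspect 40e128caccd10         # full status, mounts, and exit information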
	
	
	==> coredns [11b04a7db79238496dd7e10a87ed4228200fc6950497535ac76465928dced22a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:34318 - 64894 "HINFO IN 1843759644485451532.7278217676100105798. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.028340041s
	
	
	==> coredns [8addedc5b3b7251bda1891ec03e7b558acf7e48a679719cc2d0eb9af89051324] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:49303 - 36766 "HINFO IN 7792431763943854020.5109512536554140100. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.028767023s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
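
Both CoreDNS excerpts load the same configuration (identical SHA512) and complete their startup probes; the second instance then shuts down on SIGTERM and enters lameduck mode, which is the expected hand-off during the control-plane restart rather than a DNS failure. A quick way to watch the surviving pod and exercise resolution, sketched with kubeadm's default k8s-app=kube-dns label and a throwaway busybox pod (both are assumptions, not part of the test):

  $ kubectl -n kube-system logs -l k8s-app=kube-dns --tail=20
  $ kubectl run dns-probe --rm -it --restart=Never --image=busybox:1.36 -- nslookup kubernetes.default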
	
	
	==> describe nodes <==
	Name:               functional-553844
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-553844
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=functional-553844
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_34_24_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:34:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-553844
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:38:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:37:04 +0000   Mon, 16 Sep 2024 10:34:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:37:04 +0000   Mon, 16 Sep 2024 10:34:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:37:04 +0000   Mon, 16 Sep 2024 10:34:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:37:04 +0000   Mon, 16 Sep 2024 10:34:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.230
	  Hostname:    functional-553844
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 e02954b5bf404845959584edf15b4c70
	  System UUID:                e02954b5-bf40-4845-9595-84edf15b4c70
	  Boot ID:                    f32c4525-4b20-48f0-8997-63a4d85e0a22
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-ntnpc                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m42s
	  kube-system                 etcd-functional-553844                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         3m47s
	  kube-system                 kube-apiserver-functional-553844             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 kube-controller-manager-functional-553844    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m47s
	  kube-system                 kube-proxy-8d5zp                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 kube-scheduler-functional-553844             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m47s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  kubernetes-dashboard        dashboard-metrics-scraper-c5db448b4-l9q92    0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kubernetes-dashboard        kubernetes-dashboard-695b96c756-ss2vr        0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m40s                  kube-proxy       
	  Normal  Starting                 2m5s                   kube-proxy       
	  Normal  Starting                 3m11s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m47s                  kubelet          Node functional-553844 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  3m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    3m47s                  kubelet          Node functional-553844 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m47s                  kubelet          Node functional-553844 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m47s                  kubelet          Starting kubelet.
	  Normal  NodeReady                3m46s                  kubelet          Node functional-553844 status is now: NodeReady
	  Normal  RegisteredNode           3m43s                  node-controller  Node functional-553844 event: Registered Node functional-553844 in Controller
	  Normal  NodeHasSufficientMemory  3m3s (x8 over 3m3s)    kubelet          Node functional-553844 status is now: NodeHasSufficientMemory
	  Normal  Starting                 3m3s                   kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    3m3s (x8 over 3m3s)    kubelet          Node functional-553844 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m3s (x7 over 3m3s)    kubelet          Node functional-553844 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m3s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m56s                  node-controller  Node functional-553844 event: Registered Node functional-553844 in Controller
	  Normal  Starting                 2m10s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m10s (x8 over 2m10s)  kubelet          Node functional-553844 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m10s (x8 over 2m10s)  kubelet          Node functional-553844 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m10s (x7 over 2m10s)  kubelet          Node functional-553844 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m10s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m3s                   node-controller  Node functional-553844 event: Registered Node functional-553844 in Controller
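
Despite three kubelet restarts visible in the events (Starting at 3m47s, 3m3s, and 2m10s), the node appears to have stayed Ready since 10:34:24, and both kubernetes-dashboard pods were scheduled. A compact re-check of just the Ready condition and the minikube labels (the jsonpath below is an illustrative assumption, not taken from the test run):

  $ kubectl get node functional-553844 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'
  $ kubectl get nodes -L minikube.k8s.io/version -L minikube.k8s.io/primary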
	
	
	==> dmesg <==
	[  +0.078621] kauditd_printk_skb: 60 callbacks suppressed
	[  +0.049083] systemd-fstab-generator[2179]: Ignoring "noauto" option for root device
	[  +0.190042] systemd-fstab-generator[2193]: Ignoring "noauto" option for root device
	[  +0.140022] systemd-fstab-generator[2205]: Ignoring "noauto" option for root device
	[  +0.285394] systemd-fstab-generator[2233]: Ignoring "noauto" option for root device
	[  +8.132216] systemd-fstab-generator[2349]: Ignoring "noauto" option for root device
	[  +0.075744] kauditd_printk_skb: 100 callbacks suppressed
	[Sep16 10:35] systemd-fstab-generator[3196]: Ignoring "noauto" option for root device
	[  +0.082290] kauditd_printk_skb: 96 callbacks suppressed
	[  +9.215887] kauditd_printk_skb: 32 callbacks suppressed
	[ +11.912179] systemd-fstab-generator[3473]: Ignoring "noauto" option for root device
	[ +21.316095] systemd-fstab-generator[4674]: Ignoring "noauto" option for root device
	[  +0.074178] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.066789] systemd-fstab-generator[4686]: Ignoring "noauto" option for root device
	[  +0.159163] systemd-fstab-generator[4700]: Ignoring "noauto" option for root device
	[  +0.128627] systemd-fstab-generator[4712]: Ignoring "noauto" option for root device
	[  +0.261837] systemd-fstab-generator[4740]: Ignoring "noauto" option for root device
	[  +7.709349] systemd-fstab-generator[4854]: Ignoring "noauto" option for root device
	[  +0.074913] kauditd_printk_skb: 100 callbacks suppressed
	[  +1.702685] systemd-fstab-generator[4977]: Ignoring "noauto" option for root device
	[Sep16 10:36] kauditd_printk_skb: 85 callbacks suppressed
	[  +5.334379] kauditd_printk_skb: 39 callbacks suppressed
	[  +9.139453] systemd-fstab-generator[5796]: Ignoring "noauto" option for root device
	[ +17.564540] kauditd_printk_skb: 12 callbacks suppressed
	[  +6.750562] kauditd_printk_skb: 21 callbacks suppressed
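
This dmesg excerpt contains only systemd-fstab-generator and kauditd throttling noise from the repeated service restarts; no OOM kills or I/O errors appear in the window shown. Roughly the same view can be captured with:

  $ minikube ssh -p functional-553844 "sudo dmesg | tail -n 50"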
	
	
	==> etcd [161c7c3a6dbc9f43b2edc204c828f8b7b5673da629beae63db044d512333e2ff] <==
	{"level":"info","ts":"2024-09-16T10:36:01.412554Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.230:2380"}
	{"level":"info","ts":"2024-09-16T10:36:01.412584Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.230:2380"}
	{"level":"info","ts":"2024-09-16T10:36:01.415172Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T10:36:01.415237Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T10:36:01.415247Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T10:36:02.339007Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-16T10:36:02.339191Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-16T10:36:02.339252Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 received MsgPreVoteResp from f4acae94ef986412 at term 3"}
	{"level":"info","ts":"2024-09-16T10:36:02.339289Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 became candidate at term 4"}
	{"level":"info","ts":"2024-09-16T10:36:02.339313Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 received MsgVoteResp from f4acae94ef986412 at term 4"}
	{"level":"info","ts":"2024-09-16T10:36:02.339348Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 became leader at term 4"}
	{"level":"info","ts":"2024-09-16T10:36:02.339373Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f4acae94ef986412 elected leader f4acae94ef986412 at term 4"}
	{"level":"info","ts":"2024-09-16T10:36:02.345885Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"f4acae94ef986412","local-member-attributes":"{Name:functional-553844 ClientURLs:[https://192.168.39.230:2379]}","request-path":"/0/members/f4acae94ef986412/attributes","cluster-id":"b0aea99135fe63d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:36:02.345893Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:36:02.346138Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:36:02.346171Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:36:02.345925Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:36:02.347252Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:36:02.347252Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:36:02.348114Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T10:36:02.348659Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.230:2379"}
	{"level":"info","ts":"2024-09-16T10:36:42.775707Z","caller":"traceutil/trace.go:171","msg":"trace[2137394333] transaction","detail":"{read_only:false; response_revision:659; number_of_response:1; }","duration":"313.223773ms","start":"2024-09-16T10:36:42.462263Z","end":"2024-09-16T10:36:42.775486Z","steps":["trace[2137394333] 'process raft request'  (duration: 313.054485ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:36:42.776255Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-16T10:36:42.462247Z","time spent":"313.517789ms","remote":"127.0.0.1:46460","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:658 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-09-16T10:36:48.907183Z","caller":"traceutil/trace.go:171","msg":"trace[977307832] transaction","detail":"{read_only:false; response_revision:675; number_of_response:1; }","duration":"105.249423ms","start":"2024-09-16T10:36:48.801906Z","end":"2024-09-16T10:36:48.907155Z","steps":["trace[977307832] 'process raft request'  (duration: 105.050332ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:37:13.249890Z","caller":"traceutil/trace.go:171","msg":"trace[4883284] transaction","detail":"{read_only:false; response_revision:694; number_of_response:1; }","duration":"220.905832ms","start":"2024-09-16T10:37:13.028957Z","end":"2024-09-16T10:37:13.249863Z","steps":["trace[4883284] 'process raft request'  (duration: 220.780134ms)"],"step_count":1}
	
	
	==> etcd [dda8bc32e425e06c63a2ccb84bdd071dc515520e554e6e3a5a8b376e9a65c15a] <==
	{"level":"info","ts":"2024-09-16T10:34:56.955132Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-16T10:34:56.955163Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 received MsgPreVoteResp from f4acae94ef986412 at term 2"}
	{"level":"info","ts":"2024-09-16T10:34:56.955177Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 became candidate at term 3"}
	{"level":"info","ts":"2024-09-16T10:34:56.955183Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 received MsgVoteResp from f4acae94ef986412 at term 3"}
	{"level":"info","ts":"2024-09-16T10:34:56.955191Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 became leader at term 3"}
	{"level":"info","ts":"2024-09-16T10:34:56.955203Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f4acae94ef986412 elected leader f4acae94ef986412 at term 3"}
	{"level":"info","ts":"2024-09-16T10:34:56.959113Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"f4acae94ef986412","local-member-attributes":"{Name:functional-553844 ClientURLs:[https://192.168.39.230:2379]}","request-path":"/0/members/f4acae94ef986412/attributes","cluster-id":"b0aea99135fe63d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:34:56.959223Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:34:56.959352Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:34:56.959702Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:34:56.959718Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:34:56.960394Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:34:56.960508Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:34:56.961360Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T10:34:56.961615Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.230:2379"}
	{"level":"info","ts":"2024-09-16T10:35:43.615417Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-16T10:35:43.615457Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-553844","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.230:2380"],"advertise-client-urls":["https://192.168.39.230:2379"]}
	{"level":"warn","ts":"2024-09-16T10:35:43.615668Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:35:43.615755Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:35:43.715379Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.230:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:35:43.715441Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.230:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-16T10:35:43.716847Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"f4acae94ef986412","current-leader-member-id":"f4acae94ef986412"}
	{"level":"info","ts":"2024-09-16T10:35:43.720365Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.230:2380"}
	{"level":"info","ts":"2024-09-16T10:35:43.720475Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.230:2380"}
	{"level":"info","ts":"2024-09-16T10:35:43.720485Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-553844","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.230:2380"],"advertise-client-urls":["https://192.168.39.230:2379"]}
	
	
	==> kernel <==
	 10:38:10 up 4 min,  0 users,  load average: 0.47, 0.39, 0.17
	Linux functional-553844 5.10.207 #1 SMP Sun Sep 15 20:39:46 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [40e128caccd10da3c2b236dc916c1ea036d584c77141e639345656d122c4edf5] <==
	I0916 10:36:03.702192       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 10:36:03.702197       1 cache.go:39] Caches are synced for autoregister controller
	I0916 10:36:03.704489       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0916 10:36:03.704920       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0916 10:36:03.704998       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0916 10:36:03.705227       1 shared_informer.go:320] Caches are synced for configmaps
	I0916 10:36:03.705335       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0916 10:36:03.705520       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0916 10:36:03.709308       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0916 10:36:03.709342       1 policy_source.go:224] refreshing policies
	I0916 10:36:03.714744       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 10:36:03.724995       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0916 10:36:03.733976       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0916 10:36:04.601449       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0916 10:36:05.413610       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 10:36:05.430933       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 10:36:05.470801       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 10:36:05.494981       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 10:36:05.501594       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 10:36:07.306638       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 10:36:07.353251       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 10:36:35.784080       1 controller.go:615] quota admission added evaluator for: namespaces
	I0916 10:36:35.870442       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0916 10:36:36.214207       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.245.38"}
	I0916 10:36:36.266580       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.134.249"}
	
	
	==> kube-controller-manager [7b4648b5566f070359fd2b71b0a6be370ac0bf268f558065e56f6c081c9da147] <==
	I0916 10:35:14.120843       1 shared_informer.go:320] Caches are synced for HPA
	I0916 10:35:14.121152       1 shared_informer.go:320] Caches are synced for TTL
	I0916 10:35:14.122526       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0916 10:35:14.122616       1 shared_informer.go:320] Caches are synced for ephemeral
	I0916 10:35:14.122690       1 shared_informer.go:320] Caches are synced for taint
	I0916 10:35:14.122803       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0916 10:35:14.123280       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-553844"
	I0916 10:35:14.124941       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0916 10:35:14.144150       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0916 10:35:14.146147       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0916 10:35:14.148698       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0916 10:35:14.153801       1 shared_informer.go:320] Caches are synced for PV protection
	I0916 10:35:14.209749       1 shared_informer.go:320] Caches are synced for persistent volume
	I0916 10:35:14.242927       1 shared_informer.go:320] Caches are synced for attach detach
	I0916 10:35:14.298281       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:35:14.321144       1 shared_informer.go:320] Caches are synced for stateful set
	I0916 10:35:14.321212       1 shared_informer.go:320] Caches are synced for daemon sets
	I0916 10:35:14.326094       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:35:14.534087       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="385.245988ms"
	I0916 10:35:14.534305       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="82.383µs"
	I0916 10:35:14.753631       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:35:14.816601       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:35:14.816647       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0916 10:35:17.621436       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="88.997µs"
	I0916 10:35:41.634518       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-553844"
	
	
	==> kube-controller-manager [c9f67c6f5bac2b4d892ff7cea8997f9b3637e462d67156858bf6b4bc4872dbb8] <==
	I0916 10:36:09.541443       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="53.335µs"
	I0916 10:36:35.951981       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="69.871865ms"
	E0916 10:36:35.952211       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:36:35.978927       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="25.578864ms"
	E0916 10:36:35.978957       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:36:35.994958       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="63.538461ms"
	E0916 10:36:35.994986       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:36:36.001891       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="21.771682ms"
	E0916 10:36:36.001938       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:36:36.003346       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="7.077188ms"
	E0916 10:36:36.003375       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:36:36.028226       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="25.154625ms"
	E0916 10:36:36.028255       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:36:36.028309       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="21.00784ms"
	E0916 10:36:36.028318       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:36:36.077252       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="47.436681ms"
	I0916 10:36:36.085703       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="55.571199ms"
	I0916 10:36:36.109418       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="23.501919ms"
	I0916 10:36:36.109989       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="54.429µs"
	I0916 10:36:36.132530       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="133.25µs"
	I0916 10:36:36.174023       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="96.044739ms"
	I0916 10:36:36.178490       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="347.241µs"
	I0916 10:36:43.546967       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="21.668049ms"
	I0916 10:36:43.547422       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="91.723µs"
	I0916 10:37:04.564970       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-553844"
	
	
	==> kube-proxy [5ef8ee89662fcae36d3705fd7eef6e5e8e8ed2765a0c899ea2692bc5e55770bb] <==
	W0916 10:34:58.431668       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:34:58.431713       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:34:58.431778       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:34:58.431838       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:34:59.284989       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:34:59.285188       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:34:59.332364       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-553844&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:34:59.332464       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-553844&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:34:59.470296       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:34:59.470425       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:35:01.798494       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-553844&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:35:01.798626       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-553844&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:35:01.949792       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:35:01.949869       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:35:02.221487       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:35:02.221565       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:35:06.652928       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:35:06.652990       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:35:07.272641       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-553844&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:35:07.272703       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-553844&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:35:07.363931       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:35:07.363993       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	I0916 10:35:14.930499       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:35:15.331242       1 shared_informer.go:320] Caches are synced for node config
	I0916 10:35:16.430835       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [f6cef4575c2c36cab82d9941cc254a3c3977cd977294bd11ce7e392d1ed1ba87] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0916 10:36:05.087142       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0916 10:36:05.094687       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.230"]
	E0916 10:36:05.094768       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:36:05.128908       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0916 10:36:05.128955       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0916 10:36:05.128978       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:36:05.131583       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:36:05.131810       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:36:05.131834       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:36:05.133708       1 config.go:199] "Starting service config controller"
	I0916 10:36:05.133764       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:36:05.133809       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:36:05.133827       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:36:05.134323       1 config.go:328] "Starting node config controller"
	I0916 10:36:05.134353       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:36:05.234169       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:36:05.234184       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:36:05.234413       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [281ad6489fa866a8728b85bb6026eaba03f414f1541e54e8bb20b881d74c5986] <==
	I0916 10:36:01.918697       1 serving.go:386] Generated self-signed cert in-memory
	W0916 10:36:03.635711       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0916 10:36:03.637927       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0916 10:36:03.638183       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 10:36:03.638223       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 10:36:03.699405       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0916 10:36:03.699443       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:36:03.708723       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0916 10:36:03.708883       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0916 10:36:03.708916       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 10:36:03.725362       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0916 10:36:03.809763       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [c9566037419faa5d15b1a25d437bab469eb973d38022593dd1fcea875b372030] <==
	I0916 10:35:09.773229       1 serving.go:386] Generated self-signed cert in-memory
	W0916 10:35:10.768440       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0916 10:35:10.768857       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0916 10:35:10.768917       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 10:35:10.768943       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 10:35:10.817479       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0916 10:35:10.817581       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:35:10.824338       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0916 10:35:10.824417       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 10:35:10.825100       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0916 10:35:10.825460       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0916 10:35:10.925324       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 10:35:43.621150       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0916 10:35:43.621340       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0916 10:35:43.621677       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0916 10:35:43.622018       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 16 10:37:00 functional-553844 kubelet[4984]: E0916 10:37:00.206999    4984 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 16 10:37:00 functional-553844 kubelet[4984]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 16 10:37:00 functional-553844 kubelet[4984]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 16 10:37:00 functional-553844 kubelet[4984]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 16 10:37:00 functional-553844 kubelet[4984]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 16 10:37:00 functional-553844 kubelet[4984]: I0916 10:37:00.335500    4984 scope.go:117] "RemoveContainer" containerID="a8a2455326fe0510166f5919a7ba808e002c9e7e6f6bac0073ebcdf2617d2c12"
	Sep 16 10:37:10 functional-553844 kubelet[4984]: E0916 10:37:10.191497    4984 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483030190999326,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:201078,},InodesUsed:&UInt64Value{Value:103,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:37:10 functional-553844 kubelet[4984]: E0916 10:37:10.191844    4984 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483030190999326,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:201078,},InodesUsed:&UInt64Value{Value:103,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:37:20 functional-553844 kubelet[4984]: E0916 10:37:20.194307    4984 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483040193890552,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:201078,},InodesUsed:&UInt64Value{Value:103,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:37:20 functional-553844 kubelet[4984]: E0916 10:37:20.194568    4984 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483040193890552,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:201078,},InodesUsed:&UInt64Value{Value:103,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:37:30 functional-553844 kubelet[4984]: E0916 10:37:30.196508    4984 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483050195990853,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:201078,},InodesUsed:&UInt64Value{Value:103,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:37:30 functional-553844 kubelet[4984]: E0916 10:37:30.196883    4984 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483050195990853,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:201078,},InodesUsed:&UInt64Value{Value:103,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:37:40 functional-553844 kubelet[4984]: E0916 10:37:40.199430    4984 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483060198759959,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:201078,},InodesUsed:&UInt64Value{Value:103,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:37:40 functional-553844 kubelet[4984]: E0916 10:37:40.199700    4984 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483060198759959,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:201078,},InodesUsed:&UInt64Value{Value:103,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:37:50 functional-553844 kubelet[4984]: E0916 10:37:50.202474    4984 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483070201961495,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:201078,},InodesUsed:&UInt64Value{Value:103,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:37:50 functional-553844 kubelet[4984]: E0916 10:37:50.202886    4984 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483070201961495,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:201078,},InodesUsed:&UInt64Value{Value:103,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:38:00 functional-553844 kubelet[4984]: E0916 10:38:00.200922    4984 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 16 10:38:00 functional-553844 kubelet[4984]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 16 10:38:00 functional-553844 kubelet[4984]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 16 10:38:00 functional-553844 kubelet[4984]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 16 10:38:00 functional-553844 kubelet[4984]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 16 10:38:00 functional-553844 kubelet[4984]: E0916 10:38:00.204911    4984 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483080204543897,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:201078,},InodesUsed:&UInt64Value{Value:103,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:38:00 functional-553844 kubelet[4984]: E0916 10:38:00.204990    4984 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483080204543897,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:201078,},InodesUsed:&UInt64Value{Value:103,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:38:10 functional-553844 kubelet[4984]: E0916 10:38:10.207086    4984 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483090206418917,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:201078,},InodesUsed:&UInt64Value{Value:103,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:38:10 functional-553844 kubelet[4984]: E0916 10:38:10.207283    4984 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483090206418917,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:201078,},InodesUsed:&UInt64Value{Value:103,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> kubernetes-dashboard [b13598b3e2d2933deb31266d0baa8253508898420512d31afa1daf24a537bca6] <==
	2024/09/16 10:36:43 Starting overwatch
	2024/09/16 10:36:43 Using namespace: kubernetes-dashboard
	2024/09/16 10:36:43 Using in-cluster config to connect to apiserver
	2024/09/16 10:36:43 Using secret token for csrf signing
	2024/09/16 10:36:43 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/09/16 10:36:43 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/09/16 10:36:43 Successful initial request to the apiserver, version: v1.31.1
	2024/09/16 10:36:43 Generating JWE encryption key
	2024/09/16 10:36:43 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/09/16 10:36:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/09/16 10:36:43 Initializing JWE encryption key from synchronized object
	2024/09/16 10:36:43 Creating in-cluster Sidecar client
	2024/09/16 10:36:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 10:36:43 Serving insecurely on HTTP port: 9090
	2024/09/16 10:37:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 10:37:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [11c7df787d684e9cc5ca8f6cf633ac9055d7ef72c0a76d54b25fd4a3d62f7b02] <==
	I0916 10:34:56.077531       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 10:34:58.308783       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 10:34:58.325776       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	E0916 10:34:59.385726       1 leaderelection.go:361] Failed to update lock: Put "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0916 10:35:02.837859       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0916 10:35:07.096688       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	I0916 10:35:10.935925       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 10:35:10.936824       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-553844_6476f869-e006-4732-b59f-a625eeed2789!
	I0916 10:35:10.936273       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e694f165-5b09-46c9-81ea-a2730989eaff", APIVersion:"v1", ResourceVersion:"401", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-553844_6476f869-e006-4732-b59f-a625eeed2789 became leader
	I0916 10:35:11.037327       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-553844_6476f869-e006-4732-b59f-a625eeed2789!
	
	
	==> storage-provisioner [410bd23d1eb3a430c5d377563c819cc6316d348e66c37cda9c98f030584522d1] <==
	I0916 10:36:04.804572       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 10:36:04.881510       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 10:36:04.902536       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 10:36:22.325954       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 10:36:22.326349       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-553844_3b4a5fb5-0feb-4787-a4fd-6f23adb25700!
	I0916 10:36:22.327877       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e694f165-5b09-46c9-81ea-a2730989eaff", APIVersion:"v1", ResourceVersion:"583", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-553844_3b4a5fb5-0feb-4787-a4fd-6f23adb25700 became leader
	I0916 10:36:22.428646       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-553844_3b4a5fb5-0feb-4787-a4fd-6f23adb25700!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-553844 -n functional-553844
helpers_test.go:261: (dbg) Run:  kubectl --context functional-553844 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context functional-553844 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (537.076µs)
helpers_test.go:263: kubectl --context functional-553844 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (103.01s)
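
The "fork/exec /usr/local/bin/kubectl: exec format error" failures above mean the kernel refused to execute the kubectl binary that the test harness invokes, which typically points to an architecture mismatch or a corrupted download on the CI host rather than a cluster-side problem. A minimal sketch of how this could be confirmed, assuming shell access to the host (the `file` and `uname` utilities are standard tools and not part of this report):

	# Inspect the binary format of the kubectl the tests call
	file /usr/local/bin/kubectl
	# Compare with the architecture the kernel reports for this host
	uname -m
	# If the two disagree (for example an aarch64 kubectl on an x86_64 host),
	# replacing kubectl with a build matching the host architecture should
	# clear the "exec format error" seen throughout these tests.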

                                                
                                    
x
+
TestFunctional/parallel/MySQL (3.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-553844 replace --force -f testdata/mysql.yaml
functional_test.go:1793: (dbg) Non-zero exit: kubectl --context functional-553844 replace --force -f testdata/mysql.yaml: fork/exec /usr/local/bin/kubectl: exec format error (428.934µs)
functional_test.go:1795: failed to kubectl replace mysql: args "kubectl --context functional-553844 replace --force -f testdata/mysql.yaml" failed: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-553844 -n functional-553844
helpers_test.go:244: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-553844 logs -n 25: (2.445000599s)
helpers_test.go:252: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	|-----------|------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|  Command  |                                  Args                                  |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|-----------|------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh       | functional-553844 ssh sudo                                             | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC |                     |
	|           | systemctl is-active docker                                             |                   |         |         |                     |                     |
	| ssh       | functional-553844 ssh sudo                                             | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC |                     |
	|           | systemctl is-active containerd                                         |                   |         |         |                     |                     |
	| ssh       | functional-553844 ssh sudo                                             | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC |                     |
	|           | umount -f /mount-9p                                                    |                   |         |         |                     |                     |
	| ssh       | functional-553844 ssh findmnt                                          | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC |                     |
	|           | -T /mount1                                                             |                   |         |         |                     |                     |
	| mount     | -p functional-553844                                                   | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC |                     |
	|           | /tmp/TestFunctionalparallelMountCmdVerifyCleanup2311539262/001:/mount1 |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                 |                   |         |         |                     |                     |
	| mount     | -p functional-553844                                                   | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC |                     |
	|           | /tmp/TestFunctionalparallelMountCmdVerifyCleanup2311539262/001:/mount2 |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                 |                   |         |         |                     |                     |
	| mount     | -p functional-553844                                                   | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC |                     |
	|           | /tmp/TestFunctionalparallelMountCmdVerifyCleanup2311539262/001:/mount3 |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                 |                   |         |         |                     |                     |
	| image     | functional-553844 image load --daemon                                  | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|           | kicbase/echo-server:functional-553844                                  |                   |         |         |                     |                     |
	|           | --alsologtostderr                                                      |                   |         |         |                     |                     |
	| ssh       | functional-553844 ssh findmnt                                          | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|           | -T /mount1                                                             |                   |         |         |                     |                     |
	| ssh       | functional-553844 ssh findmnt                                          | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|           | -T /mount2                                                             |                   |         |         |                     |                     |
	| ssh       | functional-553844 ssh findmnt                                          | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|           | -T /mount3                                                             |                   |         |         |                     |                     |
	| mount     | -p functional-553844                                                   | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC |                     |
	|           | --kill=true                                                            |                   |         |         |                     |                     |
	| ssh       | functional-553844 ssh sudo cat                                         | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|           | /etc/ssl/certs/11203.pem                                               |                   |         |         |                     |                     |
	| ssh       | functional-553844 ssh sudo cat                                         | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|           | /usr/share/ca-certificates/11203.pem                                   |                   |         |         |                     |                     |
	| ssh       | functional-553844 ssh sudo cat                                         | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|           | /etc/ssl/certs/51391683.0                                              |                   |         |         |                     |                     |
	| image     | functional-553844 image ls                                             | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	| start     | -p functional-553844                                                   | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC |                     |
	|           | --dry-run --memory                                                     |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                |                   |         |         |                     |                     |
	|           | --driver=kvm2                                                          |                   |         |         |                     |                     |
	|           | --container-runtime=crio                                               |                   |         |         |                     |                     |
	| ssh       | functional-553844 ssh sudo cat                                         | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|           | /etc/ssl/certs/112032.pem                                              |                   |         |         |                     |                     |
	| start     | -p functional-553844                                                   | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC |                     |
	|           | --dry-run --alsologtostderr                                            |                   |         |         |                     |                     |
	|           | -v=1 --driver=kvm2                                                     |                   |         |         |                     |                     |
	|           | --container-runtime=crio                                               |                   |         |         |                     |                     |
	| image     | functional-553844 image load --daemon                                  | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|           | kicbase/echo-server:functional-553844                                  |                   |         |         |                     |                     |
	|           | --alsologtostderr                                                      |                   |         |         |                     |                     |
	| ssh       | functional-553844 ssh sudo cat                                         | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|           | /usr/share/ca-certificates/112032.pem                                  |                   |         |         |                     |                     |
	| start     | -p functional-553844                                                   | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC |                     |
	|           | --dry-run --memory                                                     |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                |                   |         |         |                     |                     |
	|           | --driver=kvm2                                                          |                   |         |         |                     |                     |
	|           | --container-runtime=crio                                               |                   |         |         |                     |                     |
	| ssh       | functional-553844 ssh sudo cat                                         | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|           | /etc/test/nested/copy/11203/hosts                                      |                   |         |         |                     |                     |
	| ssh       | functional-553844 ssh sudo cat                                         | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|           | /etc/ssl/certs/3ec20f2e.0                                              |                   |         |         |                     |                     |
	| dashboard | --url --port 36195                                                     | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC |                     |
	|           | -p functional-553844                                                   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                 |                   |         |         |                     |                     |
	|-----------|------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
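For anyone replaying these entries by hand, a minimal sketch of the mount-and-verify pattern the table above exercises (the profile name, the findmnt check, and the --kill cleanup mirror the table entries; the host path /tmp/example-src and the backgrounding with & are illustrative assumptions):

	# Mount a host directory into the guest; minikube mount stays in the foreground, so it is backgrounded here (illustrative)
	minikube mount -p functional-553844 /tmp/example-src:/mount1 --alsologtostderr -v=1 &
	# From inside the guest, confirm the mount point is present
	minikube -p functional-553844 ssh "findmnt -T /mount1"
	# Tear the mount down again, as the table's final mount entry does
	minikube mount -p functional-553844 --kill=true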
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:36:34
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:36:34.139611   20738 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:36:34.139721   20738 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:36:34.139734   20738 out.go:358] Setting ErrFile to fd 2...
	I0916 10:36:34.139739   20738 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:36:34.140025   20738 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3851/.minikube/bin
	I0916 10:36:34.140533   20738 out.go:352] Setting JSON to false
	I0916 10:36:34.141585   20738 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1144,"bootTime":1726481850,"procs":331,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:36:34.141651   20738 start.go:139] virtualization: kvm guest
	I0916 10:36:34.143781   20738 out.go:177] * [functional-553844] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:36:34.145184   20738 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:36:34.145216   20738 notify.go:220] Checking for updates...
	I0916 10:36:34.147692   20738 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:36:34.148817   20738 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3851/kubeconfig
	I0916 10:36:34.150295   20738 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 10:36:34.151528   20738 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:36:34.152665   20738 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:36:34.154384   20738 config.go:182] Loaded profile config "functional-553844": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:36:34.155020   20738 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:36:34.155078   20738 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:36:34.171342   20738 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33011
	I0916 10:36:34.172227   20738 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:36:34.172727   20738 main.go:141] libmachine: Using API Version  1
	I0916 10:36:34.172781   20738 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:36:34.173244   20738 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:36:34.173423   20738 main.go:141] libmachine: (functional-553844) Calling .DriverName
	I0916 10:36:34.173652   20738 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:36:34.173932   20738 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:36:34.173966   20738 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:36:34.190306   20738 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42651
	I0916 10:36:34.190589   20738 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:36:34.191111   20738 main.go:141] libmachine: Using API Version  1
	I0916 10:36:34.191136   20738 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:36:34.191610   20738 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:36:34.191807   20738 main.go:141] libmachine: (functional-553844) Calling .DriverName
	I0916 10:36:34.226275   20738 out.go:177] * Using the kvm2 driver based on the existing profile
	I0916 10:36:34.227529   20738 start.go:297] selected driver: kvm2
	I0916 10:36:34.227545   20738 start.go:901] validating driver "kvm2" against &{Name:functional-553844 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:functional-553844 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:36:34.227685   20738 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:36:34.229775   20738 out.go:201] 
	W0916 10:36:34.231185   20738 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: The requested memory allocation of 250 MiB is below the usable minimum of 1800 MB
	I0916 10:36:34.232371   20738 out.go:201] 
	
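The dry-run above exits with RSRC_INSUFFICIENT_REQ_MEMORY because the requested 250 MiB is below minikube's usable minimum of 1800 MB. A hedged sketch of the same invocation with an allocation above that floor (the 2048MB figure is an illustrative assumption; the remaining flags mirror the failing command in the log):

	# Same dry-run start against the existing profile, but with memory above the 1800 MB minimum
	minikube start -p functional-553844 --dry-run --memory 2048MB --alsologtostderr --driver=kvm2 --container-runtime=crio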
	
	==> CRI-O <==
	Sep 16 10:36:35 functional-553844 crio[4747]: time="2024-09-16 10:36:35.317176912Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=79dd647c-9956-4324-bb37-1ad2ea75c158 name=/runtime.v1.RuntimeService/Version
	Sep 16 10:36:35 functional-553844 crio[4747]: time="2024-09-16 10:36:35.318651004Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9f70146e-7db4-46e5-ae34-983fcdba68de name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:36:35 functional-553844 crio[4747]: time="2024-09-16 10:36:35.319382512Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482995319355789,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:164944,},InodesUsed:&UInt64Value{Value:82,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9f70146e-7db4-46e5-ae34-983fcdba68de name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:36:35 functional-553844 crio[4747]: time="2024-09-16 10:36:35.319950696Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b88274ad-6cd7-4851-801f-3b9c8e5aa974 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:36:35 functional-553844 crio[4747]: time="2024-09-16 10:36:35.320026375Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b88274ad-6cd7-4851-801f-3b9c8e5aa974 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:36:35 functional-553844 crio[4747]: time="2024-09-16 10:36:35.320359033Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:11b04a7db79238496dd7e10a87ed4228200fc6950497535ac76465928dced22a,PodSandboxId:42c99506917bdb547de481b6b2b3da32439a0a4396f956f8bb3b1dcb33e0d1e9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726482964931520126,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ntnpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0dcfd13-b1bc-45ef-9800-c98d2063bd43,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6cef4575c2c36cab82d9941cc254a3c3977cd977294bd11ce7e392d1ed1ba87,PodSandboxId:b5b2cd43518619d818dfcf93e0e524dd8b0e680615d574fe46c92af79a3e1a44,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726482964645653251,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8d5zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 7709f753-5ea7-43c8-9573-107c8507e92b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:410bd23d1eb3a430c5d377563c819cc6316d348e66c37cda9c98f030584522d1,PodSandboxId:66c3c1fc355f337c0301a8a61ebd1b9a264940d27de9e5ab8ce3e6d9ff23f695,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726482964572647163,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41
228d6-b7ff-4315-b9c5-05b5cc4d0acd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:161c7c3a6dbc9f43b2edc204c828f8b7b5673da629beae63db044d512333e2ff,PodSandboxId:1cf845fd98fb9cec148aebce7960a866cf0aa9de0bbccc2479d3f8356e0402a1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726482960901308832,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b392e106920b290edb060cbb3942770e,},Annotat
ions:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:281ad6489fa866a8728b85bb6026eaba03f414f1541e54e8bb20b881d74c5986,PodSandboxId:30d387489b797f7f610fee5a80ba390e392d599bf0fcae9adff2ab82e5282aff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726482960907328332,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e9406d783b81f1f83bb9b03dd50757a,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9f67c6f5bac2b4d892ff7cea8997f9b3637e462d67156858bf6b4bc4872dbb8,PodSandboxId:7ff3b4db4c3a1ec5a815fbd8fb33ce885ada6ab25ca592e100af16decad6e364,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726482960864170254,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ba1ce2146f556353256cee766fb22aa,},Annota
tions:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40e128caccd10da3c2b236dc916c1ea036d584c77141e639345656d122c4edf5,PodSandboxId:4f30e9290df9fed7dc156600cc22a14550577fc179ba762933d7ddb98b54f18d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726482960795838086,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a02ea4105f59739cf4b87fcb1443f22,},Annotations:map[str
ing]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9566037419faa5d15b1a25d437bab469eb973d38022593dd1fcea875b372030,PodSandboxId:224c8313d2a4b81f7902acab9a95c4674bb885601e7d6c432c194d4704f68448,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726482907910102968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e9406d783b81f1f83bb9b03dd50757a,},Annotations:map[string]string{io.
kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b4648b5566f070359fd2b71b0a6be370ac0bf268f558065e56f6c081c9da147,PodSandboxId:786e02c9f268ffa4631ec0765f6e369ee8084dbb9f6f0ceb206328ad1483a95b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726482907861534544,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ba1ce2146f556353256cee766fb22aa,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8a2455326fe0510166f5919a7ba808e002c9e7e6f6bac0073ebcdf2617d2c12,PodSandboxId:f630bd7b31a9986a64781d12465cbfb94677fffd47ee676d73cb45631f7bb0b1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726482907858407120,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cf351cdb4e05fb19a16881fc8f9a8bc,},Annotations:map[string]string{io.k
ubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ef8ee89662fcae36d3705fd7eef6e5e8e8ed2765a0c899ea2692bc5e55770bb,PodSandboxId:795a8e1b509b3194f41662edcf7b963b352e10b9d3a0b2b7bc09bb8c879e6c3c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726482895162476253,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8d5zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7709f753-5ea7-43c8-9573-107c8507e92b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59
,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c7df787d684e9cc5ca8f6cf633ac9055d7ef72c0a76d54b25fd4a3d62f7b02,PodSandboxId:f234b24619f341b5993ad8541868405403fc798ed93fed40f3108616a4659944,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726482895179848533,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41228d6-b7ff-4315-b9c5-05b5cc4d0acd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8addedc5b3b7251bda1891ec03e7b558acf7e48a679719cc2d0eb9af89051324,PodSandboxId:5de6db3341a3523f45534d3d25f38a36a50d8a1f094c79ff3c7afffbde2686bb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726482895774247566,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ntnpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0dcfd13-b1bc-45ef-9800-c98d2063bd43,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"
dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dda8bc32e425e06c63a2ccb84bdd071dc515520e554e6e3a5a8b376e9a65c15a,PodSandboxId:b212b903ed97c57b27e7215769435327df006237ec57bc4233e178bfb5d746f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726482895106900933,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553844,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: b392e106920b290edb060cbb3942770e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b88274ad-6cd7-4851-801f-3b9c8e5aa974 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:36:35 functional-553844 crio[4747]: time="2024-09-16 10:36:35.394938516Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2c0a6c6e-ca0d-40a0-ad2e-3828113505b8 name=/runtime.v1.RuntimeService/Version
	Sep 16 10:36:35 functional-553844 crio[4747]: time="2024-09-16 10:36:35.395013097Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2c0a6c6e-ca0d-40a0-ad2e-3828113505b8 name=/runtime.v1.RuntimeService/Version
	Sep 16 10:36:35 functional-553844 crio[4747]: time="2024-09-16 10:36:35.397160662Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=41a64d61-f8d8-4766-b932-c782bc58dbab name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:36:35 functional-553844 crio[4747]: time="2024-09-16 10:36:35.397656965Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482995397633642,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:164944,},InodesUsed:&UInt64Value{Value:82,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=41a64d61-f8d8-4766-b932-c782bc58dbab name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:36:35 functional-553844 crio[4747]: time="2024-09-16 10:36:35.398271398Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fc8073e4-7a51-4b82-bd91-30fd1e8513a6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:36:35 functional-553844 crio[4747]: time="2024-09-16 10:36:35.398355493Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fc8073e4-7a51-4b82-bd91-30fd1e8513a6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:36:35 functional-553844 crio[4747]: time="2024-09-16 10:36:35.398749778Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:11b04a7db79238496dd7e10a87ed4228200fc6950497535ac76465928dced22a,PodSandboxId:42c99506917bdb547de481b6b2b3da32439a0a4396f956f8bb3b1dcb33e0d1e9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726482964931520126,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ntnpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0dcfd13-b1bc-45ef-9800-c98d2063bd43,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6cef4575c2c36cab82d9941cc254a3c3977cd977294bd11ce7e392d1ed1ba87,PodSandboxId:b5b2cd43518619d818dfcf93e0e524dd8b0e680615d574fe46c92af79a3e1a44,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726482964645653251,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8d5zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 7709f753-5ea7-43c8-9573-107c8507e92b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:410bd23d1eb3a430c5d377563c819cc6316d348e66c37cda9c98f030584522d1,PodSandboxId:66c3c1fc355f337c0301a8a61ebd1b9a264940d27de9e5ab8ce3e6d9ff23f695,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726482964572647163,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41
228d6-b7ff-4315-b9c5-05b5cc4d0acd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:161c7c3a6dbc9f43b2edc204c828f8b7b5673da629beae63db044d512333e2ff,PodSandboxId:1cf845fd98fb9cec148aebce7960a866cf0aa9de0bbccc2479d3f8356e0402a1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726482960901308832,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b392e106920b290edb060cbb3942770e,},Annotat
ions:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:281ad6489fa866a8728b85bb6026eaba03f414f1541e54e8bb20b881d74c5986,PodSandboxId:30d387489b797f7f610fee5a80ba390e392d599bf0fcae9adff2ab82e5282aff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726482960907328332,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e9406d783b81f1f83bb9b03dd50757a,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9f67c6f5bac2b4d892ff7cea8997f9b3637e462d67156858bf6b4bc4872dbb8,PodSandboxId:7ff3b4db4c3a1ec5a815fbd8fb33ce885ada6ab25ca592e100af16decad6e364,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726482960864170254,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ba1ce2146f556353256cee766fb22aa,},Annota
tions:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40e128caccd10da3c2b236dc916c1ea036d584c77141e639345656d122c4edf5,PodSandboxId:4f30e9290df9fed7dc156600cc22a14550577fc179ba762933d7ddb98b54f18d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726482960795838086,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a02ea4105f59739cf4b87fcb1443f22,},Annotations:map[str
ing]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9566037419faa5d15b1a25d437bab469eb973d38022593dd1fcea875b372030,PodSandboxId:224c8313d2a4b81f7902acab9a95c4674bb885601e7d6c432c194d4704f68448,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726482907910102968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e9406d783b81f1f83bb9b03dd50757a,},Annotations:map[string]string{io.
kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b4648b5566f070359fd2b71b0a6be370ac0bf268f558065e56f6c081c9da147,PodSandboxId:786e02c9f268ffa4631ec0765f6e369ee8084dbb9f6f0ceb206328ad1483a95b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726482907861534544,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ba1ce2146f556353256cee766fb22aa,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8a2455326fe0510166f5919a7ba808e002c9e7e6f6bac0073ebcdf2617d2c12,PodSandboxId:f630bd7b31a9986a64781d12465cbfb94677fffd47ee676d73cb45631f7bb0b1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726482907858407120,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cf351cdb4e05fb19a16881fc8f9a8bc,},Annotations:map[string]string{io.k
ubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ef8ee89662fcae36d3705fd7eef6e5e8e8ed2765a0c899ea2692bc5e55770bb,PodSandboxId:795a8e1b509b3194f41662edcf7b963b352e10b9d3a0b2b7bc09bb8c879e6c3c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726482895162476253,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8d5zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7709f753-5ea7-43c8-9573-107c8507e92b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59
,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c7df787d684e9cc5ca8f6cf633ac9055d7ef72c0a76d54b25fd4a3d62f7b02,PodSandboxId:f234b24619f341b5993ad8541868405403fc798ed93fed40f3108616a4659944,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726482895179848533,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41228d6-b7ff-4315-b9c5-05b5cc4d0acd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8addedc5b3b7251bda1891ec03e7b558acf7e48a679719cc2d0eb9af89051324,PodSandboxId:5de6db3341a3523f45534d3d25f38a36a50d8a1f094c79ff3c7afffbde2686bb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726482895774247566,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ntnpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0dcfd13-b1bc-45ef-9800-c98d2063bd43,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"
dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dda8bc32e425e06c63a2ccb84bdd071dc515520e554e6e3a5a8b376e9a65c15a,PodSandboxId:b212b903ed97c57b27e7215769435327df006237ec57bc4233e178bfb5d746f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726482895106900933,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553844,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: b392e106920b290edb060cbb3942770e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fc8073e4-7a51-4b82-bd91-30fd1e8513a6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:36:35 functional-553844 crio[4747]: time="2024-09-16 10:36:35.430619477Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=076195b1-e118-434d-a6c7-b196f95f0f73 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 16 10:36:35 functional-553844 crio[4747]: time="2024-09-16 10:36:35.431173443Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:42c99506917bdb547de481b6b2b3da32439a0a4396f956f8bb3b1dcb33e0d1e9,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-ntnpc,Uid:a0dcfd13-b1bc-45ef-9800-c98d2063bd43,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1726482964503840880,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-ntnpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0dcfd13-b1bc-45ef-9800-c98d2063bd43,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-16T10:36:04.090113434Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b5b2cd43518619d818dfcf93e0e524dd8b0e680615d574fe46c92af79a3e1a44,Metadata:&PodSandboxMetadata{Name:kube-proxy-8d5zp,Uid:7709f753-5ea7-43c8-9573-107c8507e92b,Namespace:kube-system,At
tempt:2,},State:SANDBOX_READY,CreatedAt:1726482964421341057,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-8d5zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7709f753-5ea7-43c8-9573-107c8507e92b,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-16T10:36:04.090109714Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:66c3c1fc355f337c0301a8a61ebd1b9a264940d27de9e5ab8ce3e6d9ff23f695,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:f41228d6-b7ff-4315-b9c5-05b5cc4d0acd,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1726482964420348585,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41228d6-b7ff-4315-b9c5-05b5c
c4d0acd,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-16T10:36:04.090112417Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7ff3b4db4c3a1ec5a815fbd8fb33ce885ada6ab25ca592e100af16decad6e364,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-functional-553844,Uid:0
ba1ce2146f556353256cee766fb22aa,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1726482960637729447,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ba1ce2146f556353256cee766fb22aa,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 0ba1ce2146f556353256cee766fb22aa,kubernetes.io/config.seen: 2024-09-16T10:36:00.085765544Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1cf845fd98fb9cec148aebce7960a866cf0aa9de0bbccc2479d3f8356e0402a1,Metadata:&PodSandboxMetadata{Name:etcd-functional-553844,Uid:b392e106920b290edb060cbb3942770e,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1726482960635566812,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: b392e106920b290edb060cbb3942770e,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.230:2379,kubernetes.io/config.hash: b392e106920b290edb060cbb3942770e,kubernetes.io/config.seen: 2024-09-16T10:36:00.085760633Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:30d387489b797f7f610fee5a80ba390e392d599bf0fcae9adff2ab82e5282aff,Metadata:&PodSandboxMetadata{Name:kube-scheduler-functional-553844,Uid:8e9406d783b81f1f83bb9b03dd50757a,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1726482960630161383,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e9406d783b81f1f83bb9b03dd50757a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 8e9406d783b81f1f83bb9b03dd50757a,kubernetes.io/config.seen: 2024-09-16T10:36:00.085766
368Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4f30e9290df9fed7dc156600cc22a14550577fc179ba762933d7ddb98b54f18d,Metadata:&PodSandboxMetadata{Name:kube-apiserver-functional-553844,Uid:9a02ea4105f59739cf4b87fcb1443f22,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726482960628528424,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a02ea4105f59739cf4b87fcb1443f22,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.230:8441,kubernetes.io/config.hash: 9a02ea4105f59739cf4b87fcb1443f22,kubernetes.io/config.seen: 2024-09-16T10:36:00.085764464Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5de6db3341a3523f45534d3d25f38a36a50d8a1f094c79ff3c7afffbde2686bb,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-ntnpc,Uid:a0dcfd
13-b1bc-45ef-9800-c98d2063bd43,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1726482894904308562,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-ntnpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0dcfd13-b1bc-45ef-9800-c98d2063bd43,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-16T10:34:28.771785063Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b212b903ed97c57b27e7215769435327df006237ec57bc4233e178bfb5d746f5,Metadata:&PodSandboxMetadata{Name:etcd-functional-553844,Uid:b392e106920b290edb060cbb3942770e,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1726482894689442623,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b392e106920b290edb060cbb3942770e,tier: control-plane,},Anno
tations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.230:2379,kubernetes.io/config.hash: b392e106920b290edb060cbb3942770e,kubernetes.io/config.seen: 2024-09-16T10:34:23.495742323Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:786e02c9f268ffa4631ec0765f6e369ee8084dbb9f6f0ceb206328ad1483a95b,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-functional-553844,Uid:0ba1ce2146f556353256cee766fb22aa,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1726482894683673418,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ba1ce2146f556353256cee766fb22aa,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 0ba1ce2146f556353256cee766fb22aa,kubernetes.io/config.seen: 2024-09-16T10:34:23.495746981Z,kubernetes.io/config.source: file,},Runti
meHandler:,},&PodSandbox{Id:f234b24619f341b5993ad8541868405403fc798ed93fed40f3108616a4659944,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:f41228d6-b7ff-4315-b9c5-05b5cc4d0acd,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1726482894674601131,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41228d6-b7ff-4315-b9c5-05b5cc4d0acd,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPol
icy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-16T10:34:29.104676833Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f630bd7b31a9986a64781d12465cbfb94677fffd47ee676d73cb45631f7bb0b1,Metadata:&PodSandboxMetadata{Name:kube-apiserver-functional-553844,Uid:0cf351cdb4e05fb19a16881fc8f9a8bc,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1726482894607884652,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cf351cdb4e05fb19a16881fc8f9a8bc,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.16
8.39.230:8441,kubernetes.io/config.hash: 0cf351cdb4e05fb19a16881fc8f9a8bc,kubernetes.io/config.seen: 2024-09-16T10:34:23.495745832Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:224c8313d2a4b81f7902acab9a95c4674bb885601e7d6c432c194d4704f68448,Metadata:&PodSandboxMetadata{Name:kube-scheduler-functional-553844,Uid:8e9406d783b81f1f83bb9b03dd50757a,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1726482894597172869,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e9406d783b81f1f83bb9b03dd50757a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 8e9406d783b81f1f83bb9b03dd50757a,kubernetes.io/config.seen: 2024-09-16T10:34:23.495747781Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:795a8e1b509b3194f41662edcf7b963b352e10b9d3a0b2b7bc09bb8c879e6c3c,Metadata:&PodSandboxMeta
data{Name:kube-proxy-8d5zp,Uid:7709f753-5ea7-43c8-9573-107c8507e92b,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1726482894551979571,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-8d5zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7709f753-5ea7-43c8-9573-107c8507e92b,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-16T10:34:28.495685336Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=076195b1-e118-434d-a6c7-b196f95f0f73 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 16 10:36:35 functional-553844 crio[4747]: time="2024-09-16 10:36:35.432273474Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bc1ae35f-cd0e-4d3a-9556-6e1bbbf7c7fc name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:36:35 functional-553844 crio[4747]: time="2024-09-16 10:36:35.432369926Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bc1ae35f-cd0e-4d3a-9556-6e1bbbf7c7fc name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:36:35 functional-553844 crio[4747]: time="2024-09-16 10:36:35.432783550Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:11b04a7db79238496dd7e10a87ed4228200fc6950497535ac76465928dced22a,PodSandboxId:42c99506917bdb547de481b6b2b3da32439a0a4396f956f8bb3b1dcb33e0d1e9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726482964931520126,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ntnpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0dcfd13-b1bc-45ef-9800-c98d2063bd43,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6cef4575c2c36cab82d9941cc254a3c3977cd977294bd11ce7e392d1ed1ba87,PodSandboxId:b5b2cd43518619d818dfcf93e0e524dd8b0e680615d574fe46c92af79a3e1a44,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726482964645653251,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8d5zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 7709f753-5ea7-43c8-9573-107c8507e92b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:410bd23d1eb3a430c5d377563c819cc6316d348e66c37cda9c98f030584522d1,PodSandboxId:66c3c1fc355f337c0301a8a61ebd1b9a264940d27de9e5ab8ce3e6d9ff23f695,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726482964572647163,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41
228d6-b7ff-4315-b9c5-05b5cc4d0acd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:161c7c3a6dbc9f43b2edc204c828f8b7b5673da629beae63db044d512333e2ff,PodSandboxId:1cf845fd98fb9cec148aebce7960a866cf0aa9de0bbccc2479d3f8356e0402a1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726482960901308832,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b392e106920b290edb060cbb3942770e,},Annotat
ions:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:281ad6489fa866a8728b85bb6026eaba03f414f1541e54e8bb20b881d74c5986,PodSandboxId:30d387489b797f7f610fee5a80ba390e392d599bf0fcae9adff2ab82e5282aff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726482960907328332,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e9406d783b81f1f83bb9b03dd50757a,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9f67c6f5bac2b4d892ff7cea8997f9b3637e462d67156858bf6b4bc4872dbb8,PodSandboxId:7ff3b4db4c3a1ec5a815fbd8fb33ce885ada6ab25ca592e100af16decad6e364,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726482960864170254,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ba1ce2146f556353256cee766fb22aa,},Annota
tions:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40e128caccd10da3c2b236dc916c1ea036d584c77141e639345656d122c4edf5,PodSandboxId:4f30e9290df9fed7dc156600cc22a14550577fc179ba762933d7ddb98b54f18d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726482960795838086,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a02ea4105f59739cf4b87fcb1443f22,},Annotations:map[str
ing]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9566037419faa5d15b1a25d437bab469eb973d38022593dd1fcea875b372030,PodSandboxId:224c8313d2a4b81f7902acab9a95c4674bb885601e7d6c432c194d4704f68448,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726482907910102968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e9406d783b81f1f83bb9b03dd50757a,},Annotations:map[string]string{io.
kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b4648b5566f070359fd2b71b0a6be370ac0bf268f558065e56f6c081c9da147,PodSandboxId:786e02c9f268ffa4631ec0765f6e369ee8084dbb9f6f0ceb206328ad1483a95b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726482907861534544,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ba1ce2146f556353256cee766fb22aa,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8a2455326fe0510166f5919a7ba808e002c9e7e6f6bac0073ebcdf2617d2c12,PodSandboxId:f630bd7b31a9986a64781d12465cbfb94677fffd47ee676d73cb45631f7bb0b1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726482907858407120,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cf351cdb4e05fb19a16881fc8f9a8bc,},Annotations:map[string]string{io.k
ubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ef8ee89662fcae36d3705fd7eef6e5e8e8ed2765a0c899ea2692bc5e55770bb,PodSandboxId:795a8e1b509b3194f41662edcf7b963b352e10b9d3a0b2b7bc09bb8c879e6c3c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726482895162476253,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8d5zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7709f753-5ea7-43c8-9573-107c8507e92b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59
,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c7df787d684e9cc5ca8f6cf633ac9055d7ef72c0a76d54b25fd4a3d62f7b02,PodSandboxId:f234b24619f341b5993ad8541868405403fc798ed93fed40f3108616a4659944,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726482895179848533,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41228d6-b7ff-4315-b9c5-05b5cc4d0acd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8addedc5b3b7251bda1891ec03e7b558acf7e48a679719cc2d0eb9af89051324,PodSandboxId:5de6db3341a3523f45534d3d25f38a36a50d8a1f094c79ff3c7afffbde2686bb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726482895774247566,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ntnpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0dcfd13-b1bc-45ef-9800-c98d2063bd43,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"
dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dda8bc32e425e06c63a2ccb84bdd071dc515520e554e6e3a5a8b376e9a65c15a,PodSandboxId:b212b903ed97c57b27e7215769435327df006237ec57bc4233e178bfb5d746f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726482895106900933,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553844,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: b392e106920b290edb060cbb3942770e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bc1ae35f-cd0e-4d3a-9556-6e1bbbf7c7fc name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:36:35 functional-553844 crio[4747]: time="2024-09-16 10:36:35.460770881Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=de05b86b-524f-4f70-8b81-081cd2847c40 name=/runtime.v1.RuntimeService/Version
	Sep 16 10:36:35 functional-553844 crio[4747]: time="2024-09-16 10:36:35.460903929Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=de05b86b-524f-4f70-8b81-081cd2847c40 name=/runtime.v1.RuntimeService/Version
	Sep 16 10:36:35 functional-553844 crio[4747]: time="2024-09-16 10:36:35.469753478Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=830505b3-340b-4002-8dd8-3a027363e444 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:36:35 functional-553844 crio[4747]: time="2024-09-16 10:36:35.470608591Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482995470569074,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:164944,},InodesUsed:&UInt64Value{Value:82,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=830505b3-340b-4002-8dd8-3a027363e444 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:36:35 functional-553844 crio[4747]: time="2024-09-16 10:36:35.471253262Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=20425b0d-62aa-47b5-9681-706e695d1f35 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:36:35 functional-553844 crio[4747]: time="2024-09-16 10:36:35.471334569Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=20425b0d-62aa-47b5-9681-706e695d1f35 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:36:35 functional-553844 crio[4747]: time="2024-09-16 10:36:35.471630114Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:11b04a7db79238496dd7e10a87ed4228200fc6950497535ac76465928dced22a,PodSandboxId:42c99506917bdb547de481b6b2b3da32439a0a4396f956f8bb3b1dcb33e0d1e9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726482964931520126,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ntnpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0dcfd13-b1bc-45ef-9800-c98d2063bd43,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6cef4575c2c36cab82d9941cc254a3c3977cd977294bd11ce7e392d1ed1ba87,PodSandboxId:b5b2cd43518619d818dfcf93e0e524dd8b0e680615d574fe46c92af79a3e1a44,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726482964645653251,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8d5zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 7709f753-5ea7-43c8-9573-107c8507e92b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:410bd23d1eb3a430c5d377563c819cc6316d348e66c37cda9c98f030584522d1,PodSandboxId:66c3c1fc355f337c0301a8a61ebd1b9a264940d27de9e5ab8ce3e6d9ff23f695,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726482964572647163,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41
228d6-b7ff-4315-b9c5-05b5cc4d0acd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:161c7c3a6dbc9f43b2edc204c828f8b7b5673da629beae63db044d512333e2ff,PodSandboxId:1cf845fd98fb9cec148aebce7960a866cf0aa9de0bbccc2479d3f8356e0402a1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726482960901308832,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b392e106920b290edb060cbb3942770e,},Annotat
ions:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:281ad6489fa866a8728b85bb6026eaba03f414f1541e54e8bb20b881d74c5986,PodSandboxId:30d387489b797f7f610fee5a80ba390e392d599bf0fcae9adff2ab82e5282aff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726482960907328332,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e9406d783b81f1f83bb9b03dd50757a,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9f67c6f5bac2b4d892ff7cea8997f9b3637e462d67156858bf6b4bc4872dbb8,PodSandboxId:7ff3b4db4c3a1ec5a815fbd8fb33ce885ada6ab25ca592e100af16decad6e364,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726482960864170254,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ba1ce2146f556353256cee766fb22aa,},Annota
tions:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40e128caccd10da3c2b236dc916c1ea036d584c77141e639345656d122c4edf5,PodSandboxId:4f30e9290df9fed7dc156600cc22a14550577fc179ba762933d7ddb98b54f18d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726482960795838086,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a02ea4105f59739cf4b87fcb1443f22,},Annotations:map[str
ing]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9566037419faa5d15b1a25d437bab469eb973d38022593dd1fcea875b372030,PodSandboxId:224c8313d2a4b81f7902acab9a95c4674bb885601e7d6c432c194d4704f68448,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726482907910102968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e9406d783b81f1f83bb9b03dd50757a,},Annotations:map[string]string{io.
kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b4648b5566f070359fd2b71b0a6be370ac0bf268f558065e56f6c081c9da147,PodSandboxId:786e02c9f268ffa4631ec0765f6e369ee8084dbb9f6f0ceb206328ad1483a95b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726482907861534544,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ba1ce2146f556353256cee766fb22aa,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8a2455326fe0510166f5919a7ba808e002c9e7e6f6bac0073ebcdf2617d2c12,PodSandboxId:f630bd7b31a9986a64781d12465cbfb94677fffd47ee676d73cb45631f7bb0b1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726482907858407120,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cf351cdb4e05fb19a16881fc8f9a8bc,},Annotations:map[string]string{io.k
ubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ef8ee89662fcae36d3705fd7eef6e5e8e8ed2765a0c899ea2692bc5e55770bb,PodSandboxId:795a8e1b509b3194f41662edcf7b963b352e10b9d3a0b2b7bc09bb8c879e6c3c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726482895162476253,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8d5zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7709f753-5ea7-43c8-9573-107c8507e92b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59
,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c7df787d684e9cc5ca8f6cf633ac9055d7ef72c0a76d54b25fd4a3d62f7b02,PodSandboxId:f234b24619f341b5993ad8541868405403fc798ed93fed40f3108616a4659944,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726482895179848533,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41228d6-b7ff-4315-b9c5-05b5cc4d0acd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8addedc5b3b7251bda1891ec03e7b558acf7e48a679719cc2d0eb9af89051324,PodSandboxId:5de6db3341a3523f45534d3d25f38a36a50d8a1f094c79ff3c7afffbde2686bb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726482895774247566,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ntnpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0dcfd13-b1bc-45ef-9800-c98d2063bd43,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"
dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dda8bc32e425e06c63a2ccb84bdd071dc515520e554e6e3a5a8b376e9a65c15a,PodSandboxId:b212b903ed97c57b27e7215769435327df006237ec57bc4233e178bfb5d746f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726482895106900933,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553844,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: b392e106920b290edb060cbb3942770e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=20425b0d-62aa-47b5-9681-706e695d1f35 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	11b04a7db7923       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   30 seconds ago       Running             coredns                   2                   42c99506917bd       coredns-7c65d6cfc9-ntnpc
	f6cef4575c2c3       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   30 seconds ago       Running             kube-proxy                2                   b5b2cd4351861       kube-proxy-8d5zp
	410bd23d1eb3a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   31 seconds ago       Running             storage-provisioner       2                   66c3c1fc355f3       storage-provisioner
	281ad6489fa86       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   34 seconds ago       Running             kube-scheduler            3                   30d387489b797       kube-scheduler-functional-553844
	161c7c3a6dbc9       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   34 seconds ago       Running             etcd                      2                   1cf845fd98fb9       etcd-functional-553844
	c9f67c6f5bac2       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   34 seconds ago       Running             kube-controller-manager   3                   7ff3b4db4c3a1       kube-controller-manager-functional-553844
	40e128caccd10       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   34 seconds ago       Running             kube-apiserver            0                   4f30e9290df9f       kube-apiserver-functional-553844
	c9566037419fa       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   About a minute ago   Exited              kube-scheduler            2                   224c8313d2a4b       kube-scheduler-functional-553844
	7b4648b5566f0       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   About a minute ago   Exited              kube-controller-manager   2                   786e02c9f268f       kube-controller-manager-functional-553844
	a8a2455326fe0       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   About a minute ago   Exited              kube-apiserver            2                   f630bd7b31a99       kube-apiserver-functional-553844
	8addedc5b3b72       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   About a minute ago   Exited              coredns                   1                   5de6db3341a35       coredns-7c65d6cfc9-ntnpc
	11c7df787d684       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   About a minute ago   Exited              storage-provisioner       1                   f234b24619f34       storage-provisioner
	5ef8ee89662fc       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   About a minute ago   Exited              kube-proxy                1                   795a8e1b509b3       kube-proxy-8d5zp
	dda8bc32e425e       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   About a minute ago   Exited              etcd                      1                   b212b903ed97c       etcd-functional-553844
	
	
	==> coredns [11b04a7db79238496dd7e10a87ed4228200fc6950497535ac76465928dced22a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:34318 - 64894 "HINFO IN 1843759644485451532.7278217676100105798. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.028340041s
	
	
	==> coredns [8addedc5b3b7251bda1891ec03e7b558acf7e48a679719cc2d0eb9af89051324] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:49303 - 36766 "HINFO IN 7792431763943854020.5109512536554140100. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.028767023s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-553844
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-553844
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=functional-553844
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_34_24_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:34:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-553844
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:36:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:36:03 +0000   Mon, 16 Sep 2024 10:34:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:36:03 +0000   Mon, 16 Sep 2024 10:34:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:36:03 +0000   Mon, 16 Sep 2024 10:34:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:36:03 +0000   Mon, 16 Sep 2024 10:34:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.230
	  Hostname:    functional-553844
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 e02954b5bf404845959584edf15b4c70
	  System UUID:                e02954b5-bf40-4845-9595-84edf15b4c70
	  Boot ID:                    f32c4525-4b20-48f0-8997-63a4d85e0a22
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-ntnpc                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m7s
	  kube-system                 etcd-functional-553844                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m12s
	  kube-system                 kube-apiserver-functional-553844             250m (12%)    0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-functional-553844    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 kube-proxy-8d5zp                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 kube-scheduler-functional-553844             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 2m6s               kube-proxy       
	  Normal  Starting                 30s                kube-proxy       
	  Normal  Starting                 97s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m12s              kubelet          Node functional-553844 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  2m12s              kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    2m12s              kubelet          Node functional-553844 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m12s              kubelet          Node functional-553844 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m12s              kubelet          Starting kubelet.
	  Normal  NodeReady                2m11s              kubelet          Node functional-553844 status is now: NodeReady
	  Normal  RegisteredNode           2m8s               node-controller  Node functional-553844 event: Registered Node functional-553844 in Controller
	  Normal  NodeHasSufficientMemory  88s (x8 over 88s)  kubelet          Node functional-553844 status is now: NodeHasSufficientMemory
	  Normal  Starting                 88s                kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    88s (x8 over 88s)  kubelet          Node functional-553844 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     88s (x7 over 88s)  kubelet          Node functional-553844 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  88s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           81s                node-controller  Node functional-553844 event: Registered Node functional-553844 in Controller
	  Normal  Starting                 35s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  35s (x8 over 35s)  kubelet          Node functional-553844 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    35s (x8 over 35s)  kubelet          Node functional-553844 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     35s (x7 over 35s)  kubelet          Node functional-553844 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  35s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           28s                node-controller  Node functional-553844 event: Registered Node functional-553844 in Controller
	
	
	==> dmesg <==
	[  +0.603762] kauditd_printk_skb: 46 callbacks suppressed
	[ +16.520372] systemd-fstab-generator[2167]: Ignoring "noauto" option for root device
	[  +0.078621] kauditd_printk_skb: 60 callbacks suppressed
	[  +0.049083] systemd-fstab-generator[2179]: Ignoring "noauto" option for root device
	[  +0.190042] systemd-fstab-generator[2193]: Ignoring "noauto" option for root device
	[  +0.140022] systemd-fstab-generator[2205]: Ignoring "noauto" option for root device
	[  +0.285394] systemd-fstab-generator[2233]: Ignoring "noauto" option for root device
	[  +8.132216] systemd-fstab-generator[2349]: Ignoring "noauto" option for root device
	[  +0.075744] kauditd_printk_skb: 100 callbacks suppressed
	[Sep16 10:35] systemd-fstab-generator[3196]: Ignoring "noauto" option for root device
	[  +0.082290] kauditd_printk_skb: 96 callbacks suppressed
	[  +9.215887] kauditd_printk_skb: 32 callbacks suppressed
	[ +11.912179] systemd-fstab-generator[3473]: Ignoring "noauto" option for root device
	[ +21.316095] systemd-fstab-generator[4674]: Ignoring "noauto" option for root device
	[  +0.074178] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.066789] systemd-fstab-generator[4686]: Ignoring "noauto" option for root device
	[  +0.159163] systemd-fstab-generator[4700]: Ignoring "noauto" option for root device
	[  +0.128627] systemd-fstab-generator[4712]: Ignoring "noauto" option for root device
	[  +0.261837] systemd-fstab-generator[4740]: Ignoring "noauto" option for root device
	[  +7.709349] systemd-fstab-generator[4854]: Ignoring "noauto" option for root device
	[  +0.074913] kauditd_printk_skb: 100 callbacks suppressed
	[  +1.702685] systemd-fstab-generator[4977]: Ignoring "noauto" option for root device
	[Sep16 10:36] kauditd_printk_skb: 85 callbacks suppressed
	[  +5.334379] kauditd_printk_skb: 39 callbacks suppressed
	[  +9.139453] systemd-fstab-generator[5796]: Ignoring "noauto" option for root device
	
	
	==> etcd [161c7c3a6dbc9f43b2edc204c828f8b7b5673da629beae63db044d512333e2ff] <==
	{"level":"info","ts":"2024-09-16T10:36:01.399752Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:36:01.404521Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T10:36:01.412218Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"f4acae94ef986412","initial-advertise-peer-urls":["https://192.168.39.230:2380"],"listen-peer-urls":["https://192.168.39.230:2380"],"advertise-client-urls":["https://192.168.39.230:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.230:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T10:36:01.412273Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T10:36:01.412554Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.230:2380"}
	{"level":"info","ts":"2024-09-16T10:36:01.412584Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.230:2380"}
	{"level":"info","ts":"2024-09-16T10:36:01.415172Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T10:36:01.415237Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T10:36:01.415247Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T10:36:02.339007Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-16T10:36:02.339191Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-16T10:36:02.339252Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 received MsgPreVoteResp from f4acae94ef986412 at term 3"}
	{"level":"info","ts":"2024-09-16T10:36:02.339289Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 became candidate at term 4"}
	{"level":"info","ts":"2024-09-16T10:36:02.339313Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 received MsgVoteResp from f4acae94ef986412 at term 4"}
	{"level":"info","ts":"2024-09-16T10:36:02.339348Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 became leader at term 4"}
	{"level":"info","ts":"2024-09-16T10:36:02.339373Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f4acae94ef986412 elected leader f4acae94ef986412 at term 4"}
	{"level":"info","ts":"2024-09-16T10:36:02.345885Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"f4acae94ef986412","local-member-attributes":"{Name:functional-553844 ClientURLs:[https://192.168.39.230:2379]}","request-path":"/0/members/f4acae94ef986412/attributes","cluster-id":"b0aea99135fe63d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:36:02.345893Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:36:02.346138Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:36:02.346171Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:36:02.345925Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:36:02.347252Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:36:02.347252Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:36:02.348114Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T10:36:02.348659Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.230:2379"}
	
	
	==> etcd [dda8bc32e425e06c63a2ccb84bdd071dc515520e554e6e3a5a8b376e9a65c15a] <==
	{"level":"info","ts":"2024-09-16T10:34:56.955132Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-16T10:34:56.955163Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 received MsgPreVoteResp from f4acae94ef986412 at term 2"}
	{"level":"info","ts":"2024-09-16T10:34:56.955177Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 became candidate at term 3"}
	{"level":"info","ts":"2024-09-16T10:34:56.955183Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 received MsgVoteResp from f4acae94ef986412 at term 3"}
	{"level":"info","ts":"2024-09-16T10:34:56.955191Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 became leader at term 3"}
	{"level":"info","ts":"2024-09-16T10:34:56.955203Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f4acae94ef986412 elected leader f4acae94ef986412 at term 3"}
	{"level":"info","ts":"2024-09-16T10:34:56.959113Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"f4acae94ef986412","local-member-attributes":"{Name:functional-553844 ClientURLs:[https://192.168.39.230:2379]}","request-path":"/0/members/f4acae94ef986412/attributes","cluster-id":"b0aea99135fe63d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:34:56.959223Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:34:56.959352Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:34:56.959702Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:34:56.959718Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:34:56.960394Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:34:56.960508Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:34:56.961360Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T10:34:56.961615Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.230:2379"}
	{"level":"info","ts":"2024-09-16T10:35:43.615417Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-16T10:35:43.615457Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-553844","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.230:2380"],"advertise-client-urls":["https://192.168.39.230:2379"]}
	{"level":"warn","ts":"2024-09-16T10:35:43.615668Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:35:43.615755Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:35:43.715379Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.230:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:35:43.715441Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.230:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-16T10:35:43.716847Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"f4acae94ef986412","current-leader-member-id":"f4acae94ef986412"}
	{"level":"info","ts":"2024-09-16T10:35:43.720365Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.230:2380"}
	{"level":"info","ts":"2024-09-16T10:35:43.720475Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.230:2380"}
	{"level":"info","ts":"2024-09-16T10:35:43.720485Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-553844","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.230:2380"],"advertise-client-urls":["https://192.168.39.230:2379"]}
	
	
	==> kernel <==
	 10:36:36 up 2 min,  0 users,  load average: 0.97, 0.33, 0.12
	Linux functional-553844 5.10.207 #1 SMP Sun Sep 15 20:39:46 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [40e128caccd10da3c2b236dc916c1ea036d584c77141e639345656d122c4edf5] <==
	I0916 10:36:03.702192       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 10:36:03.702197       1 cache.go:39] Caches are synced for autoregister controller
	I0916 10:36:03.704489       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0916 10:36:03.704920       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0916 10:36:03.704998       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0916 10:36:03.705227       1 shared_informer.go:320] Caches are synced for configmaps
	I0916 10:36:03.705335       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0916 10:36:03.705520       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0916 10:36:03.709308       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0916 10:36:03.709342       1 policy_source.go:224] refreshing policies
	I0916 10:36:03.714744       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 10:36:03.724995       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0916 10:36:03.733976       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0916 10:36:04.601449       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0916 10:36:05.413610       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 10:36:05.430933       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 10:36:05.470801       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 10:36:05.494981       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 10:36:05.501594       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 10:36:07.306638       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 10:36:07.353251       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 10:36:35.784080       1 controller.go:615] quota admission added evaluator for: namespaces
	I0916 10:36:35.870442       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0916 10:36:36.214207       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.245.38"}
	I0916 10:36:36.266580       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.134.249"}
	
	
	==> kube-apiserver [a8a2455326fe0510166f5919a7ba808e002c9e7e6f6bac0073ebcdf2617d2c12] <==
	I0916 10:35:10.821388       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0916 10:35:10.821418       1 policy_source.go:224] refreshing policies
	I0916 10:35:10.848027       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0916 10:35:10.848431       1 aggregator.go:171] initial CRD sync complete...
	I0916 10:35:10.848456       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 10:35:10.848514       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 10:35:10.848521       1 cache.go:39] Caches are synced for autoregister controller
	I0916 10:35:10.891021       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0916 10:35:10.891238       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0916 10:35:10.893720       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0916 10:35:10.894833       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0916 10:35:10.894861       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0916 10:35:10.895008       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0916 10:35:10.912774       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0916 10:35:10.913152       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 10:35:10.920344       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 10:35:11.693112       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0916 10:35:11.908543       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.230]
	I0916 10:35:11.914737       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 10:35:12.098488       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 10:35:12.108702       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 10:35:12.144954       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 10:35:12.176210       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 10:35:12.183000       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 10:35:43.644862       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	
	
	==> kube-controller-manager [7b4648b5566f070359fd2b71b0a6be370ac0bf268f558065e56f6c081c9da147] <==
	I0916 10:35:14.120843       1 shared_informer.go:320] Caches are synced for HPA
	I0916 10:35:14.121152       1 shared_informer.go:320] Caches are synced for TTL
	I0916 10:35:14.122526       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0916 10:35:14.122616       1 shared_informer.go:320] Caches are synced for ephemeral
	I0916 10:35:14.122690       1 shared_informer.go:320] Caches are synced for taint
	I0916 10:35:14.122803       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0916 10:35:14.123280       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-553844"
	I0916 10:35:14.124941       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0916 10:35:14.144150       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0916 10:35:14.146147       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0916 10:35:14.148698       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0916 10:35:14.153801       1 shared_informer.go:320] Caches are synced for PV protection
	I0916 10:35:14.209749       1 shared_informer.go:320] Caches are synced for persistent volume
	I0916 10:35:14.242927       1 shared_informer.go:320] Caches are synced for attach detach
	I0916 10:35:14.298281       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:35:14.321144       1 shared_informer.go:320] Caches are synced for stateful set
	I0916 10:35:14.321212       1 shared_informer.go:320] Caches are synced for daemon sets
	I0916 10:35:14.326094       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:35:14.534087       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="385.245988ms"
	I0916 10:35:14.534305       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="82.383µs"
	I0916 10:35:14.753631       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:35:14.816601       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:35:14.816647       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0916 10:35:17.621436       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="88.997µs"
	I0916 10:35:41.634518       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-553844"
	
	
	==> kube-controller-manager [c9f67c6f5bac2b4d892ff7cea8997f9b3637e462d67156858bf6b4bc4872dbb8] <==
	I0916 10:36:07.687001       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:36:07.687093       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0916 10:36:09.540766       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="16.500721ms"
	I0916 10:36:09.541443       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="53.335µs"
	I0916 10:36:35.951981       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="69.871865ms"
	E0916 10:36:35.952211       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:36:35.978927       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="25.578864ms"
	E0916 10:36:35.978957       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:36:35.994958       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="63.538461ms"
	E0916 10:36:35.994986       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:36:36.001891       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="21.771682ms"
	E0916 10:36:36.001938       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:36:36.003346       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="7.077188ms"
	E0916 10:36:36.003375       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:36:36.028226       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="25.154625ms"
	E0916 10:36:36.028255       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:36:36.028309       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="21.00784ms"
	E0916 10:36:36.028318       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:36:36.077252       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="47.436681ms"
	I0916 10:36:36.085703       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="55.571199ms"
	I0916 10:36:36.109418       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="23.501919ms"
	I0916 10:36:36.109989       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="54.429µs"
	I0916 10:36:36.132530       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="133.25µs"
	I0916 10:36:36.174023       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="96.044739ms"
	I0916 10:36:36.178490       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="347.241µs"
	
	
	==> kube-proxy [5ef8ee89662fcae36d3705fd7eef6e5e8e8ed2765a0c899ea2692bc5e55770bb] <==
	W0916 10:34:58.431668       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:34:58.431713       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:34:58.431778       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:34:58.431838       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:34:59.284989       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:34:59.285188       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:34:59.332364       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-553844&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:34:59.332464       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-553844&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:34:59.470296       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:34:59.470425       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:35:01.798494       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-553844&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:35:01.798626       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-553844&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:35:01.949792       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:35:01.949869       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:35:02.221487       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:35:02.221565       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:35:06.652928       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:35:06.652990       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:35:07.272641       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-553844&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:35:07.272703       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-553844&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:35:07.363931       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:35:07.363993       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	I0916 10:35:14.930499       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:35:15.331242       1 shared_informer.go:320] Caches are synced for node config
	I0916 10:35:16.430835       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [f6cef4575c2c36cab82d9941cc254a3c3977cd977294bd11ce7e392d1ed1ba87] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0916 10:36:05.087142       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0916 10:36:05.094687       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.230"]
	E0916 10:36:05.094768       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:36:05.128908       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0916 10:36:05.128955       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0916 10:36:05.128978       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:36:05.131583       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:36:05.131810       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:36:05.131834       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:36:05.133708       1 config.go:199] "Starting service config controller"
	I0916 10:36:05.133764       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:36:05.133809       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:36:05.133827       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:36:05.134323       1 config.go:328] "Starting node config controller"
	I0916 10:36:05.134353       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:36:05.234169       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:36:05.234184       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:36:05.234413       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [281ad6489fa866a8728b85bb6026eaba03f414f1541e54e8bb20b881d74c5986] <==
	I0916 10:36:01.918697       1 serving.go:386] Generated self-signed cert in-memory
	W0916 10:36:03.635711       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0916 10:36:03.637927       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0916 10:36:03.638183       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 10:36:03.638223       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 10:36:03.699405       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0916 10:36:03.699443       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:36:03.708723       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0916 10:36:03.708883       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0916 10:36:03.708916       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 10:36:03.725362       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0916 10:36:03.809763       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [c9566037419faa5d15b1a25d437bab469eb973d38022593dd1fcea875b372030] <==
	I0916 10:35:09.773229       1 serving.go:386] Generated self-signed cert in-memory
	W0916 10:35:10.768440       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0916 10:35:10.768857       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0916 10:35:10.768917       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 10:35:10.768943       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 10:35:10.817479       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0916 10:35:10.817581       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:35:10.824338       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0916 10:35:10.824417       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 10:35:10.825100       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0916 10:35:10.825460       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0916 10:35:10.925324       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 10:35:43.621150       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0916 10:35:43.621340       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0916 10:35:43.621677       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0916 10:35:43.622018       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 16 10:36:04 functional-553844 kubelet[4984]: I0916 10:36:04.085150    4984 apiserver.go:52] "Watching apiserver"
	Sep 16 10:36:04 functional-553844 kubelet[4984]: I0916 10:36:04.091164    4984 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-apiserver-functional-553844" podUID="7f3b5ce9-dbc7-45d3-8a46-1d51af0f5cac"
	Sep 16 10:36:04 functional-553844 kubelet[4984]: I0916 10:36:04.105623    4984 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 16 10:36:04 functional-553844 kubelet[4984]: I0916 10:36:04.124250    4984 kubelet.go:1900] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-functional-553844"
	Sep 16 10:36:04 functional-553844 kubelet[4984]: I0916 10:36:04.151243    4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7709f753-5ea7-43c8-9573-107c8507e92b-lib-modules\") pod \"kube-proxy-8d5zp\" (UID: \"7709f753-5ea7-43c8-9573-107c8507e92b\") " pod="kube-system/kube-proxy-8d5zp"
	Sep 16 10:36:04 functional-553844 kubelet[4984]: I0916 10:36:04.151300    4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f41228d6-b7ff-4315-b9c5-05b5cc4d0acd-tmp\") pod \"storage-provisioner\" (UID: \"f41228d6-b7ff-4315-b9c5-05b5cc4d0acd\") " pod="kube-system/storage-provisioner"
	Sep 16 10:36:04 functional-553844 kubelet[4984]: I0916 10:36:04.151318    4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7709f753-5ea7-43c8-9573-107c8507e92b-xtables-lock\") pod \"kube-proxy-8d5zp\" (UID: \"7709f753-5ea7-43c8-9573-107c8507e92b\") " pod="kube-system/kube-proxy-8d5zp"
	Sep 16 10:36:04 functional-553844 kubelet[4984]: I0916 10:36:04.189195    4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0cf351cdb4e05fb19a16881fc8f9a8bc" path="/var/lib/kubelet/pods/0cf351cdb4e05fb19a16881fc8f9a8bc/volumes"
	Sep 16 10:36:04 functional-553844 kubelet[4984]: I0916 10:36:04.191552    4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-functional-553844" podStartSLOduration=0.19153653 podStartE2EDuration="191.53653ms" podCreationTimestamp="2024-09-16 10:36:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 10:36:04.191347015 +0000 UTC m=+4.208440213" watchObservedRunningTime="2024-09-16 10:36:04.19153653 +0000 UTC m=+4.208629709"
	Sep 16 10:36:09 functional-553844 kubelet[4984]: I0916 10:36:09.508237    4984 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 16 10:36:10 functional-553844 kubelet[4984]: E0916 10:36:10.177303    4984 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482970176966980,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157170,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:36:10 functional-553844 kubelet[4984]: E0916 10:36:10.177327    4984 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482970176966980,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157170,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:36:20 functional-553844 kubelet[4984]: E0916 10:36:20.178991    4984 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482980178689452,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157170,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:36:20 functional-553844 kubelet[4984]: E0916 10:36:20.179091    4984 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482980178689452,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157170,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:36:30 functional-553844 kubelet[4984]: E0916 10:36:30.181981    4984 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482990181444413,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157170,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:36:30 functional-553844 kubelet[4984]: E0916 10:36:30.182008    4984 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482990181444413,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157170,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:36:36 functional-553844 kubelet[4984]: E0916 10:36:36.073326    4984 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0cf351cdb4e05fb19a16881fc8f9a8bc" containerName="kube-apiserver"
	Sep 16 10:36:36 functional-553844 kubelet[4984]: E0916 10:36:36.073353    4984 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0cf351cdb4e05fb19a16881fc8f9a8bc" containerName="kube-apiserver"
	Sep 16 10:36:36 functional-553844 kubelet[4984]: I0916 10:36:36.073377    4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cf351cdb4e05fb19a16881fc8f9a8bc" containerName="kube-apiserver"
	Sep 16 10:36:36 functional-553844 kubelet[4984]: I0916 10:36:36.073385    4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cf351cdb4e05fb19a16881fc8f9a8bc" containerName="kube-apiserver"
	Sep 16 10:36:36 functional-553844 kubelet[4984]: I0916 10:36:36.177299    4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/7721211d-0edc-4c4d-bb09-a7f6dcba381b-tmp-volume\") pod \"dashboard-metrics-scraper-c5db448b4-l9q92\" (UID: \"7721211d-0edc-4c4d-bb09-a7f6dcba381b\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-l9q92"
	Sep 16 10:36:36 functional-553844 kubelet[4984]: I0916 10:36:36.177345    4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sswqm\" (UniqueName: \"kubernetes.io/projected/7721211d-0edc-4c4d-bb09-a7f6dcba381b-kube-api-access-sswqm\") pod \"dashboard-metrics-scraper-c5db448b4-l9q92\" (UID: \"7721211d-0edc-4c4d-bb09-a7f6dcba381b\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-l9q92"
	Sep 16 10:36:36 functional-553844 kubelet[4984]: I0916 10:36:36.177366    4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/9734fcc0-f3e2-4044-b5f0-5cbe19fdf261-tmp-volume\") pod \"kubernetes-dashboard-695b96c756-ss2vr\" (UID: \"9734fcc0-f3e2-4044-b5f0-5cbe19fdf261\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-ss2vr"
	Sep 16 10:36:36 functional-553844 kubelet[4984]: I0916 10:36:36.177386    4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqnm8\" (UniqueName: \"kubernetes.io/projected/9734fcc0-f3e2-4044-b5f0-5cbe19fdf261-kube-api-access-xqnm8\") pod \"kubernetes-dashboard-695b96c756-ss2vr\" (UID: \"9734fcc0-f3e2-4044-b5f0-5cbe19fdf261\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-ss2vr"
	Sep 16 10:36:36 functional-553844 kubelet[4984]: I0916 10:36:36.299753    4984 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	
	
	==> storage-provisioner [11c7df787d684e9cc5ca8f6cf633ac9055d7ef72c0a76d54b25fd4a3d62f7b02] <==
	I0916 10:34:56.077531       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 10:34:58.308783       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 10:34:58.325776       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	E0916 10:34:59.385726       1 leaderelection.go:361] Failed to update lock: Put "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0916 10:35:02.837859       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0916 10:35:07.096688       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	I0916 10:35:10.935925       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 10:35:10.936824       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-553844_6476f869-e006-4732-b59f-a625eeed2789!
	I0916 10:35:10.936273       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e694f165-5b09-46c9-81ea-a2730989eaff", APIVersion:"v1", ResourceVersion:"401", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-553844_6476f869-e006-4732-b59f-a625eeed2789 became leader
	I0916 10:35:11.037327       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-553844_6476f869-e006-4732-b59f-a625eeed2789!
	
	
	==> storage-provisioner [410bd23d1eb3a430c5d377563c819cc6316d348e66c37cda9c98f030584522d1] <==
	I0916 10:36:04.804572       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 10:36:04.881510       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 10:36:04.902536       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 10:36:22.325954       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 10:36:22.326349       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-553844_3b4a5fb5-0feb-4787-a4fd-6f23adb25700!
	I0916 10:36:22.327877       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e694f165-5b09-46c9-81ea-a2730989eaff", APIVersion:"v1", ResourceVersion:"583", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-553844_3b4a5fb5-0feb-4787-a4fd-6f23adb25700 became leader
	I0916 10:36:22.428646       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-553844_3b4a5fb5-0feb-4787-a4fd-6f23adb25700!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-553844 -n functional-553844
helpers_test.go:261: (dbg) Run:  kubectl --context functional-553844 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context functional-553844 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (484.586µs)
helpers_test.go:263: kubectl --context functional-553844 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestFunctional/parallel/MySQL (3.02s)
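
Note: every kubectl invocation in this run fails with "fork/exec /usr/local/bin/kubectl: exec format error" before it ever reaches the cluster, which usually means the kubectl binary installed on the test host was built for a different CPU architecture, not that MySQL or the cluster itself is unhealthy. A minimal diagnostic sketch, assuming the standard file and uname utilities are available on the host (these commands are not part of the test suite):

	file /usr/local/bin/kubectl   # reports the binary's architecture, e.g. ARM aarch64 vs x86-64
	uname -m                      # reports the host architecture, e.g. x86_64

If the two disagree, replacing kubectl with a build for the host architecture should clear the error.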

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (2.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-553844 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:219: (dbg) Non-zero exit: kubectl --context functional-553844 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": fork/exec /usr/local/bin/kubectl: exec format error (604.827µs)
functional_test.go:221: failed to 'kubectl get nodes' with args "kubectl --context functional-553844 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": fork/exec /usr/local/bin/kubectl: exec format error
functional_test.go:227: expected to have label "minikube.k8s.io/commit" in node labels but got : 
functional_test.go:227: expected to have label "minikube.k8s.io/version" in node labels but got : 
functional_test.go:227: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
functional_test.go:227: expected to have label "minikube.k8s.io/name" in node labels but got : 
functional_test.go:227: expected to have label "minikube.k8s.io/primary" in node labels but got : 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-553844 -n functional-553844
helpers_test.go:244: <<< TestFunctional/parallel/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/NodeLabels]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-553844 logs -n 25: (1.842214208s)
helpers_test.go:252: TestFunctional/parallel/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| service | functional-553844 service                                                | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC |                     |
	|         | --namespace=default --https                                              |                   |         |         |                     |                     |
	|         | --url hello-node                                                         |                   |         |         |                     |                     |
	| ssh     | functional-553844 ssh -n                                                 | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|         | functional-553844 sudo cat                                               |                   |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                 |                   |         |         |                     |                     |
	| service | functional-553844                                                        | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC |                     |
	|         | service hello-node --url                                                 |                   |         |         |                     |                     |
	|         | --format={{.IP}}                                                         |                   |         |         |                     |                     |
	| ssh     | functional-553844 ssh findmnt                                            | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC |                     |
	|         | -T /mount-9p | grep 9p                                                   |                   |         |         |                     |                     |
	| cp      | functional-553844 cp                                                     | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|         | testdata/cp-test.txt                                                     |                   |         |         |                     |                     |
	|         | /tmp/does/not/exist/cp-test.txt                                          |                   |         |         |                     |                     |
	| mount   | -p functional-553844                                                     | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC |                     |
	|         | /tmp/TestFunctionalparallelMountCmdany-port525922369/001:/mount-9p       |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| service | functional-553844 service                                                | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC |                     |
	|         | hello-node --url                                                         |                   |         |         |                     |                     |
	| ssh     | functional-553844 ssh -n                                                 | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|         | functional-553844 sudo cat                                               |                   |         |         |                     |                     |
	|         | /tmp/does/not/exist/cp-test.txt                                          |                   |         |         |                     |                     |
	| ssh     | functional-553844 ssh echo                                               | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|         | hello                                                                    |                   |         |         |                     |                     |
	| ssh     | functional-553844 ssh cat                                                | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|         | /etc/hostname                                                            |                   |         |         |                     |                     |
	| ssh     | functional-553844 ssh findmnt                                            | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|         | -T /mount-9p | grep 9p                                                   |                   |         |         |                     |                     |
	| ssh     | functional-553844 ssh -- ls                                              | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|         | -la /mount-9p                                                            |                   |         |         |                     |                     |
	| addons  | functional-553844 addons list                                            | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	| addons  | functional-553844 addons list                                            | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|         | -o json                                                                  |                   |         |         |                     |                     |
	| ssh     | functional-553844 ssh cat                                                | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|         | /mount-9p/test-1726482987895365534                                       |                   |         |         |                     |                     |
	| ssh     | functional-553844 ssh mount |                                            | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC |                     |
	|         | grep 9p; ls -la /mount-9p; cat                                           |                   |         |         |                     |                     |
	|         | /mount-9p/pod-dates                                                      |                   |         |         |                     |                     |
	| ssh     | functional-553844 ssh sudo                                               | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|         | umount -f /mount-9p                                                      |                   |         |         |                     |                     |
	| mount   | -p functional-553844                                                     | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC |                     |
	|         | /tmp/TestFunctionalparallelMountCmdspecific-port2665835366/001:/mount-9p |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1 --port 46464                                      |                   |         |         |                     |                     |
	| ssh     | functional-553844 ssh findmnt                                            | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC |                     |
	|         | -T /mount-9p | grep 9p                                                   |                   |         |         |                     |                     |
	| ssh     | functional-553844 ssh findmnt                                            | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|         | -T /mount-9p | grep 9p                                                   |                   |         |         |                     |                     |
	| license |                                                                          | minikube          | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	| ssh     | functional-553844 ssh -- ls                                              | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|         | -la /mount-9p                                                            |                   |         |         |                     |                     |
	| ssh     | functional-553844 ssh sudo                                               | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC |                     |
	|         | systemctl is-active docker                                               |                   |         |         |                     |                     |
	| ssh     | functional-553844 ssh sudo                                               | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC |                     |
	|         | systemctl is-active containerd                                           |                   |         |         |                     |                     |
	| ssh     | functional-553844 ssh sudo                                               | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC |                     |
	|         | umount -f /mount-9p                                                      |                   |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:35:42
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:35:42.602736   18525 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:35:42.602961   18525 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:35:42.602964   18525 out.go:358] Setting ErrFile to fd 2...
	I0916 10:35:42.602967   18525 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:35:42.603134   18525 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3851/.minikube/bin
	I0916 10:35:42.603625   18525 out.go:352] Setting JSON to false
	I0916 10:35:42.604487   18525 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1093,"bootTime":1726481850,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:35:42.604573   18525 start.go:139] virtualization: kvm guest
	I0916 10:35:42.606812   18525 out.go:177] * [functional-553844] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:35:42.608453   18525 notify.go:220] Checking for updates...
	I0916 10:35:42.608460   18525 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:35:42.609720   18525 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:35:42.610980   18525 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3851/kubeconfig
	I0916 10:35:42.612026   18525 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 10:35:42.613154   18525 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:35:42.614469   18525 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:35:42.616082   18525 config.go:182] Loaded profile config "functional-553844": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:35:42.616181   18525 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:35:42.616564   18525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:35:42.616592   18525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:35:42.631459   18525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37391
	I0916 10:35:42.631931   18525 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:35:42.632471   18525 main.go:141] libmachine: Using API Version  1
	I0916 10:35:42.632493   18525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:35:42.632799   18525 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:35:42.632949   18525 main.go:141] libmachine: (functional-553844) Calling .DriverName
	I0916 10:35:42.666224   18525 out.go:177] * Using the kvm2 driver based on existing profile
	I0916 10:35:42.667731   18525 start.go:297] selected driver: kvm2
	I0916 10:35:42.667739   18525 start.go:901] validating driver "kvm2" against &{Name:functional-553844 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:functional-553844 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Moun
tGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:35:42.667845   18525 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:35:42.668158   18525 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:35:42.668237   18525 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19651-3851/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0916 10:35:42.683577   18525 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0916 10:35:42.684216   18525 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:35:42.684245   18525 cni.go:84] Creating CNI manager for ""
	I0916 10:35:42.684291   18525 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0916 10:35:42.684354   18525 start.go:340] cluster config:
	{Name:functional-553844 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-553844 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:35:42.684461   18525 iso.go:125] acquiring lock: {Name:mk8165b793e44378487d96b0de120a258f46e187 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:35:42.686264   18525 out.go:177] * Starting "functional-553844" primary control-plane node in "functional-553844" cluster
	I0916 10:35:42.687758   18525 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:35:42.687806   18525 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0916 10:35:42.687813   18525 cache.go:56] Caching tarball of preloaded images
	I0916 10:35:42.687893   18525 preload.go:172] Found /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 10:35:42.687899   18525 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 10:35:42.687986   18525 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/config.json ...
	I0916 10:35:42.688155   18525 start.go:360] acquireMachinesLock for functional-553844: {Name:mk413037138532b69f77e412c3960796c09316f1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:35:42.688216   18525 start.go:364] duration metric: took 49.309µs to acquireMachinesLock for "functional-553844"
	I0916 10:35:42.688231   18525 start.go:96] Skipping create...Using existing machine configuration
	I0916 10:35:42.688235   18525 fix.go:54] fixHost starting: 
	I0916 10:35:42.688466   18525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:35:42.688492   18525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:35:42.703053   18525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46573
	I0916 10:35:42.703530   18525 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:35:42.704035   18525 main.go:141] libmachine: Using API Version  1
	I0916 10:35:42.704064   18525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:35:42.704371   18525 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:35:42.704542   18525 main.go:141] libmachine: (functional-553844) Calling .DriverName
	I0916 10:35:42.704677   18525 main.go:141] libmachine: (functional-553844) Calling .GetState
	I0916 10:35:42.706051   18525 fix.go:112] recreateIfNeeded on functional-553844: state=Running err=<nil>
	W0916 10:35:42.706062   18525 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 10:35:42.707728   18525 out.go:177] * Updating the running kvm2 "functional-553844" VM ...
	I0916 10:35:42.708861   18525 machine.go:93] provisionDockerMachine start ...
	I0916 10:35:42.708874   18525 main.go:141] libmachine: (functional-553844) Calling .DriverName
	I0916 10:35:42.709063   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHHostname
	I0916 10:35:42.711297   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:42.711619   18525 main.go:141] libmachine: (functional-553844) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3a:6f", ip: ""} in network mk-functional-553844: {Iface:virbr1 ExpiryTime:2024-09-16 11:33:58 +0000 UTC Type:0 Mac:52:54:00:9d:3a:6f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-553844 Clientid:01:52:54:00:9d:3a:6f}
	I0916 10:35:42.711641   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined IP address 192.168.39.230 and MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:42.711812   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHPort
	I0916 10:35:42.711970   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
	I0916 10:35:42.712095   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
	I0916 10:35:42.712241   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHUsername
	I0916 10:35:42.712367   18525 main.go:141] libmachine: Using SSH client type: native
	I0916 10:35:42.712549   18525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0916 10:35:42.712554   18525 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 10:35:42.822279   18525 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-553844
	
	I0916 10:35:42.822297   18525 main.go:141] libmachine: (functional-553844) Calling .GetMachineName
	I0916 10:35:42.822514   18525 buildroot.go:166] provisioning hostname "functional-553844"
	I0916 10:35:42.822541   18525 main.go:141] libmachine: (functional-553844) Calling .GetMachineName
	I0916 10:35:42.822705   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHHostname
	I0916 10:35:42.825390   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:42.825774   18525 main.go:141] libmachine: (functional-553844) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3a:6f", ip: ""} in network mk-functional-553844: {Iface:virbr1 ExpiryTime:2024-09-16 11:33:58 +0000 UTC Type:0 Mac:52:54:00:9d:3a:6f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-553844 Clientid:01:52:54:00:9d:3a:6f}
	I0916 10:35:42.825794   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined IP address 192.168.39.230 and MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:42.825955   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHPort
	I0916 10:35:42.826114   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
	I0916 10:35:42.826244   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
	I0916 10:35:42.826444   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHUsername
	I0916 10:35:42.826605   18525 main.go:141] libmachine: Using SSH client type: native
	I0916 10:35:42.826756   18525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0916 10:35:42.826762   18525 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-553844 && echo "functional-553844" | sudo tee /etc/hostname
	I0916 10:35:42.947055   18525 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-553844
	
	I0916 10:35:42.947086   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHHostname
	I0916 10:35:42.949554   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:42.949872   18525 main.go:141] libmachine: (functional-553844) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3a:6f", ip: ""} in network mk-functional-553844: {Iface:virbr1 ExpiryTime:2024-09-16 11:33:58 +0000 UTC Type:0 Mac:52:54:00:9d:3a:6f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-553844 Clientid:01:52:54:00:9d:3a:6f}
	I0916 10:35:42.949895   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined IP address 192.168.39.230 and MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:42.949977   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHPort
	I0916 10:35:42.950263   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
	I0916 10:35:42.950397   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
	I0916 10:35:42.950516   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHUsername
	I0916 10:35:42.950660   18525 main.go:141] libmachine: Using SSH client type: native
	I0916 10:35:42.950825   18525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0916 10:35:42.950834   18525 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-553844' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-553844/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-553844' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:35:43.057989   18525 main.go:141] libmachine: SSH cmd err, output: <nil>: 
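
The two SSH commands above set the guest hostname and pin it in /etc/hosts under 127.0.1.1. A quick, hedged way to check the result from the host (profile name taken from this run):

	minikube ssh -p functional-553844 -- "hostname; grep functional-553844 /etc/hosts"
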
	I0916 10:35:43.058009   18525 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3851/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3851/.minikube}
	I0916 10:35:43.058034   18525 buildroot.go:174] setting up certificates
	I0916 10:35:43.058041   18525 provision.go:84] configureAuth start
	I0916 10:35:43.058048   18525 main.go:141] libmachine: (functional-553844) Calling .GetMachineName
	I0916 10:35:43.058310   18525 main.go:141] libmachine: (functional-553844) Calling .GetIP
	I0916 10:35:43.060530   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:43.060834   18525 main.go:141] libmachine: (functional-553844) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3a:6f", ip: ""} in network mk-functional-553844: {Iface:virbr1 ExpiryTime:2024-09-16 11:33:58 +0000 UTC Type:0 Mac:52:54:00:9d:3a:6f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-553844 Clientid:01:52:54:00:9d:3a:6f}
	I0916 10:35:43.060857   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined IP address 192.168.39.230 and MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:43.060950   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHHostname
	I0916 10:35:43.063120   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:43.063409   18525 main.go:141] libmachine: (functional-553844) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3a:6f", ip: ""} in network mk-functional-553844: {Iface:virbr1 ExpiryTime:2024-09-16 11:33:58 +0000 UTC Type:0 Mac:52:54:00:9d:3a:6f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-553844 Clientid:01:52:54:00:9d:3a:6f}
	I0916 10:35:43.063432   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined IP address 192.168.39.230 and MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:43.063485   18525 provision.go:143] copyHostCerts
	I0916 10:35:43.063549   18525 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem, removing ...
	I0916 10:35:43.063555   18525 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem
	I0916 10:35:43.063615   18525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem (1082 bytes)
	I0916 10:35:43.063703   18525 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem, removing ...
	I0916 10:35:43.063707   18525 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem
	I0916 10:35:43.063728   18525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem (1123 bytes)
	I0916 10:35:43.063790   18525 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem, removing ...
	I0916 10:35:43.063793   18525 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem
	I0916 10:35:43.063811   18525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem (1679 bytes)
	I0916 10:35:43.063906   18525 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem org=jenkins.functional-553844 san=[127.0.0.1 192.168.39.230 functional-553844 localhost minikube]
	I0916 10:35:43.318125   18525 provision.go:177] copyRemoteCerts
	I0916 10:35:43.318179   18525 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:35:43.318199   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHHostname
	I0916 10:35:43.320675   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:43.320954   18525 main.go:141] libmachine: (functional-553844) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3a:6f", ip: ""} in network mk-functional-553844: {Iface:virbr1 ExpiryTime:2024-09-16 11:33:58 +0000 UTC Type:0 Mac:52:54:00:9d:3a:6f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-553844 Clientid:01:52:54:00:9d:3a:6f}
	I0916 10:35:43.320979   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined IP address 192.168.39.230 and MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:43.321086   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHPort
	I0916 10:35:43.321278   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
	I0916 10:35:43.321405   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHUsername
	I0916 10:35:43.321526   18525 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/functional-553844/id_rsa Username:docker}
	I0916 10:35:43.408363   18525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0916 10:35:43.433926   18525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 10:35:43.459098   18525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 10:35:43.483570   18525 provision.go:87] duration metric: took 425.518643ms to configureAuth
	I0916 10:35:43.483586   18525 buildroot.go:189] setting minikube options for container-runtime
	I0916 10:35:43.483776   18525 config.go:182] Loaded profile config "functional-553844": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:35:43.483836   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHHostname
	I0916 10:35:43.486393   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:43.486676   18525 main.go:141] libmachine: (functional-553844) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3a:6f", ip: ""} in network mk-functional-553844: {Iface:virbr1 ExpiryTime:2024-09-16 11:33:58 +0000 UTC Type:0 Mac:52:54:00:9d:3a:6f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-553844 Clientid:01:52:54:00:9d:3a:6f}
	I0916 10:35:43.486698   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined IP address 192.168.39.230 and MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:43.486844   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHPort
	I0916 10:35:43.487010   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
	I0916 10:35:43.487138   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
	I0916 10:35:43.487238   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHUsername
	I0916 10:35:43.487355   18525 main.go:141] libmachine: Using SSH client type: native
	I0916 10:35:43.487542   18525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0916 10:35:43.487551   18525 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 10:35:49.077005   18525 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 10:35:49.077018   18525 machine.go:96] duration metric: took 6.368149184s to provisionDockerMachine
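
Provisioning above ends by writing the insecure-registry flag to /etc/sysconfig/crio.minikube and restarting cri-o. A small sketch to see what actually landed in the VM and that the runtime came back up (profile name from this run):

	# Show the options file written during provisioning.
	minikube ssh -p functional-553844 -- "cat /etc/sysconfig/crio.minikube"
	# Confirm cri-o is active again after the restart.
	minikube ssh -p functional-553844 -- "sudo systemctl is-active crio"
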
	I0916 10:35:49.077029   18525 start.go:293] postStartSetup for "functional-553844" (driver="kvm2")
	I0916 10:35:49.077041   18525 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:35:49.077060   18525 main.go:141] libmachine: (functional-553844) Calling .DriverName
	I0916 10:35:49.077417   18525 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:35:49.077437   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHHostname
	I0916 10:35:49.080182   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:49.080466   18525 main.go:141] libmachine: (functional-553844) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3a:6f", ip: ""} in network mk-functional-553844: {Iface:virbr1 ExpiryTime:2024-09-16 11:33:58 +0000 UTC Type:0 Mac:52:54:00:9d:3a:6f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-553844 Clientid:01:52:54:00:9d:3a:6f}
	I0916 10:35:49.080480   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined IP address 192.168.39.230 and MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:49.080612   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHPort
	I0916 10:35:49.080806   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
	I0916 10:35:49.080943   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHUsername
	I0916 10:35:49.081100   18525 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/functional-553844/id_rsa Username:docker}
	I0916 10:35:49.164278   18525 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:35:49.168341   18525 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 10:35:49.168356   18525 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3851/.minikube/addons for local assets ...
	I0916 10:35:49.168457   18525 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3851/.minikube/files for local assets ...
	I0916 10:35:49.168550   18525 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem -> 112032.pem in /etc/ssl/certs
	I0916 10:35:49.168630   18525 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/test/nested/copy/11203/hosts -> hosts in /etc/test/nested/copy/11203
	I0916 10:35:49.168671   18525 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/11203
	I0916 10:35:49.178688   18525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem --> /etc/ssl/certs/112032.pem (1708 bytes)
	I0916 10:35:49.203299   18525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/test/nested/copy/11203/hosts --> /etc/test/nested/copy/11203/hosts (40 bytes)
	I0916 10:35:49.227238   18525 start.go:296] duration metric: took 150.19355ms for postStartSetup
	I0916 10:35:49.227270   18525 fix.go:56] duration metric: took 6.5390335s for fixHost
	I0916 10:35:49.227292   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHHostname
	I0916 10:35:49.229721   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:49.230084   18525 main.go:141] libmachine: (functional-553844) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3a:6f", ip: ""} in network mk-functional-553844: {Iface:virbr1 ExpiryTime:2024-09-16 11:33:58 +0000 UTC Type:0 Mac:52:54:00:9d:3a:6f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-553844 Clientid:01:52:54:00:9d:3a:6f}
	I0916 10:35:49.230108   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined IP address 192.168.39.230 and MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:49.230254   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHPort
	I0916 10:35:49.230400   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
	I0916 10:35:49.230525   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
	I0916 10:35:49.230675   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHUsername
	I0916 10:35:49.230824   18525 main.go:141] libmachine: Using SSH client type: native
	I0916 10:35:49.230971   18525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0916 10:35:49.230975   18525 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 10:35:49.337843   18525 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726482949.326826151
	
	I0916 10:35:49.337854   18525 fix.go:216] guest clock: 1726482949.326826151
	I0916 10:35:49.337863   18525 fix.go:229] Guest: 2024-09-16 10:35:49.326826151 +0000 UTC Remote: 2024-09-16 10:35:49.227273795 +0000 UTC m=+6.659405209 (delta=99.552356ms)
	I0916 10:35:49.337905   18525 fix.go:200] guest clock delta is within tolerance: 99.552356ms
	I0916 10:35:49.337909   18525 start.go:83] releasing machines lock for "functional-553844", held for 6.649688194s
	I0916 10:35:49.337930   18525 main.go:141] libmachine: (functional-553844) Calling .DriverName
	I0916 10:35:49.338155   18525 main.go:141] libmachine: (functional-553844) Calling .GetIP
	I0916 10:35:49.340737   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:49.341087   18525 main.go:141] libmachine: (functional-553844) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3a:6f", ip: ""} in network mk-functional-553844: {Iface:virbr1 ExpiryTime:2024-09-16 11:33:58 +0000 UTC Type:0 Mac:52:54:00:9d:3a:6f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-553844 Clientid:01:52:54:00:9d:3a:6f}
	I0916 10:35:49.341111   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined IP address 192.168.39.230 and MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:49.341237   18525 main.go:141] libmachine: (functional-553844) Calling .DriverName
	I0916 10:35:49.341760   18525 main.go:141] libmachine: (functional-553844) Calling .DriverName
	I0916 10:35:49.341890   18525 main.go:141] libmachine: (functional-553844) Calling .DriverName
	I0916 10:35:49.341938   18525 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:35:49.341973   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHHostname
	I0916 10:35:49.342020   18525 ssh_runner.go:195] Run: cat /version.json
	I0916 10:35:49.342027   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHHostname
	I0916 10:35:49.344444   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:49.344803   18525 main.go:141] libmachine: (functional-553844) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3a:6f", ip: ""} in network mk-functional-553844: {Iface:virbr1 ExpiryTime:2024-09-16 11:33:58 +0000 UTC Type:0 Mac:52:54:00:9d:3a:6f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-553844 Clientid:01:52:54:00:9d:3a:6f}
	I0916 10:35:49.344824   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined IP address 192.168.39.230 and MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:49.344842   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:49.344991   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHPort
	I0916 10:35:49.345141   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
	I0916 10:35:49.345260   18525 main.go:141] libmachine: (functional-553844) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3a:6f", ip: ""} in network mk-functional-553844: {Iface:virbr1 ExpiryTime:2024-09-16 11:33:58 +0000 UTC Type:0 Mac:52:54:00:9d:3a:6f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-553844 Clientid:01:52:54:00:9d:3a:6f}
	I0916 10:35:49.345273   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined IP address 192.168.39.230 and MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:49.345292   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHUsername
	I0916 10:35:49.345448   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHPort
	I0916 10:35:49.345461   18525 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/functional-553844/id_rsa Username:docker}
	I0916 10:35:49.345608   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
	I0916 10:35:49.345747   18525 main.go:141] libmachine: (functional-553844) Calling .GetSSHUsername
	I0916 10:35:49.345877   18525 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/functional-553844/id_rsa Username:docker}
	I0916 10:35:49.443002   18525 ssh_runner.go:195] Run: systemctl --version
	I0916 10:35:49.449614   18525 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 10:35:49.596269   18525 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0916 10:35:49.602475   18525 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 10:35:49.602526   18525 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:35:49.611756   18525 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0916 10:35:49.611766   18525 start.go:495] detecting cgroup driver to use...
	I0916 10:35:49.611824   18525 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 10:35:49.628855   18525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 10:35:49.642697   18525 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:35:49.642752   18525 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:35:49.656384   18525 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:35:49.669903   18525 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:35:49.802721   18525 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:35:49.941918   18525 docker.go:233] disabling docker service ...
	I0916 10:35:49.941969   18525 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:35:49.958790   18525 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:35:49.973275   18525 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:35:50.101548   18525 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:35:50.229058   18525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:35:50.243779   18525 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:35:50.264191   18525 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 10:35:50.264234   18525 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:35:50.274752   18525 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 10:35:50.274787   18525 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:35:50.285273   18525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:35:50.295681   18525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:35:50.306207   18525 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:35:50.316754   18525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:35:50.326994   18525 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:35:50.338261   18525 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:35:50.348587   18525 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:35:50.358102   18525 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:35:50.367334   18525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:35:50.494296   18525 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 10:35:57.749446   18525 ssh_runner.go:235] Completed: sudo systemctl restart crio: (7.255125663s)
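
The sed edits above (pause image, cgroup manager, conmon cgroup, default_sysctls) all target /etc/crio/crio.conf.d/02-crio.conf. A hedged spot-check of the values cri-o picked up after this restart, grepping the same keys the edits touch:

	minikube ssh -p functional-553844 -- \
	  "sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf"
	# crio config prints the effective (merged) configuration.
	minikube ssh -p functional-553844 -- "sudo crio config | grep -E 'pause_image|cgroup_manager'"
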
	I0916 10:35:57.749465   18525 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 10:35:57.749513   18525 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 10:35:57.754558   18525 start.go:563] Will wait 60s for crictl version
	I0916 10:35:57.754608   18525 ssh_runner.go:195] Run: which crictl
	I0916 10:35:57.758591   18525 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:35:57.797435   18525 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0916 10:35:57.797514   18525 ssh_runner.go:195] Run: crio --version
	I0916 10:35:57.826212   18525 ssh_runner.go:195] Run: crio --version
	I0916 10:35:57.857475   18525 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0916 10:35:57.858682   18525 main.go:141] libmachine: (functional-553844) Calling .GetIP
	I0916 10:35:57.861189   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:57.861453   18525 main.go:141] libmachine: (functional-553844) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3a:6f", ip: ""} in network mk-functional-553844: {Iface:virbr1 ExpiryTime:2024-09-16 11:33:58 +0000 UTC Type:0 Mac:52:54:00:9d:3a:6f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-553844 Clientid:01:52:54:00:9d:3a:6f}
	I0916 10:35:57.861474   18525 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined IP address 192.168.39.230 and MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
	I0916 10:35:57.861620   18525 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0916 10:35:57.867598   18525 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0916 10:35:57.868983   18525 kubeadm.go:883] updating cluster {Name:functional-553844 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.1 ClusterName:functional-553844 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false Mount
String:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 10:35:57.869107   18525 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:35:57.869177   18525 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:35:57.914399   18525 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 10:35:57.914408   18525 crio.go:433] Images already preloaded, skipping extraction
	I0916 10:35:57.914450   18525 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:35:57.949560   18525 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 10:35:57.949570   18525 cache_images.go:84] Images are preloaded, skipping loading
	I0916 10:35:57.949575   18525 kubeadm.go:934] updating node { 192.168.39.230 8441 v1.31.1 crio true true} ...
	I0916 10:35:57.949666   18525 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-553844 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.230
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:functional-553844 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 10:35:57.949729   18525 ssh_runner.go:195] Run: crio config
	I0916 10:35:57.995982   18525 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0916 10:35:57.996009   18525 cni.go:84] Creating CNI manager for ""
	I0916 10:35:57.996022   18525 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0916 10:35:57.996030   18525 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 10:35:57.996057   18525 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.230 APIServerPort:8441 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-553844 NodeName:functional-553844 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.230"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.230 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigO
pts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 10:35:57.996174   18525 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.230
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-553844"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.230
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.230"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 10:35:57.996229   18525 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:35:58.006808   18525 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:35:58.006895   18525 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 10:35:58.016928   18525 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0916 10:35:58.034395   18525 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:35:58.051467   18525 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2011 bytes)
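
The three scp targets above are the kubelet drop-in, the kubelet unit and the rendered kubeadm config; they can be read back from the VM directly. The last command below is not part of this run and assumes kubeadm sits next to kubelet under /var/lib/minikube/binaries/v1.31.1, but it lets kubeadm validate the rendered file:

	minikube ssh -p functional-553844 -- "sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf"
	minikube ssh -p functional-553844 -- "sudo cat /var/tmp/minikube/kubeadm.yaml.new"
	# Optional, not done by minikube here: validate the rendered kubeadm config.
	minikube ssh -p functional-553844 -- \
	  "sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new"
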
	I0916 10:35:58.068995   18525 ssh_runner.go:195] Run: grep 192.168.39.230	control-plane.minikube.internal$ /etc/hosts
	I0916 10:35:58.072954   18525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:35:58.201848   18525 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:35:58.217243   18525 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844 for IP: 192.168.39.230
	I0916 10:35:58.217256   18525 certs.go:194] generating shared ca certs ...
	I0916 10:35:58.217271   18525 certs.go:226] acquiring lock for ca certs: {Name:mkc5eb18b90c7d501da61b378999fd65b785fd5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:35:58.217440   18525 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key
	I0916 10:35:58.217483   18525 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key
	I0916 10:35:58.217490   18525 certs.go:256] generating profile certs ...
	I0916 10:35:58.217589   18525 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/client.key
	I0916 10:35:58.217652   18525 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/apiserver.key.7b9f73b3
	I0916 10:35:58.217696   18525 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/proxy-client.key
	I0916 10:35:58.217831   18525 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem (1338 bytes)
	W0916 10:35:58.217868   18525 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203_empty.pem, impossibly tiny 0 bytes
	I0916 10:35:58.217877   18525 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:35:58.217903   18525 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem (1082 bytes)
	I0916 10:35:58.217930   18525 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:35:58.217957   18525 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem (1679 bytes)
	I0916 10:35:58.218005   18525 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem (1708 bytes)
	I0916 10:35:58.218755   18525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:35:58.243657   18525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:35:58.267838   18525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:35:58.291555   18525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:35:58.315510   18525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0916 10:35:58.339081   18525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 10:35:58.362662   18525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:35:58.386270   18525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 10:35:58.410573   18525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:35:58.434749   18525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem --> /usr/share/ca-certificates/11203.pem (1338 bytes)
	I0916 10:35:58.459501   18525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem --> /usr/share/ca-certificates/112032.pem (1708 bytes)
	I0916 10:35:58.482757   18525 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 10:35:58.499985   18525 ssh_runner.go:195] Run: openssl version
	I0916 10:35:58.505649   18525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:35:58.516720   18525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:35:58.521314   18525 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:35:58.521366   18525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:35:58.527133   18525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 10:35:58.537092   18525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11203.pem && ln -fs /usr/share/ca-certificates/11203.pem /etc/ssl/certs/11203.pem"
	I0916 10:35:58.548863   18525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11203.pem
	I0916 10:35:58.553739   18525 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11203.pem
	I0916 10:35:58.553789   18525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11203.pem
	I0916 10:35:58.559937   18525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11203.pem /etc/ssl/certs/51391683.0"
	I0916 10:35:58.570077   18525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112032.pem && ln -fs /usr/share/ca-certificates/112032.pem /etc/ssl/certs/112032.pem"
	I0916 10:35:58.581619   18525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112032.pem
	I0916 10:35:58.586334   18525 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112032.pem
	I0916 10:35:58.586385   18525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112032.pem
	I0916 10:35:58.592259   18525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112032.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 10:35:58.602417   18525 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:35:58.607018   18525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0916 10:35:58.612758   18525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0916 10:35:58.618471   18525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0916 10:35:58.623983   18525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0916 10:35:58.629681   18525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0916 10:35:58.635363   18525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
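The -checkend 86400 probes above ask openssl whether each control-plane certificate will still be valid 24 hours from now. A minimal Go sketch of the same check, using crypto/x509 and an assumed certificate path (not taken from the harness itself), could look like this:

// Rough equivalent of: openssl x509 -noout -in CERT -checkend 86400
// Reports whether the certificate expires within the given duration.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True when the cert's NotAfter falls inside the next d.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Hypothetical path; the log above checks several certs under /var/lib/minikube/certs.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}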
	I0916 10:35:58.640927   18525 kubeadm.go:392] StartCluster: {Name:functional-553844 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.1 ClusterName:functional-553844 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountStr
ing:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:35:58.641024   18525 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 10:35:58.641097   18525 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 10:35:58.678179   18525 cri.go:89] found id: "c9566037419faa5d15b1a25d437bab469eb973d38022593dd1fcea875b372030"
	I0916 10:35:58.678193   18525 cri.go:89] found id: "7b4648b5566f070359fd2b71b0a6be370ac0bf268f558065e56f6c081c9da147"
	I0916 10:35:58.678197   18525 cri.go:89] found id: "a8a2455326fe0510166f5919a7ba808e002c9e7e6f6bac0073ebcdf2617d2c12"
	I0916 10:35:58.678200   18525 cri.go:89] found id: "8addedc5b3b7251bda1891ec03e7b558acf7e48a679719cc2d0eb9af89051324"
	I0916 10:35:58.678203   18525 cri.go:89] found id: "11c7df787d684e9cc5ca8f6cf633ac9055d7ef72c0a76d54b25fd4a3d62f7b02"
	I0916 10:35:58.678206   18525 cri.go:89] found id: "5ef8ee89662fcae36d3705fd7eef6e5e8e8ed2765a0c899ea2692bc5e55770bb"
	I0916 10:35:58.678209   18525 cri.go:89] found id: "dda8bc32e425e06c63a2ccb84bdd071dc515520e554e6e3a5a8b376e9a65c15a"
	I0916 10:35:58.678212   18525 cri.go:89] found id: "3e06948fb7d78a484090a08d9f88a0d72c4998279675de0ea7e60d51401d789c"
	I0916 10:35:58.678214   18525 cri.go:89] found id: "a3fe318aca7e7a437de0e19d52ce02314908224735a91ac782c8f4ee933b9539"
	I0916 10:35:58.678221   18525 cri.go:89] found id: "29f56fdf2e13c8c028dc80b03b1db0ee8da7289244b3368f9a6e8716db213d1e"
	I0916 10:35:58.678223   18525 cri.go:89] found id: "0718da2983026c5d88757cc81f8d9db82763d8eaec64c8089b706fcdc15d2866"
	I0916 10:35:58.678224   18525 cri.go:89] found id: "e2067f72690f60f2753b6b33ceaae6c8647431c4c0a6055c0943dffa6a611621"
	I0916 10:35:58.678226   18525 cri.go:89] found id: "665e5ce6ab7a5a79da4635071094125712865b62fcd581daf2db7fff5bafce8a"
	I0916 10:35:58.678228   18525 cri.go:89] found id: "84edb04959b2d1ddce7d4879036071b77ee39eb9f0e5e90edc6fbb843efd2515"
	I0916 10:35:58.678230   18525 cri.go:89] found id: ""
	I0916 10:35:58.678271   18525 ssh_runner.go:195] Run: sudo runc list -f json
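The container discovery step above first asks crictl for kube-system container IDs by label, then falls back to sudo runc list -f json. A rough Go sketch of the crictl half, assuming crictl and sudo are available on the node (it is not the harness code, which runs the same command over SSH):

// List kube-system container IDs, mirroring the crictl command in the log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func kubeSystemContainerIDs() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := kubeSystemContainerIDs()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Printf("found %d kube-system containers\n", len(ids))
}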
	
	
	==> CRI-O <==
	Sep 16 10:36:32 functional-553844 crio[4747]: time="2024-09-16 10:36:32.188410231Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:42c99506917bdb547de481b6b2b3da32439a0a4396f956f8bb3b1dcb33e0d1e9,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-ntnpc,Uid:a0dcfd13-b1bc-45ef-9800-c98d2063bd43,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1726482964503840880,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-ntnpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0dcfd13-b1bc-45ef-9800-c98d2063bd43,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-16T10:36:04.090113434Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b5b2cd43518619d818dfcf93e0e524dd8b0e680615d574fe46c92af79a3e1a44,Metadata:&PodSandboxMetadata{Name:kube-proxy-8d5zp,Uid:7709f753-5ea7-43c8-9573-107c8507e92b,Namespace:kube-system,At
tempt:2,},State:SANDBOX_READY,CreatedAt:1726482964421341057,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-8d5zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7709f753-5ea7-43c8-9573-107c8507e92b,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-16T10:36:04.090109714Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:66c3c1fc355f337c0301a8a61ebd1b9a264940d27de9e5ab8ce3e6d9ff23f695,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:f41228d6-b7ff-4315-b9c5-05b5cc4d0acd,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1726482964420348585,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41228d6-b7ff-4315-b9c5-05b5c
c4d0acd,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-16T10:36:04.090112417Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7ff3b4db4c3a1ec5a815fbd8fb33ce885ada6ab25ca592e100af16decad6e364,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-functional-553844,Uid:0
ba1ce2146f556353256cee766fb22aa,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1726482960637729447,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ba1ce2146f556353256cee766fb22aa,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 0ba1ce2146f556353256cee766fb22aa,kubernetes.io/config.seen: 2024-09-16T10:36:00.085765544Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1cf845fd98fb9cec148aebce7960a866cf0aa9de0bbccc2479d3f8356e0402a1,Metadata:&PodSandboxMetadata{Name:etcd-functional-553844,Uid:b392e106920b290edb060cbb3942770e,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1726482960635566812,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: b392e106920b290edb060cbb3942770e,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.230:2379,kubernetes.io/config.hash: b392e106920b290edb060cbb3942770e,kubernetes.io/config.seen: 2024-09-16T10:36:00.085760633Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:30d387489b797f7f610fee5a80ba390e392d599bf0fcae9adff2ab82e5282aff,Metadata:&PodSandboxMetadata{Name:kube-scheduler-functional-553844,Uid:8e9406d783b81f1f83bb9b03dd50757a,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1726482960630161383,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e9406d783b81f1f83bb9b03dd50757a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 8e9406d783b81f1f83bb9b03dd50757a,kubernetes.io/config.seen: 2024-09-16T10:36:00.085766
368Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4f30e9290df9fed7dc156600cc22a14550577fc179ba762933d7ddb98b54f18d,Metadata:&PodSandboxMetadata{Name:kube-apiserver-functional-553844,Uid:9a02ea4105f59739cf4b87fcb1443f22,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726482960628528424,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a02ea4105f59739cf4b87fcb1443f22,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.230:8441,kubernetes.io/config.hash: 9a02ea4105f59739cf4b87fcb1443f22,kubernetes.io/config.seen: 2024-09-16T10:36:00.085764464Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=71c4740b-320a-43fa-a6a8-7ed09fd42653 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 16 10:36:32 functional-553844 crio[4747]: time="2024-09-16 10:36:32.188969204Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=de73b7f4-cb18-45a5-9186-4ae6705125f7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:36:32 functional-553844 crio[4747]: time="2024-09-16 10:36:32.189093281Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=de73b7f4-cb18-45a5-9186-4ae6705125f7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:36:32 functional-553844 crio[4747]: time="2024-09-16 10:36:32.189254631Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:11b04a7db79238496dd7e10a87ed4228200fc6950497535ac76465928dced22a,PodSandboxId:42c99506917bdb547de481b6b2b3da32439a0a4396f956f8bb3b1dcb33e0d1e9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726482964931520126,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ntnpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0dcfd13-b1bc-45ef-9800-c98d2063bd43,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6cef4575c2c36cab82d9941cc254a3c3977cd977294bd11ce7e392d1ed1ba87,PodSandboxId:b5b2cd43518619d818dfcf93e0e524dd8b0e680615d574fe46c92af79a3e1a44,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726482964645653251,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8d5zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 7709f753-5ea7-43c8-9573-107c8507e92b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:410bd23d1eb3a430c5d377563c819cc6316d348e66c37cda9c98f030584522d1,PodSandboxId:66c3c1fc355f337c0301a8a61ebd1b9a264940d27de9e5ab8ce3e6d9ff23f695,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726482964572647163,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41
228d6-b7ff-4315-b9c5-05b5cc4d0acd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:161c7c3a6dbc9f43b2edc204c828f8b7b5673da629beae63db044d512333e2ff,PodSandboxId:1cf845fd98fb9cec148aebce7960a866cf0aa9de0bbccc2479d3f8356e0402a1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726482960901308832,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b392e106920b290edb060cbb3942770e,},Annotat
ions:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:281ad6489fa866a8728b85bb6026eaba03f414f1541e54e8bb20b881d74c5986,PodSandboxId:30d387489b797f7f610fee5a80ba390e392d599bf0fcae9adff2ab82e5282aff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726482960907328332,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e9406d783b81f1f83bb9b03dd50757a,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9f67c6f5bac2b4d892ff7cea8997f9b3637e462d67156858bf6b4bc4872dbb8,PodSandboxId:7ff3b4db4c3a1ec5a815fbd8fb33ce885ada6ab25ca592e100af16decad6e364,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726482960864170254,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ba1ce2146f556353256cee766fb22aa,},Annota
tions:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40e128caccd10da3c2b236dc916c1ea036d584c77141e639345656d122c4edf5,PodSandboxId:4f30e9290df9fed7dc156600cc22a14550577fc179ba762933d7ddb98b54f18d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726482960795838086,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a02ea4105f59739cf4b87fcb1443f22,},Annotations:map[str
ing]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=de73b7f4-cb18-45a5-9186-4ae6705125f7 name=/runtime.v1.RuntimeService/ListContainers
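The ListContainers request/response pairs in this journal are CRI gRPC calls made against CRI-O by the kubelet and the test harness. A small client sketch of the same unfiltered call, assuming CRI-O's default socket path and the k8s.io/cri-api v1 package, might look like this:

// Query a CRI runtime for its full container list over the gRPC socket.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed socket path; CRI-O's default is /var/run/crio/crio.sock.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// No filter, so the runtime returns every container, as in the log above.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s %s %s\n", c.Id, c.Metadata.Name, c.State)
	}
}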
	Sep 16 10:36:32 functional-553844 crio[4747]: time="2024-09-16 10:36:32.218789115Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=01462e3e-33c1-40db-b828-fef1bcab5fc2 name=/runtime.v1.RuntimeService/Version
	Sep 16 10:36:32 functional-553844 crio[4747]: time="2024-09-16 10:36:32.219133746Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=01462e3e-33c1-40db-b828-fef1bcab5fc2 name=/runtime.v1.RuntimeService/Version
	Sep 16 10:36:32 functional-553844 crio[4747]: time="2024-09-16 10:36:32.220205227Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1079c2c9-ec34-404e-93f2-bd3170cede02 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:36:32 functional-553844 crio[4747]: time="2024-09-16 10:36:32.220709280Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482992220688370,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157170,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1079c2c9-ec34-404e-93f2-bd3170cede02 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:36:32 functional-553844 crio[4747]: time="2024-09-16 10:36:32.221540000Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9c6e571e-bf33-4c3e-91e0-4b2fa0430f7d name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:36:32 functional-553844 crio[4747]: time="2024-09-16 10:36:32.221610658Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9c6e571e-bf33-4c3e-91e0-4b2fa0430f7d name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:36:32 functional-553844 crio[4747]: time="2024-09-16 10:36:32.221933249Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:11b04a7db79238496dd7e10a87ed4228200fc6950497535ac76465928dced22a,PodSandboxId:42c99506917bdb547de481b6b2b3da32439a0a4396f956f8bb3b1dcb33e0d1e9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726482964931520126,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ntnpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0dcfd13-b1bc-45ef-9800-c98d2063bd43,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6cef4575c2c36cab82d9941cc254a3c3977cd977294bd11ce7e392d1ed1ba87,PodSandboxId:b5b2cd43518619d818dfcf93e0e524dd8b0e680615d574fe46c92af79a3e1a44,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726482964645653251,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8d5zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 7709f753-5ea7-43c8-9573-107c8507e92b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:410bd23d1eb3a430c5d377563c819cc6316d348e66c37cda9c98f030584522d1,PodSandboxId:66c3c1fc355f337c0301a8a61ebd1b9a264940d27de9e5ab8ce3e6d9ff23f695,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726482964572647163,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41
228d6-b7ff-4315-b9c5-05b5cc4d0acd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:161c7c3a6dbc9f43b2edc204c828f8b7b5673da629beae63db044d512333e2ff,PodSandboxId:1cf845fd98fb9cec148aebce7960a866cf0aa9de0bbccc2479d3f8356e0402a1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726482960901308832,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b392e106920b290edb060cbb3942770e,},Annotat
ions:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:281ad6489fa866a8728b85bb6026eaba03f414f1541e54e8bb20b881d74c5986,PodSandboxId:30d387489b797f7f610fee5a80ba390e392d599bf0fcae9adff2ab82e5282aff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726482960907328332,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e9406d783b81f1f83bb9b03dd50757a,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9f67c6f5bac2b4d892ff7cea8997f9b3637e462d67156858bf6b4bc4872dbb8,PodSandboxId:7ff3b4db4c3a1ec5a815fbd8fb33ce885ada6ab25ca592e100af16decad6e364,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726482960864170254,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ba1ce2146f556353256cee766fb22aa,},Annota
tions:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40e128caccd10da3c2b236dc916c1ea036d584c77141e639345656d122c4edf5,PodSandboxId:4f30e9290df9fed7dc156600cc22a14550577fc179ba762933d7ddb98b54f18d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726482960795838086,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a02ea4105f59739cf4b87fcb1443f22,},Annotations:map[str
ing]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9566037419faa5d15b1a25d437bab469eb973d38022593dd1fcea875b372030,PodSandboxId:224c8313d2a4b81f7902acab9a95c4674bb885601e7d6c432c194d4704f68448,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726482907910102968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e9406d783b81f1f83bb9b03dd50757a,},Annotations:map[string]string{io.
kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b4648b5566f070359fd2b71b0a6be370ac0bf268f558065e56f6c081c9da147,PodSandboxId:786e02c9f268ffa4631ec0765f6e369ee8084dbb9f6f0ceb206328ad1483a95b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726482907861534544,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ba1ce2146f556353256cee766fb22aa,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8a2455326fe0510166f5919a7ba808e002c9e7e6f6bac0073ebcdf2617d2c12,PodSandboxId:f630bd7b31a9986a64781d12465cbfb94677fffd47ee676d73cb45631f7bb0b1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726482907858407120,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cf351cdb4e05fb19a16881fc8f9a8bc,},Annotations:map[string]string{io.k
ubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ef8ee89662fcae36d3705fd7eef6e5e8e8ed2765a0c899ea2692bc5e55770bb,PodSandboxId:795a8e1b509b3194f41662edcf7b963b352e10b9d3a0b2b7bc09bb8c879e6c3c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726482895162476253,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8d5zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7709f753-5ea7-43c8-9573-107c8507e92b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59
,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c7df787d684e9cc5ca8f6cf633ac9055d7ef72c0a76d54b25fd4a3d62f7b02,PodSandboxId:f234b24619f341b5993ad8541868405403fc798ed93fed40f3108616a4659944,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726482895179848533,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41228d6-b7ff-4315-b9c5-05b5cc4d0acd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8addedc5b3b7251bda1891ec03e7b558acf7e48a679719cc2d0eb9af89051324,PodSandboxId:5de6db3341a3523f45534d3d25f38a36a50d8a1f094c79ff3c7afffbde2686bb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726482895774247566,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ntnpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0dcfd13-b1bc-45ef-9800-c98d2063bd43,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"
dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dda8bc32e425e06c63a2ccb84bdd071dc515520e554e6e3a5a8b376e9a65c15a,PodSandboxId:b212b903ed97c57b27e7215769435327df006237ec57bc4233e178bfb5d746f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726482895106900933,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553844,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: b392e106920b290edb060cbb3942770e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9c6e571e-bf33-4c3e-91e0-4b2fa0430f7d name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:36:32 functional-553844 crio[4747]: time="2024-09-16 10:36:32.274272912Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=845fadbc-917e-4983-b385-326c6be00310 name=/runtime.v1.RuntimeService/Version
	Sep 16 10:36:32 functional-553844 crio[4747]: time="2024-09-16 10:36:32.274379548Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=845fadbc-917e-4983-b385-326c6be00310 name=/runtime.v1.RuntimeService/Version
	Sep 16 10:36:32 functional-553844 crio[4747]: time="2024-09-16 10:36:32.276424967Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=143a3d08-af2a-4780-9eb3-7d436d53701d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:36:32 functional-553844 crio[4747]: time="2024-09-16 10:36:32.277195398Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482992277161576,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157170,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=143a3d08-af2a-4780-9eb3-7d436d53701d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:36:32 functional-553844 crio[4747]: time="2024-09-16 10:36:32.278108272Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aa6a1f5c-38c9-4ed8-b946-338efb6ee578 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:36:32 functional-553844 crio[4747]: time="2024-09-16 10:36:32.278205532Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aa6a1f5c-38c9-4ed8-b946-338efb6ee578 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:36:32 functional-553844 crio[4747]: time="2024-09-16 10:36:32.278532914Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:11b04a7db79238496dd7e10a87ed4228200fc6950497535ac76465928dced22a,PodSandboxId:42c99506917bdb547de481b6b2b3da32439a0a4396f956f8bb3b1dcb33e0d1e9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726482964931520126,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ntnpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0dcfd13-b1bc-45ef-9800-c98d2063bd43,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6cef4575c2c36cab82d9941cc254a3c3977cd977294bd11ce7e392d1ed1ba87,PodSandboxId:b5b2cd43518619d818dfcf93e0e524dd8b0e680615d574fe46c92af79a3e1a44,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726482964645653251,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8d5zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 7709f753-5ea7-43c8-9573-107c8507e92b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:410bd23d1eb3a430c5d377563c819cc6316d348e66c37cda9c98f030584522d1,PodSandboxId:66c3c1fc355f337c0301a8a61ebd1b9a264940d27de9e5ab8ce3e6d9ff23f695,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726482964572647163,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41
228d6-b7ff-4315-b9c5-05b5cc4d0acd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:161c7c3a6dbc9f43b2edc204c828f8b7b5673da629beae63db044d512333e2ff,PodSandboxId:1cf845fd98fb9cec148aebce7960a866cf0aa9de0bbccc2479d3f8356e0402a1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726482960901308832,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b392e106920b290edb060cbb3942770e,},Annotat
ions:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:281ad6489fa866a8728b85bb6026eaba03f414f1541e54e8bb20b881d74c5986,PodSandboxId:30d387489b797f7f610fee5a80ba390e392d599bf0fcae9adff2ab82e5282aff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726482960907328332,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e9406d783b81f1f83bb9b03dd50757a,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9f67c6f5bac2b4d892ff7cea8997f9b3637e462d67156858bf6b4bc4872dbb8,PodSandboxId:7ff3b4db4c3a1ec5a815fbd8fb33ce885ada6ab25ca592e100af16decad6e364,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726482960864170254,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ba1ce2146f556353256cee766fb22aa,},Annota
tions:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40e128caccd10da3c2b236dc916c1ea036d584c77141e639345656d122c4edf5,PodSandboxId:4f30e9290df9fed7dc156600cc22a14550577fc179ba762933d7ddb98b54f18d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726482960795838086,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a02ea4105f59739cf4b87fcb1443f22,},Annotations:map[str
ing]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9566037419faa5d15b1a25d437bab469eb973d38022593dd1fcea875b372030,PodSandboxId:224c8313d2a4b81f7902acab9a95c4674bb885601e7d6c432c194d4704f68448,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726482907910102968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e9406d783b81f1f83bb9b03dd50757a,},Annotations:map[string]string{io.
kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b4648b5566f070359fd2b71b0a6be370ac0bf268f558065e56f6c081c9da147,PodSandboxId:786e02c9f268ffa4631ec0765f6e369ee8084dbb9f6f0ceb206328ad1483a95b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726482907861534544,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ba1ce2146f556353256cee766fb22aa,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8a2455326fe0510166f5919a7ba808e002c9e7e6f6bac0073ebcdf2617d2c12,PodSandboxId:f630bd7b31a9986a64781d12465cbfb94677fffd47ee676d73cb45631f7bb0b1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726482907858407120,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cf351cdb4e05fb19a16881fc8f9a8bc,},Annotations:map[string]string{io.k
ubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ef8ee89662fcae36d3705fd7eef6e5e8e8ed2765a0c899ea2692bc5e55770bb,PodSandboxId:795a8e1b509b3194f41662edcf7b963b352e10b9d3a0b2b7bc09bb8c879e6c3c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726482895162476253,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8d5zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7709f753-5ea7-43c8-9573-107c8507e92b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59
,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c7df787d684e9cc5ca8f6cf633ac9055d7ef72c0a76d54b25fd4a3d62f7b02,PodSandboxId:f234b24619f341b5993ad8541868405403fc798ed93fed40f3108616a4659944,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726482895179848533,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41228d6-b7ff-4315-b9c5-05b5cc4d0acd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8addedc5b3b7251bda1891ec03e7b558acf7e48a679719cc2d0eb9af89051324,PodSandboxId:5de6db3341a3523f45534d3d25f38a36a50d8a1f094c79ff3c7afffbde2686bb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726482895774247566,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ntnpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0dcfd13-b1bc-45ef-9800-c98d2063bd43,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"
dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dda8bc32e425e06c63a2ccb84bdd071dc515520e554e6e3a5a8b376e9a65c15a,PodSandboxId:b212b903ed97c57b27e7215769435327df006237ec57bc4233e178bfb5d746f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726482895106900933,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553844,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: b392e106920b290edb060cbb3942770e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=aa6a1f5c-38c9-4ed8-b946-338efb6ee578 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:36:32 functional-553844 crio[4747]: time="2024-09-16 10:36:32.320776931Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b62dff6f-c880-46bf-a15c-5a3ad3941107 name=/runtime.v1.RuntimeService/Version
	Sep 16 10:36:32 functional-553844 crio[4747]: time="2024-09-16 10:36:32.320871742Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b62dff6f-c880-46bf-a15c-5a3ad3941107 name=/runtime.v1.RuntimeService/Version
	Sep 16 10:36:32 functional-553844 crio[4747]: time="2024-09-16 10:36:32.323192164Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=924bbcf6-5a0b-48a1-964c-c0a296ffcd2b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:36:32 functional-553844 crio[4747]: time="2024-09-16 10:36:32.323660894Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482992323634115,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157170,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=924bbcf6-5a0b-48a1-964c-c0a296ffcd2b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:36:32 functional-553844 crio[4747]: time="2024-09-16 10:36:32.324296975Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=581d2c39-eb32-4ce5-9c39-dffd8ab939a0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:36:32 functional-553844 crio[4747]: time="2024-09-16 10:36:32.324353999Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=581d2c39-eb32-4ce5-9c39-dffd8ab939a0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:36:32 functional-553844 crio[4747]: time="2024-09-16 10:36:32.324654220Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:11b04a7db79238496dd7e10a87ed4228200fc6950497535ac76465928dced22a,PodSandboxId:42c99506917bdb547de481b6b2b3da32439a0a4396f956f8bb3b1dcb33e0d1e9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726482964931520126,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ntnpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0dcfd13-b1bc-45ef-9800-c98d2063bd43,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6cef4575c2c36cab82d9941cc254a3c3977cd977294bd11ce7e392d1ed1ba87,PodSandboxId:b5b2cd43518619d818dfcf93e0e524dd8b0e680615d574fe46c92af79a3e1a44,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726482964645653251,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8d5zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 7709f753-5ea7-43c8-9573-107c8507e92b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:410bd23d1eb3a430c5d377563c819cc6316d348e66c37cda9c98f030584522d1,PodSandboxId:66c3c1fc355f337c0301a8a61ebd1b9a264940d27de9e5ab8ce3e6d9ff23f695,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726482964572647163,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41
228d6-b7ff-4315-b9c5-05b5cc4d0acd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:161c7c3a6dbc9f43b2edc204c828f8b7b5673da629beae63db044d512333e2ff,PodSandboxId:1cf845fd98fb9cec148aebce7960a866cf0aa9de0bbccc2479d3f8356e0402a1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726482960901308832,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b392e106920b290edb060cbb3942770e,},Annotat
ions:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:281ad6489fa866a8728b85bb6026eaba03f414f1541e54e8bb20b881d74c5986,PodSandboxId:30d387489b797f7f610fee5a80ba390e392d599bf0fcae9adff2ab82e5282aff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726482960907328332,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e9406d783b81f1f83bb9b03dd50757a,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9f67c6f5bac2b4d892ff7cea8997f9b3637e462d67156858bf6b4bc4872dbb8,PodSandboxId:7ff3b4db4c3a1ec5a815fbd8fb33ce885ada6ab25ca592e100af16decad6e364,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726482960864170254,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ba1ce2146f556353256cee766fb22aa,},Annota
tions:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40e128caccd10da3c2b236dc916c1ea036d584c77141e639345656d122c4edf5,PodSandboxId:4f30e9290df9fed7dc156600cc22a14550577fc179ba762933d7ddb98b54f18d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726482960795838086,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a02ea4105f59739cf4b87fcb1443f22,},Annotations:map[str
ing]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9566037419faa5d15b1a25d437bab469eb973d38022593dd1fcea875b372030,PodSandboxId:224c8313d2a4b81f7902acab9a95c4674bb885601e7d6c432c194d4704f68448,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726482907910102968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e9406d783b81f1f83bb9b03dd50757a,},Annotations:map[string]string{io.
kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b4648b5566f070359fd2b71b0a6be370ac0bf268f558065e56f6c081c9da147,PodSandboxId:786e02c9f268ffa4631ec0765f6e369ee8084dbb9f6f0ceb206328ad1483a95b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726482907861534544,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ba1ce2146f556353256cee766fb22aa,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8a2455326fe0510166f5919a7ba808e002c9e7e6f6bac0073ebcdf2617d2c12,PodSandboxId:f630bd7b31a9986a64781d12465cbfb94677fffd47ee676d73cb45631f7bb0b1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726482907858407120,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-553844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cf351cdb4e05fb19a16881fc8f9a8bc,},Annotations:map[string]string{io.k
ubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ef8ee89662fcae36d3705fd7eef6e5e8e8ed2765a0c899ea2692bc5e55770bb,PodSandboxId:795a8e1b509b3194f41662edcf7b963b352e10b9d3a0b2b7bc09bb8c879e6c3c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726482895162476253,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8d5zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7709f753-5ea7-43c8-9573-107c8507e92b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59
,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c7df787d684e9cc5ca8f6cf633ac9055d7ef72c0a76d54b25fd4a3d62f7b02,PodSandboxId:f234b24619f341b5993ad8541868405403fc798ed93fed40f3108616a4659944,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726482895179848533,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41228d6-b7ff-4315-b9c5-05b5cc4d0acd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8addedc5b3b7251bda1891ec03e7b558acf7e48a679719cc2d0eb9af89051324,PodSandboxId:5de6db3341a3523f45534d3d25f38a36a50d8a1f094c79ff3c7afffbde2686bb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726482895774247566,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ntnpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0dcfd13-b1bc-45ef-9800-c98d2063bd43,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"
dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dda8bc32e425e06c63a2ccb84bdd071dc515520e554e6e3a5a8b376e9a65c15a,PodSandboxId:b212b903ed97c57b27e7215769435327df006237ec57bc4233e178bfb5d746f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726482895106900933,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553844,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: b392e106920b290edb060cbb3942770e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=581d2c39-eb32-4ce5-9c39-dffd8ab939a0 name=/runtime.v1.RuntimeService/ListContainers
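The ListContainers debug entries above are the raw CRI responses behind the container status table that follows. As a minimal sketch only, assuming the unix:///var/run/crio/crio.sock socket named in the node annotations below (the program is hypothetical and not part of this run), the same /runtime.v1.RuntimeService/ListContainers call can be issued from Go:

    // cri_list.go - hypothetical probe of the CRI-O runtime service logged above.
    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // Socket path is an assumption copied from the kubeadm cri-socket annotation.
        conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        client := runtimeapi.NewRuntimeServiceClient(conn)
        resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
        if err != nil {
            panic(err)
        }
        for _, c := range resp.Containers {
            // Name and Attempt mirror the NAME and ATTEMPT columns of the table below.
            fmt.Printf("%.13s  %-25s attempt=%d state=%s\n",
                c.Id, c.Metadata.Name, c.Metadata.Attempt, c.State)
        }
    }

In these particular responses the Metadata.Attempt value tracks the io.kubernetes.container.restartCount annotation and the ATTEMPT column in the status table below.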
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	11b04a7db7923       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   27 seconds ago       Running             coredns                   2                   42c99506917bd       coredns-7c65d6cfc9-ntnpc
	f6cef4575c2c3       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   27 seconds ago       Running             kube-proxy                2                   b5b2cd4351861       kube-proxy-8d5zp
	410bd23d1eb3a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   27 seconds ago       Running             storage-provisioner       2                   66c3c1fc355f3       storage-provisioner
	281ad6489fa86       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   31 seconds ago       Running             kube-scheduler            3                   30d387489b797       kube-scheduler-functional-553844
	161c7c3a6dbc9       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   31 seconds ago       Running             etcd                      2                   1cf845fd98fb9       etcd-functional-553844
	c9f67c6f5bac2       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   31 seconds ago       Running             kube-controller-manager   3                   7ff3b4db4c3a1       kube-controller-manager-functional-553844
	40e128caccd10       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   31 seconds ago       Running             kube-apiserver            0                   4f30e9290df9f       kube-apiserver-functional-553844
	c9566037419fa       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   About a minute ago   Exited              kube-scheduler            2                   224c8313d2a4b       kube-scheduler-functional-553844
	7b4648b5566f0       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   About a minute ago   Exited              kube-controller-manager   2                   786e02c9f268f       kube-controller-manager-functional-553844
	a8a2455326fe0       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   About a minute ago   Exited              kube-apiserver            2                   f630bd7b31a99       kube-apiserver-functional-553844
	8addedc5b3b72       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   About a minute ago   Exited              coredns                   1                   5de6db3341a35       coredns-7c65d6cfc9-ntnpc
	11c7df787d684       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   About a minute ago   Exited              storage-provisioner       1                   f234b24619f34       storage-provisioner
	5ef8ee89662fc       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   About a minute ago   Exited              kube-proxy                1                   795a8e1b509b3       kube-proxy-8d5zp
	dda8bc32e425e       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   About a minute ago   Exited              etcd                      1                   b212b903ed97c       etcd-functional-553844
	
	
	==> coredns [11b04a7db79238496dd7e10a87ed4228200fc6950497535ac76465928dced22a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:34318 - 64894 "HINFO IN 1843759644485451532.7278217676100105798. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.028340041s
	
	
	==> coredns [8addedc5b3b7251bda1891ec03e7b558acf7e48a679719cc2d0eb9af89051324] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:49303 - 36766 "HINFO IN 7792431763943854020.5109512536554140100. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.028767023s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-553844
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-553844
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=functional-553844
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_34_24_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:34:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-553844
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:36:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:36:03 +0000   Mon, 16 Sep 2024 10:34:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:36:03 +0000   Mon, 16 Sep 2024 10:34:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:36:03 +0000   Mon, 16 Sep 2024 10:34:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:36:03 +0000   Mon, 16 Sep 2024 10:34:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.230
	  Hostname:    functional-553844
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 e02954b5bf404845959584edf15b4c70
	  System UUID:                e02954b5-bf40-4845-9595-84edf15b4c70
	  Boot ID:                    f32c4525-4b20-48f0-8997-63a4d85e0a22
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-ntnpc                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m4s
	  kube-system                 etcd-functional-553844                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m9s
	  kube-system                 kube-apiserver-functional-553844             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-controller-manager-functional-553844    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 kube-proxy-8d5zp                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-scheduler-functional-553844             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 2m3s               kube-proxy       
	  Normal  Starting                 27s                kube-proxy       
	  Normal  Starting                 94s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m9s               kubelet          Node functional-553844 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  2m9s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    2m9s               kubelet          Node functional-553844 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m9s               kubelet          Node functional-553844 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m9s               kubelet          Starting kubelet.
	  Normal  NodeReady                2m8s               kubelet          Node functional-553844 status is now: NodeReady
	  Normal  RegisteredNode           2m5s               node-controller  Node functional-553844 event: Registered Node functional-553844 in Controller
	  Normal  NodeHasSufficientMemory  85s (x8 over 85s)  kubelet          Node functional-553844 status is now: NodeHasSufficientMemory
	  Normal  Starting                 85s                kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    85s (x8 over 85s)  kubelet          Node functional-553844 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     85s (x7 over 85s)  kubelet          Node functional-553844 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  85s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           78s                node-controller  Node functional-553844 event: Registered Node functional-553844 in Controller
	  Normal  Starting                 32s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  32s (x8 over 32s)  kubelet          Node functional-553844 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    32s (x8 over 32s)  kubelet          Node functional-553844 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     32s (x7 over 32s)  kubelet          Node functional-553844 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  32s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           25s                node-controller  Node functional-553844 event: Registered Node functional-553844 in Controller
	
	
	==> dmesg <==
	[  +0.603762] kauditd_printk_skb: 46 callbacks suppressed
	[ +16.520372] systemd-fstab-generator[2167]: Ignoring "noauto" option for root device
	[  +0.078621] kauditd_printk_skb: 60 callbacks suppressed
	[  +0.049083] systemd-fstab-generator[2179]: Ignoring "noauto" option for root device
	[  +0.190042] systemd-fstab-generator[2193]: Ignoring "noauto" option for root device
	[  +0.140022] systemd-fstab-generator[2205]: Ignoring "noauto" option for root device
	[  +0.285394] systemd-fstab-generator[2233]: Ignoring "noauto" option for root device
	[  +8.132216] systemd-fstab-generator[2349]: Ignoring "noauto" option for root device
	[  +0.075744] kauditd_printk_skb: 100 callbacks suppressed
	[Sep16 10:35] systemd-fstab-generator[3196]: Ignoring "noauto" option for root device
	[  +0.082290] kauditd_printk_skb: 96 callbacks suppressed
	[  +9.215887] kauditd_printk_skb: 32 callbacks suppressed
	[ +11.912179] systemd-fstab-generator[3473]: Ignoring "noauto" option for root device
	[ +21.316095] systemd-fstab-generator[4674]: Ignoring "noauto" option for root device
	[  +0.074178] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.066789] systemd-fstab-generator[4686]: Ignoring "noauto" option for root device
	[  +0.159163] systemd-fstab-generator[4700]: Ignoring "noauto" option for root device
	[  +0.128627] systemd-fstab-generator[4712]: Ignoring "noauto" option for root device
	[  +0.261837] systemd-fstab-generator[4740]: Ignoring "noauto" option for root device
	[  +7.709349] systemd-fstab-generator[4854]: Ignoring "noauto" option for root device
	[  +0.074913] kauditd_printk_skb: 100 callbacks suppressed
	[  +1.702685] systemd-fstab-generator[4977]: Ignoring "noauto" option for root device
	[Sep16 10:36] kauditd_printk_skb: 85 callbacks suppressed
	[  +5.334379] kauditd_printk_skb: 39 callbacks suppressed
	[  +9.139453] systemd-fstab-generator[5796]: Ignoring "noauto" option for root device
	
	
	==> etcd [161c7c3a6dbc9f43b2edc204c828f8b7b5673da629beae63db044d512333e2ff] <==
	{"level":"info","ts":"2024-09-16T10:36:01.399752Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:36:01.404521Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T10:36:01.412218Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"f4acae94ef986412","initial-advertise-peer-urls":["https://192.168.39.230:2380"],"listen-peer-urls":["https://192.168.39.230:2380"],"advertise-client-urls":["https://192.168.39.230:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.230:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T10:36:01.412273Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T10:36:01.412554Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.230:2380"}
	{"level":"info","ts":"2024-09-16T10:36:01.412584Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.230:2380"}
	{"level":"info","ts":"2024-09-16T10:36:01.415172Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T10:36:01.415237Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T10:36:01.415247Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T10:36:02.339007Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-16T10:36:02.339191Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-16T10:36:02.339252Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 received MsgPreVoteResp from f4acae94ef986412 at term 3"}
	{"level":"info","ts":"2024-09-16T10:36:02.339289Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 became candidate at term 4"}
	{"level":"info","ts":"2024-09-16T10:36:02.339313Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 received MsgVoteResp from f4acae94ef986412 at term 4"}
	{"level":"info","ts":"2024-09-16T10:36:02.339348Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 became leader at term 4"}
	{"level":"info","ts":"2024-09-16T10:36:02.339373Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f4acae94ef986412 elected leader f4acae94ef986412 at term 4"}
	{"level":"info","ts":"2024-09-16T10:36:02.345885Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"f4acae94ef986412","local-member-attributes":"{Name:functional-553844 ClientURLs:[https://192.168.39.230:2379]}","request-path":"/0/members/f4acae94ef986412/attributes","cluster-id":"b0aea99135fe63d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:36:02.345893Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:36:02.346138Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:36:02.346171Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:36:02.345925Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:36:02.347252Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:36:02.347252Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:36:02.348114Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T10:36:02.348659Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.230:2379"}
	
	
	==> etcd [dda8bc32e425e06c63a2ccb84bdd071dc515520e554e6e3a5a8b376e9a65c15a] <==
	{"level":"info","ts":"2024-09-16T10:34:56.955132Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-16T10:34:56.955163Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 received MsgPreVoteResp from f4acae94ef986412 at term 2"}
	{"level":"info","ts":"2024-09-16T10:34:56.955177Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 became candidate at term 3"}
	{"level":"info","ts":"2024-09-16T10:34:56.955183Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 received MsgVoteResp from f4acae94ef986412 at term 3"}
	{"level":"info","ts":"2024-09-16T10:34:56.955191Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 became leader at term 3"}
	{"level":"info","ts":"2024-09-16T10:34:56.955203Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f4acae94ef986412 elected leader f4acae94ef986412 at term 3"}
	{"level":"info","ts":"2024-09-16T10:34:56.959113Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"f4acae94ef986412","local-member-attributes":"{Name:functional-553844 ClientURLs:[https://192.168.39.230:2379]}","request-path":"/0/members/f4acae94ef986412/attributes","cluster-id":"b0aea99135fe63d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:34:56.959223Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:34:56.959352Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:34:56.959702Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:34:56.959718Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:34:56.960394Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:34:56.960508Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:34:56.961360Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T10:34:56.961615Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.230:2379"}
	{"level":"info","ts":"2024-09-16T10:35:43.615417Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-16T10:35:43.615457Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-553844","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.230:2380"],"advertise-client-urls":["https://192.168.39.230:2379"]}
	{"level":"warn","ts":"2024-09-16T10:35:43.615668Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:35:43.615755Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:35:43.715379Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.230:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:35:43.715441Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.230:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-16T10:35:43.716847Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"f4acae94ef986412","current-leader-member-id":"f4acae94ef986412"}
	{"level":"info","ts":"2024-09-16T10:35:43.720365Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.230:2380"}
	{"level":"info","ts":"2024-09-16T10:35:43.720475Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.230:2380"}
	{"level":"info","ts":"2024-09-16T10:35:43.720485Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-553844","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.230:2380"],"advertise-client-urls":["https://192.168.39.230:2379"]}
	
	
	==> kernel <==
	 10:36:32 up 2 min,  0 users,  load average: 0.97, 0.33, 0.12
	Linux functional-553844 5.10.207 #1 SMP Sun Sep 15 20:39:46 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [40e128caccd10da3c2b236dc916c1ea036d584c77141e639345656d122c4edf5] <==
	I0916 10:36:03.700643       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0916 10:36:03.700962       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0916 10:36:03.702154       1 aggregator.go:171] initial CRD sync complete...
	I0916 10:36:03.702186       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 10:36:03.702192       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 10:36:03.702197       1 cache.go:39] Caches are synced for autoregister controller
	I0916 10:36:03.704489       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0916 10:36:03.704920       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0916 10:36:03.704998       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0916 10:36:03.705227       1 shared_informer.go:320] Caches are synced for configmaps
	I0916 10:36:03.705335       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0916 10:36:03.705520       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0916 10:36:03.709308       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0916 10:36:03.709342       1 policy_source.go:224] refreshing policies
	I0916 10:36:03.714744       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 10:36:03.724995       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0916 10:36:03.733976       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0916 10:36:04.601449       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0916 10:36:05.413610       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 10:36:05.430933       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 10:36:05.470801       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 10:36:05.494981       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 10:36:05.501594       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 10:36:07.306638       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 10:36:07.353251       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [a8a2455326fe0510166f5919a7ba808e002c9e7e6f6bac0073ebcdf2617d2c12] <==
	I0916 10:35:10.821388       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0916 10:35:10.821418       1 policy_source.go:224] refreshing policies
	I0916 10:35:10.848027       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0916 10:35:10.848431       1 aggregator.go:171] initial CRD sync complete...
	I0916 10:35:10.848456       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 10:35:10.848514       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 10:35:10.848521       1 cache.go:39] Caches are synced for autoregister controller
	I0916 10:35:10.891021       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0916 10:35:10.891238       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0916 10:35:10.893720       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0916 10:35:10.894833       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0916 10:35:10.894861       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0916 10:35:10.895008       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0916 10:35:10.912774       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0916 10:35:10.913152       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 10:35:10.920344       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 10:35:11.693112       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0916 10:35:11.908543       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.230]
	I0916 10:35:11.914737       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 10:35:12.098488       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 10:35:12.108702       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 10:35:12.144954       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 10:35:12.176210       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 10:35:12.183000       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 10:35:43.644862       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	
	
	==> kube-controller-manager [7b4648b5566f070359fd2b71b0a6be370ac0bf268f558065e56f6c081c9da147] <==
	I0916 10:35:14.120843       1 shared_informer.go:320] Caches are synced for HPA
	I0916 10:35:14.121152       1 shared_informer.go:320] Caches are synced for TTL
	I0916 10:35:14.122526       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0916 10:35:14.122616       1 shared_informer.go:320] Caches are synced for ephemeral
	I0916 10:35:14.122690       1 shared_informer.go:320] Caches are synced for taint
	I0916 10:35:14.122803       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0916 10:35:14.123280       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-553844"
	I0916 10:35:14.124941       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0916 10:35:14.144150       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0916 10:35:14.146147       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0916 10:35:14.148698       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0916 10:35:14.153801       1 shared_informer.go:320] Caches are synced for PV protection
	I0916 10:35:14.209749       1 shared_informer.go:320] Caches are synced for persistent volume
	I0916 10:35:14.242927       1 shared_informer.go:320] Caches are synced for attach detach
	I0916 10:35:14.298281       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:35:14.321144       1 shared_informer.go:320] Caches are synced for stateful set
	I0916 10:35:14.321212       1 shared_informer.go:320] Caches are synced for daemon sets
	I0916 10:35:14.326094       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:35:14.534087       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="385.245988ms"
	I0916 10:35:14.534305       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="82.383µs"
	I0916 10:35:14.753631       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:35:14.816601       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:35:14.816647       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0916 10:35:17.621436       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="88.997µs"
	I0916 10:35:41.634518       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-553844"
	
	
	==> kube-controller-manager [c9f67c6f5bac2b4d892ff7cea8997f9b3637e462d67156858bf6b4bc4872dbb8] <==
	I0916 10:36:07.006747       1 shared_informer.go:320] Caches are synced for deployment
	I0916 10:36:07.009845       1 shared_informer.go:320] Caches are synced for node
	I0916 10:36:07.009955       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I0916 10:36:07.010006       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0916 10:36:07.010065       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0916 10:36:07.010073       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0916 10:36:07.010176       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-553844"
	I0916 10:36:07.017945       1 shared_informer.go:320] Caches are synced for namespace
	I0916 10:36:07.018019       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0916 10:36:07.021511       1 shared_informer.go:320] Caches are synced for taint
	I0916 10:36:07.021586       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0916 10:36:07.021664       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-553844"
	I0916 10:36:07.021710       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0916 10:36:07.120592       1 shared_informer.go:320] Caches are synced for cronjob
	I0916 10:36:07.158564       1 shared_informer.go:320] Caches are synced for stateful set
	I0916 10:36:07.199273       1 shared_informer.go:320] Caches are synced for disruption
	I0916 10:36:07.211433       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:36:07.256200       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:36:07.260949       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="260.629259ms"
	I0916 10:36:07.261107       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="93.998µs"
	I0916 10:36:07.627278       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:36:07.687001       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:36:07.687093       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0916 10:36:09.540766       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="16.500721ms"
	I0916 10:36:09.541443       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="53.335µs"
	
	
	==> kube-proxy [5ef8ee89662fcae36d3705fd7eef6e5e8e8ed2765a0c899ea2692bc5e55770bb] <==
	W0916 10:34:58.431668       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:34:58.431713       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:34:58.431778       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:34:58.431838       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:34:59.284989       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:34:59.285188       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:34:59.332364       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-553844&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:34:59.332464       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-553844&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:34:59.470296       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:34:59.470425       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:35:01.798494       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-553844&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:35:01.798626       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-553844&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:35:01.949792       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:35:01.949869       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:35:02.221487       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:35:02.221565       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:35:06.652928       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:35:06.652990       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:35:07.272641       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-553844&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:35:07.272703       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-553844&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:35:07.363931       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	E0916 10:35:07.363993       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	I0916 10:35:14.930499       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:35:15.331242       1 shared_informer.go:320] Caches are synced for node config
	I0916 10:35:16.430835       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [f6cef4575c2c36cab82d9941cc254a3c3977cd977294bd11ce7e392d1ed1ba87] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0916 10:36:05.087142       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0916 10:36:05.094687       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.230"]
	E0916 10:36:05.094768       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:36:05.128908       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0916 10:36:05.128955       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0916 10:36:05.128978       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:36:05.131583       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:36:05.131810       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:36:05.131834       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:36:05.133708       1 config.go:199] "Starting service config controller"
	I0916 10:36:05.133764       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:36:05.133809       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:36:05.133827       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:36:05.134323       1 config.go:328] "Starting node config controller"
	I0916 10:36:05.134353       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:36:05.234169       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:36:05.234184       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:36:05.234413       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [281ad6489fa866a8728b85bb6026eaba03f414f1541e54e8bb20b881d74c5986] <==
	I0916 10:36:01.918697       1 serving.go:386] Generated self-signed cert in-memory
	W0916 10:36:03.635711       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0916 10:36:03.637927       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0916 10:36:03.638183       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 10:36:03.638223       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 10:36:03.699405       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0916 10:36:03.699443       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:36:03.708723       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0916 10:36:03.708883       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0916 10:36:03.708916       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 10:36:03.725362       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0916 10:36:03.809763       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [c9566037419faa5d15b1a25d437bab469eb973d38022593dd1fcea875b372030] <==
	I0916 10:35:09.773229       1 serving.go:386] Generated self-signed cert in-memory
	W0916 10:35:10.768440       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0916 10:35:10.768857       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0916 10:35:10.768917       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 10:35:10.768943       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 10:35:10.817479       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0916 10:35:10.817581       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:35:10.824338       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0916 10:35:10.824417       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 10:35:10.825100       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0916 10:35:10.825460       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0916 10:35:10.925324       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 10:35:43.621150       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0916 10:35:43.621340       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0916 10:35:43.621677       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0916 10:35:43.622018       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 16 10:36:00 functional-553844 kubelet[4984]: W0916 10:36:00.910601    4984 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-553844&limit=500&resourceVersion=0": dial tcp 192.168.39.230:8441: connect: connection refused
	Sep 16 10:36:00 functional-553844 kubelet[4984]: E0916 10:36:00.910663    4984 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-553844&limit=500&resourceVersion=0\": dial tcp 192.168.39.230:8441: connect: connection refused" logger="UnhandledError"
	Sep 16 10:36:01 functional-553844 kubelet[4984]: I0916 10:36:01.687815    4984 kubelet_node_status.go:72] "Attempting to register node" node="functional-553844"
	Sep 16 10:36:03 functional-553844 kubelet[4984]: I0916 10:36:03.749683    4984 kubelet_node_status.go:111] "Node was previously registered" node="functional-553844"
	Sep 16 10:36:03 functional-553844 kubelet[4984]: I0916 10:36:03.750196    4984 kubelet_node_status.go:75] "Successfully registered node" node="functional-553844"
	Sep 16 10:36:03 functional-553844 kubelet[4984]: E0916 10:36:03.750257    4984 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"functional-553844\": node \"functional-553844\" not found"
	Sep 16 10:36:03 functional-553844 kubelet[4984]: I0916 10:36:03.752874    4984 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 16 10:36:03 functional-553844 kubelet[4984]: I0916 10:36:03.753933    4984 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 16 10:36:03 functional-553844 kubelet[4984]: E0916 10:36:03.767512    4984 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"functional-553844\" not found"
	Sep 16 10:36:04 functional-553844 kubelet[4984]: I0916 10:36:04.085150    4984 apiserver.go:52] "Watching apiserver"
	Sep 16 10:36:04 functional-553844 kubelet[4984]: I0916 10:36:04.091164    4984 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-apiserver-functional-553844" podUID="7f3b5ce9-dbc7-45d3-8a46-1d51af0f5cac"
	Sep 16 10:36:04 functional-553844 kubelet[4984]: I0916 10:36:04.105623    4984 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 16 10:36:04 functional-553844 kubelet[4984]: I0916 10:36:04.124250    4984 kubelet.go:1900] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-functional-553844"
	Sep 16 10:36:04 functional-553844 kubelet[4984]: I0916 10:36:04.151243    4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7709f753-5ea7-43c8-9573-107c8507e92b-lib-modules\") pod \"kube-proxy-8d5zp\" (UID: \"7709f753-5ea7-43c8-9573-107c8507e92b\") " pod="kube-system/kube-proxy-8d5zp"
	Sep 16 10:36:04 functional-553844 kubelet[4984]: I0916 10:36:04.151300    4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f41228d6-b7ff-4315-b9c5-05b5cc4d0acd-tmp\") pod \"storage-provisioner\" (UID: \"f41228d6-b7ff-4315-b9c5-05b5cc4d0acd\") " pod="kube-system/storage-provisioner"
	Sep 16 10:36:04 functional-553844 kubelet[4984]: I0916 10:36:04.151318    4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7709f753-5ea7-43c8-9573-107c8507e92b-xtables-lock\") pod \"kube-proxy-8d5zp\" (UID: \"7709f753-5ea7-43c8-9573-107c8507e92b\") " pod="kube-system/kube-proxy-8d5zp"
	Sep 16 10:36:04 functional-553844 kubelet[4984]: I0916 10:36:04.189195    4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0cf351cdb4e05fb19a16881fc8f9a8bc" path="/var/lib/kubelet/pods/0cf351cdb4e05fb19a16881fc8f9a8bc/volumes"
	Sep 16 10:36:04 functional-553844 kubelet[4984]: I0916 10:36:04.191552    4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-functional-553844" podStartSLOduration=0.19153653 podStartE2EDuration="191.53653ms" podCreationTimestamp="2024-09-16 10:36:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 10:36:04.191347015 +0000 UTC m=+4.208440213" watchObservedRunningTime="2024-09-16 10:36:04.19153653 +0000 UTC m=+4.208629709"
	Sep 16 10:36:09 functional-553844 kubelet[4984]: I0916 10:36:09.508237    4984 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 16 10:36:10 functional-553844 kubelet[4984]: E0916 10:36:10.177303    4984 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482970176966980,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157170,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:36:10 functional-553844 kubelet[4984]: E0916 10:36:10.177327    4984 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482970176966980,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157170,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:36:20 functional-553844 kubelet[4984]: E0916 10:36:20.178991    4984 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482980178689452,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157170,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:36:20 functional-553844 kubelet[4984]: E0916 10:36:20.179091    4984 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482980178689452,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157170,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:36:30 functional-553844 kubelet[4984]: E0916 10:36:30.181981    4984 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482990181444413,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157170,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:36:30 functional-553844 kubelet[4984]: E0916 10:36:30.182008    4984 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726482990181444413,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157170,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [11c7df787d684e9cc5ca8f6cf633ac9055d7ef72c0a76d54b25fd4a3d62f7b02] <==
	I0916 10:34:56.077531       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 10:34:58.308783       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 10:34:58.325776       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	E0916 10:34:59.385726       1 leaderelection.go:361] Failed to update lock: Put "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0916 10:35:02.837859       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0916 10:35:07.096688       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	I0916 10:35:10.935925       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 10:35:10.936824       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-553844_6476f869-e006-4732-b59f-a625eeed2789!
	I0916 10:35:10.936273       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e694f165-5b09-46c9-81ea-a2730989eaff", APIVersion:"v1", ResourceVersion:"401", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-553844_6476f869-e006-4732-b59f-a625eeed2789 became leader
	I0916 10:35:11.037327       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-553844_6476f869-e006-4732-b59f-a625eeed2789!
	
	
	==> storage-provisioner [410bd23d1eb3a430c5d377563c819cc6316d348e66c37cda9c98f030584522d1] <==
	I0916 10:36:04.804572       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 10:36:04.881510       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 10:36:04.902536       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 10:36:22.325954       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 10:36:22.326349       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-553844_3b4a5fb5-0feb-4787-a4fd-6f23adb25700!
	I0916 10:36:22.327877       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e694f165-5b09-46c9-81ea-a2730989eaff", APIVersion:"v1", ResourceVersion:"583", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-553844_3b4a5fb5-0feb-4787-a4fd-6f23adb25700 became leader
	I0916 10:36:22.428646       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-553844_3b4a5fb5-0feb-4787-a4fd-6f23adb25700!
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0916 10:36:31.743478   20216 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19651-3851/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-553844 -n functional-553844
helpers_test.go:261: (dbg) Run:  kubectl --context functional-553844 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context functional-553844 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (457.863µs)
helpers_test.go:263: kubectl --context functional-553844 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestFunctional/parallel/NodeLabels (2.41s)
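Note (not part of the captured output): the recurring `fork/exec /usr/local/bin/kubectl: exec format error` in these subtests is the kernel's ENOEXEC, which typically means the kubectl binary on the test host was built for a different CPU architecture than the host, or is truncated/corrupt; the minikube binary itself keeps working, so only the kubectl-driven assertions fail. Below is an illustrative pre-flight check, a sketch rather than anything from the test suite, that compares the host architecture with the ELF machine type of whatever `kubectl` resolves to on PATH.

    // Illustrative pre-flight check (not part of the minikube test suite):
    // compare the host architecture with the ELF machine type of the kubectl
    // binary the tests will fork/exec, so an "exec format error" (ENOEXEC)
    // is surfaced once up front instead of failing every kubectl-based subtest.
    package main

    import (
    	"debug/elf"
    	"fmt"
    	"os/exec"
    	"runtime"
    )

    func main() {
    	path, err := exec.LookPath("kubectl") // the failing runs used /usr/local/bin/kubectl
    	if err != nil {
    		fmt.Println("kubectl not found on PATH:", err)
    		return
    	}
    	f, err := elf.Open(path)
    	if err != nil {
    		// A file that is not valid ELF (e.g. a truncated download) also
    		// produces "exec format error" at fork/exec time.
    		fmt.Printf("%s is not a readable ELF binary: %v\n", path, err)
    		return
    	}
    	defer f.Close()
    	fmt.Printf("host GOARCH=%s, %s ELF machine=%v\n", runtime.GOARCH, path, f.Machine)
    }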

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-553844 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1439: (dbg) Non-zero exit: kubectl --context functional-553844 create deployment hello-node --image=registry.k8s.io/echoserver:1.8: fork/exec /usr/local/bin/kubectl: exec format error (409.296µs)
functional_test.go:1443: failed to create hello-node deployment with this command "kubectl --context functional-553844 create deployment hello-node --image=registry.k8s.io/echoserver:1.8": fork/exec /usr/local/bin/kubectl: exec format error.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 service list
functional_test.go:1464: expected 'service list' to contain *hello-node* but got -"|-------------|------------|--------------|-----|\n|  NAMESPACE  |    NAME    | TARGET PORT  | URL |\n|-------------|------------|--------------|-----|\n| default     | kubernetes | No node port |     |\n| kube-system | kube-dns   | No node port |     |\n|-------------|------------|--------------|-----|\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 service list -o json
functional_test.go:1494: Took "286.514025ms" to run "out/minikube-linux-amd64 -p functional-553844 service list -o json"
functional_test.go:1498: expected the json of 'service list' to include "hello-node" but got *"[{\"Namespace\":\"default\",\"Name\":\"kubernetes\",\"URLs\":[],\"PortNames\":[\"No node port\"]},{\"Namespace\":\"kube-system\",\"Name\":\"kube-dns\",\"URLs\":[],\"PortNames\":[\"No node port\"]}]"*. args: "out/minikube-linux-amd64 -p functional-553844 service list -o json"
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.29s)
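Note (illustrative, not captured output): the assertion above quotes the JSON that `minikube service list -o json` returned; because the earlier DeployApp step could not create `hello-node` (same kubectl exec format error), only `kubernetes` and `kube-dns` are listed. A minimal sketch of decoding that payload and checking for a service name follows; the struct fields are inferred from the quoted output, not taken from minikube's own types.

    // Minimal sketch: decode the JSON shown in the failure above and check
    // whether a given service name is present. Field names are inferred from
    // the quoted output, not from minikube's source.
    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    type svcEntry struct {
    	Namespace string
    	Name      string
    	URLs      []string
    	PortNames []string
    }

    func main() {
    	raw := `[{"Namespace":"default","Name":"kubernetes","URLs":[],"PortNames":["No node port"]},
    	         {"Namespace":"kube-system","Name":"kube-dns","URLs":[],"PortNames":["No node port"]}]`

    	var services []svcEntry
    	if err := json.Unmarshal([]byte(raw), &services); err != nil {
    		fmt.Println("decode failed:", err)
    		return
    	}
    	found := false
    	for _, s := range services {
    		if s.Name == "hello-node" {
    			found = true
    		}
    	}
    	fmt.Println("hello-node present:", found) // false here, which is why the test fails
    }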

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 service --namespace=default --https --url hello-node
functional_test.go:1509: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-553844 service --namespace=default --https --url hello-node: exit status 115 (259.081077ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_NOT_FOUND: Service 'hello-node' was not found in 'default' namespace.
	You may select another namespace by using 'minikube service hello-node -n <namespace>'. Or list out all the services using 'minikube service list'

                                                
                                                
** /stderr **
functional_test.go:1511: failed to get service url. args "out/minikube-linux-amd64 -p functional-553844 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 service hello-node --url --format={{.IP}}
functional_test.go:1540: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-553844 service hello-node --url --format={{.IP}}: exit status 115 (277.269509ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_NOT_FOUND: Service 'hello-node' was not found in 'default' namespace.
	You may select another namespace by using 'minikube service hello-node -n <namespace>'. Or list out all the services using 'minikube service list'

                                                
                                                
** /stderr **
functional_test.go:1542: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-553844 service hello-node --url --format={{.IP}}": exit status 115
functional_test.go:1548: "" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (2.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-553844 /tmp/TestFunctionalparallelMountCmdany-port525922369/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726482987895365534" to /tmp/TestFunctionalparallelMountCmdany-port525922369/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726482987895365534" to /tmp/TestFunctionalparallelMountCmdany-port525922369/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726482987895365534" to /tmp/TestFunctionalparallelMountCmdany-port525922369/001/test-1726482987895365534
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-553844 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (249.245066ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 16 10:36 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 16 10:36 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 16 10:36 test-1726482987895365534
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 ssh cat /mount-9p/test-1726482987895365534
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-553844 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:148: (dbg) Non-zero exit: kubectl --context functional-553844 replace --force -f testdata/busybox-mount-test.yaml: fork/exec /usr/local/bin/kubectl: exec format error (471.814µs)
functional_test_mount_test.go:150: failed to 'kubectl replace' for busybox-mount-test. args "kubectl --context functional-553844 replace --force -f testdata/busybox-mount-test.yaml" : fork/exec /usr/local/bin/kubectl: exec format error
functional_test_mount_test.go:80: "TestFunctional/parallel/MountCmd/any-port" failed, getting debug info...
functional_test_mount_test.go:81: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:81: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-553844 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (222.024219ms)

                                                
                                                
-- stdout --
	192.168.39.1 on /mount-9p type 9p (rw,relatime,sync,dirsync,dfltuid=1000,dfltgid=1000,access=any,msize=65536,trans=tcp,noextend,port=37879)
	total 2
	-rw-r--r-- 1 docker docker 24 Sep 16 10:36 created-by-test
	-rw-r--r-- 1 docker docker 24 Sep 16 10:36 created-by-test-removed-by-pod
	-rw-r--r-- 1 docker docker 24 Sep 16 10:36 test-1726482987895365534
	cat: /mount-9p/pod-dates: No such file or directory

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:83: debugging command "out/minikube-linux-amd64 -p functional-553844 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-553844 /tmp/TestFunctionalparallelMountCmdany-port525922369/001:/mount-9p --alsologtostderr -v=1] ...
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-amd64 mount -p functional-553844 /tmp/TestFunctionalparallelMountCmdany-port525922369/001:/mount-9p --alsologtostderr -v=1] stdout:
* Mounting host path /tmp/TestFunctionalparallelMountCmdany-port525922369/001 into VM as /mount-9p ...
- Mount type:   9p
- User ID:      docker
- Group ID:     docker
- Version:      9p2000.L
- Message Size: 262144
- Options:      map[]
- Bind Address: 192.168.39.1:37879
* Userspace file server: ufs starting
* Successfully mounted /tmp/TestFunctionalparallelMountCmdany-port525922369/001 to /mount-9p

                                                
                                                
* NOTE: This process must stay alive for the mount to be accessible ...
* Unmounting /mount-9p ...

                                                
                                                

                                                
                                                
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-amd64 mount -p functional-553844 /tmp/TestFunctionalparallelMountCmdany-port525922369/001:/mount-9p --alsologtostderr -v=1] stderr:
I0916 10:36:27.954924   19330 out.go:345] Setting OutFile to fd 1 ...
I0916 10:36:27.955362   19330 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 10:36:27.955377   19330 out.go:358] Setting ErrFile to fd 2...
I0916 10:36:27.955384   19330 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 10:36:27.955859   19330 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3851/.minikube/bin
I0916 10:36:27.956209   19330 mustload.go:65] Loading cluster: functional-553844
I0916 10:36:27.957010   19330 config.go:182] Loaded profile config "functional-553844": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0916 10:36:27.958038   19330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0916 10:36:27.958091   19330 main.go:141] libmachine: Launching plugin server for driver kvm2
I0916 10:36:27.974877   19330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42283
I0916 10:36:27.975400   19330 main.go:141] libmachine: () Calling .GetVersion
I0916 10:36:27.976085   19330 main.go:141] libmachine: Using API Version  1
I0916 10:36:27.976102   19330 main.go:141] libmachine: () Calling .SetConfigRaw
I0916 10:36:27.976673   19330 main.go:141] libmachine: () Calling .GetMachineName
I0916 10:36:27.976890   19330 main.go:141] libmachine: (functional-553844) Calling .GetState
I0916 10:36:27.979874   19330 host.go:66] Checking if "functional-553844" exists ...
I0916 10:36:27.980238   19330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0916 10:36:27.980282   19330 main.go:141] libmachine: Launching plugin server for driver kvm2
I0916 10:36:27.998093   19330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36753
I0916 10:36:27.999101   19330 main.go:141] libmachine: () Calling .GetVersion
I0916 10:36:27.999783   19330 main.go:141] libmachine: Using API Version  1
I0916 10:36:27.999801   19330 main.go:141] libmachine: () Calling .SetConfigRaw
I0916 10:36:28.000476   19330 main.go:141] libmachine: () Calling .GetMachineName
I0916 10:36:28.000720   19330 main.go:141] libmachine: (functional-553844) Calling .DriverName
I0916 10:36:28.000881   19330 main.go:141] libmachine: (functional-553844) Calling .DriverName
I0916 10:36:28.001068   19330 main.go:141] libmachine: (functional-553844) Calling .GetIP
I0916 10:36:28.004348   19330 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
I0916 10:36:28.004687   19330 main.go:141] libmachine: (functional-553844) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3a:6f", ip: ""} in network mk-functional-553844: {Iface:virbr1 ExpiryTime:2024-09-16 11:33:58 +0000 UTC Type:0 Mac:52:54:00:9d:3a:6f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-553844 Clientid:01:52:54:00:9d:3a:6f}
I0916 10:36:28.004748   19330 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined IP address 192.168.39.230 and MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
I0916 10:36:28.005448   19330 main.go:141] libmachine: (functional-553844) Calling .DriverName
I0916 10:36:28.007897   19330 out.go:177] * Mounting host path /tmp/TestFunctionalparallelMountCmdany-port525922369/001 into VM as /mount-9p ...
I0916 10:36:28.009379   19330 out.go:177]   - Mount type:   9p
I0916 10:36:28.010761   19330 out.go:177]   - User ID:      docker
I0916 10:36:28.012115   19330 out.go:177]   - Group ID:     docker
I0916 10:36:28.013435   19330 out.go:177]   - Version:      9p2000.L
I0916 10:36:28.014701   19330 out.go:177]   - Message Size: 262144
I0916 10:36:28.015995   19330 out.go:177]   - Options:      map[]
I0916 10:36:28.017304   19330 out.go:177]   - Bind Address: 192.168.39.1:37879
I0916 10:36:28.018684   19330 out.go:177] * Userspace file server: 
I0916 10:36:28.018821   19330 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f /mount-9p || echo "
I0916 10:36:28.018926   19330 main.go:141] libmachine: (functional-553844) Calling .GetSSHHostname
I0916 10:36:28.022228   19330 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
I0916 10:36:28.022726   19330 main.go:141] libmachine: (functional-553844) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3a:6f", ip: ""} in network mk-functional-553844: {Iface:virbr1 ExpiryTime:2024-09-16 11:33:58 +0000 UTC Type:0 Mac:52:54:00:9d:3a:6f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-553844 Clientid:01:52:54:00:9d:3a:6f}
I0916 10:36:28.022786   19330 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined IP address 192.168.39.230 and MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
I0916 10:36:28.023001   19330 main.go:141] libmachine: (functional-553844) Calling .GetSSHPort
I0916 10:36:28.023201   19330 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
I0916 10:36:28.023412   19330 main.go:141] libmachine: (functional-553844) Calling .GetSSHUsername
I0916 10:36:28.023557   19330 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/functional-553844/id_rsa Username:docker}
I0916 10:36:28.133713   19330 mount.go:180] unmount for /mount-9p ran successfully
I0916 10:36:28.133740   19330 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /mount-9p"
I0916 10:36:28.150721   19330 ssh_runner.go:195] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=37879,trans=tcp,version=9p2000.L 192.168.39.1 /mount-9p"
I0916 10:36:28.194203   19330 main.go:125] stdlog: ufs.go:141 connected
I0916 10:36:28.194635   19330 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.230:46256 Tversion tag 65535 msize 65536 version '9P2000.L'
I0916 10:36:28.194685   19330 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.230:46256 Rversion tag 65535 msize 65536 version '9P2000'
I0916 10:36:28.195224   19330 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.230:46256 Tattach tag 0 fid 0 afid 4294967295 uname 'nobody' nuname 0 aname ''
I0916 10:36:28.195356   19330 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.230:46256 Rattach tag 0 aqid (20fa077 fa67bb73 'd')
I0916 10:36:28.195632   19330 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.230:46256 Tstat tag 0 fid 0
I0916 10:36:28.195738   19330 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.230:46256 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa077 fa67bb73 'd') m d775 at 0 mt 1726482987 l 4096 t 0 d 0 ext )
I0916 10:36:28.202615   19330 lock.go:50] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/.mount-process: {Name:mk4476c3bce178f7b566eb19dfc31cf749ea40e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0916 10:36:28.202798   19330 mount.go:105] mount successful: ""
I0916 10:36:28.204814   19330 out.go:177] * Successfully mounted /tmp/TestFunctionalparallelMountCmdany-port525922369/001 to /mount-9p
I0916 10:36:28.206237   19330 out.go:201] 
I0916 10:36:28.207454   19330 out.go:177] * NOTE: This process must stay alive for the mount to be accessible ...
I0916 10:36:28.998698   19330 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.230:46256 Tstat tag 0 fid 0
I0916 10:36:28.998837   19330 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.230:46256 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa077 fa67bb73 'd') m d775 at 0 mt 1726482987 l 4096 t 0 d 0 ext )
I0916 10:36:29.000901   19330 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.230:46256 Twalk tag 0 fid 0 newfid 1 
I0916 10:36:29.000952   19330 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.230:46256 Rwalk tag 0 
I0916 10:36:29.001165   19330 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.230:46256 Topen tag 0 fid 1 mode 0
I0916 10:36:29.001221   19330 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.230:46256 Ropen tag 0 qid (20fa077 fa67bb73 'd') iounit 0
I0916 10:36:29.001392   19330 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.230:46256 Tstat tag 0 fid 0
I0916 10:36:29.001496   19330 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.230:46256 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa077 fa67bb73 'd') m d775 at 0 mt 1726482987 l 4096 t 0 d 0 ext )
I0916 10:36:29.001760   19330 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.230:46256 Tread tag 0 fid 1 offset 0 count 65512
I0916 10:36:29.001953   19330 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.230:46256 Rread tag 0 count 258
I0916 10:36:29.002182   19330 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.230:46256 Tread tag 0 fid 1 offset 258 count 65254
I0916 10:36:29.002213   19330 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.230:46256 Rread tag 0 count 0
I0916 10:36:29.002433   19330 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.230:46256 Tread tag 0 fid 1 offset 258 count 65512
I0916 10:36:29.002465   19330 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.230:46256 Rread tag 0 count 0
I0916 10:36:29.002686   19330 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.230:46256 Twalk tag 0 fid 0 newfid 2 0:'test-1726482987895365534' 
I0916 10:36:29.002720   19330 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.230:46256 Rwalk tag 0 (20fa07a fa67bb73 '') 
I0916 10:36:29.002936   19330 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.230:46256 Tstat tag 0 fid 2
I0916 10:36:29.003056   19330 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.230:46256 Rstat tag 0 st ('test-1726482987895365534' 'jenkins' 'balintp' '' q (20fa07a fa67bb73 '') m 644 at 0 mt 1726482987 l 24 t 0 d 0 ext )
I0916 10:36:29.003366   19330 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.230:46256 Tstat tag 0 fid 2
I0916 10:36:29.003452   19330 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.230:46256 Rstat tag 0 st ('test-1726482987895365534' 'jenkins' 'balintp' '' q (20fa07a fa67bb73 '') m 644 at 0 mt 1726482987 l 24 t 0 d 0 ext )
I0916 10:36:29.003691   19330 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.230:46256 Tclunk tag 0 fid 2
I0916 10:36:29.003740   19330 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.230:46256 Rclunk tag 0
I0916 10:36:29.003957   19330 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.230:46256 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I0916 10:36:29.003997   19330 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.230:46256 Rwalk tag 0 (20fa079 fa67bb73 '') 
I0916 10:36:29.004269   19330 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.230:46256 Tstat tag 0 fid 2
I0916 10:36:29.004365   19330 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.230:46256 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa079 fa67bb73 '') m 644 at 0 mt 1726482987 l 24 t 0 d 0 ext )
I0916 10:36:29.004564   19330 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.230:46256 Tstat tag 0 fid 2
I0916 10:36:29.004682   19330 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.230:46256 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa079 fa67bb73 '') m 644 at 0 mt 1726482987 l 24 t 0 d 0 ext )
I0916 10:36:29.004950   19330 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.230:46256 Tclunk tag 0 fid 2
I0916 10:36:29.004980   19330 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.230:46256 Rclunk tag 0
I0916 10:36:29.005362   19330 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.230:46256 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I0916 10:36:29.005415   19330 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.230:46256 Rwalk tag 0 (20fa078 fa67bb73 '') 
I0916 10:36:29.005597   19330 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.230:46256 Tstat tag 0 fid 2
I0916 10:36:29.005683   19330 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.230:46256 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa078 fa67bb73 '') m 644 at 0 mt 1726482987 l 24 t 0 d 0 ext )
I0916 10:36:29.005877   19330 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.230:46256 Tstat tag 0 fid 2
I0916 10:36:29.005967   19330 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.230:46256 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa078 fa67bb73 '') m 644 at 0 mt 1726482987 l 24 t 0 d 0 ext )
I0916 10:36:29.006203   19330 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.230:46256 Tclunk tag 0 fid 2
I0916 10:36:29.006230   19330 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.230:46256 Rclunk tag 0
I0916 10:36:29.006433   19330 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.230:46256 Tread tag 0 fid 1 offset 258 count 65512
I0916 10:36:29.006475   19330 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.230:46256 Rread tag 0 count 0
I0916 10:36:29.006666   19330 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.230:46256 Tclunk tag 0 fid 1
I0916 10:36:29.006704   19330 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.230:46256 Rclunk tag 0
I0916 10:36:29.231701   19330 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.230:46256 Twalk tag 0 fid 0 newfid 1 0:'test-1726482987895365534' 
I0916 10:36:29.231775   19330 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.230:46256 Rwalk tag 0 (20fa07a fa67bb73 '') 
I0916 10:36:29.232015   19330 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.230:46256 Tstat tag 0 fid 1
I0916 10:36:29.232135   19330 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.230:46256 Rstat tag 0 st ('test-1726482987895365534' 'jenkins' 'balintp' '' q (20fa07a fa67bb73 '') m 644 at 0 mt 1726482987 l 24 t 0 d 0 ext )
I0916 10:36:29.232408   19330 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.230:46256 Twalk tag 0 fid 1 newfid 2 
I0916 10:36:29.232455   19330 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.230:46256 Rwalk tag 0 
I0916 10:36:29.232622   19330 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.230:46256 Topen tag 0 fid 2 mode 0
I0916 10:36:29.232685   19330 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.230:46256 Ropen tag 0 qid (20fa07a fa67bb73 '') iounit 0
I0916 10:36:29.232861   19330 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.230:46256 Tstat tag 0 fid 1
I0916 10:36:29.232959   19330 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.230:46256 Rstat tag 0 st ('test-1726482987895365534' 'jenkins' 'balintp' '' q (20fa07a fa67bb73 '') m 644 at 0 mt 1726482987 l 24 t 0 d 0 ext )
I0916 10:36:29.233156   19330 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.230:46256 Tread tag 0 fid 2 offset 0 count 65512
I0916 10:36:29.233212   19330 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.230:46256 Rread tag 0 count 24
I0916 10:36:29.233358   19330 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.230:46256 Tread tag 0 fid 2 offset 24 count 65512
I0916 10:36:29.233412   19330 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.230:46256 Rread tag 0 count 0
I0916 10:36:29.233609   19330 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.230:46256 Tread tag 0 fid 2 offset 24 count 65512
I0916 10:36:29.233651   19330 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.230:46256 Rread tag 0 count 0
I0916 10:36:29.233846   19330 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.230:46256 Tclunk tag 0 fid 2
I0916 10:36:29.233887   19330 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.230:46256 Rclunk tag 0
I0916 10:36:29.234028   19330 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.230:46256 Tclunk tag 0 fid 1
I0916 10:36:29.234061   19330 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.230:46256 Rclunk tag 0
I0916 10:36:29.444957   19330 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.230:46256 Tstat tag 0 fid 0
I0916 10:36:29.445091   19330 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.230:46256 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa077 fa67bb73 'd') m d775 at 0 mt 1726482987 l 4096 t 0 d 0 ext )
I0916 10:36:29.452172   19330 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.230:46256 Twalk tag 0 fid 0 newfid 1 
I0916 10:36:29.452242   19330 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.230:46256 Rwalk tag 0 
I0916 10:36:29.452430   19330 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.230:46256 Topen tag 0 fid 1 mode 0
I0916 10:36:29.452501   19330 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.230:46256 Ropen tag 0 qid (20fa077 fa67bb73 'd') iounit 0
I0916 10:36:29.452745   19330 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.230:46256 Tstat tag 0 fid 0
I0916 10:36:29.452888   19330 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.230:46256 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa077 fa67bb73 'd') m d775 at 0 mt 1726482987 l 4096 t 0 d 0 ext )
I0916 10:36:29.453347   19330 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.230:46256 Tread tag 0 fid 1 offset 0 count 65512
I0916 10:36:29.453495   19330 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.230:46256 Rread tag 0 count 258
I0916 10:36:29.453866   19330 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.230:46256 Tread tag 0 fid 1 offset 258 count 65254
I0916 10:36:29.453911   19330 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.230:46256 Rread tag 0 count 0
I0916 10:36:29.454183   19330 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.230:46256 Tread tag 0 fid 1 offset 258 count 65512
I0916 10:36:29.454223   19330 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.230:46256 Rread tag 0 count 0
I0916 10:36:29.454470   19330 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.230:46256 Twalk tag 0 fid 0 newfid 2 0:'test-1726482987895365534' 
I0916 10:36:29.454514   19330 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.230:46256 Rwalk tag 0 (20fa07a fa67bb73 '') 
I0916 10:36:29.454748   19330 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.230:46256 Tstat tag 0 fid 2
I0916 10:36:29.454840   19330 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.230:46256 Rstat tag 0 st ('test-1726482987895365534' 'jenkins' 'balintp' '' q (20fa07a fa67bb73 '') m 644 at 0 mt 1726482987 l 24 t 0 d 0 ext )
I0916 10:36:29.455119   19330 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.230:46256 Tstat tag 0 fid 2
I0916 10:36:29.455227   19330 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.230:46256 Rstat tag 0 st ('test-1726482987895365534' 'jenkins' 'balintp' '' q (20fa07a fa67bb73 '') m 644 at 0 mt 1726482987 l 24 t 0 d 0 ext )
I0916 10:36:29.455615   19330 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.230:46256 Tclunk tag 0 fid 2
I0916 10:36:29.455649   19330 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.230:46256 Rclunk tag 0
I0916 10:36:29.455894   19330 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.230:46256 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I0916 10:36:29.455932   19330 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.230:46256 Rwalk tag 0 (20fa079 fa67bb73 '') 
I0916 10:36:29.456133   19330 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.230:46256 Tstat tag 0 fid 2
I0916 10:36:29.456235   19330 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.230:46256 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa079 fa67bb73 '') m 644 at 0 mt 1726482987 l 24 t 0 d 0 ext )
I0916 10:36:29.456443   19330 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.230:46256 Tstat tag 0 fid 2
I0916 10:36:29.456514   19330 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.230:46256 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa079 fa67bb73 '') m 644 at 0 mt 1726482987 l 24 t 0 d 0 ext )
I0916 10:36:29.456711   19330 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.230:46256 Tclunk tag 0 fid 2
I0916 10:36:29.456738   19330 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.230:46256 Rclunk tag 0
I0916 10:36:29.456954   19330 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.230:46256 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I0916 10:36:29.457016   19330 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.230:46256 Rwalk tag 0 (20fa078 fa67bb73 '') 
I0916 10:36:29.457305   19330 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.230:46256 Tstat tag 0 fid 2
I0916 10:36:29.457409   19330 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.230:46256 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa078 fa67bb73 '') m 644 at 0 mt 1726482987 l 24 t 0 d 0 ext )
I0916 10:36:29.457620   19330 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.230:46256 Tstat tag 0 fid 2
I0916 10:36:29.457708   19330 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.230:46256 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa078 fa67bb73 '') m 644 at 0 mt 1726482987 l 24 t 0 d 0 ext )
I0916 10:36:29.457917   19330 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.230:46256 Tclunk tag 0 fid 2
I0916 10:36:29.457951   19330 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.230:46256 Rclunk tag 0
I0916 10:36:29.458160   19330 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.230:46256 Tread tag 0 fid 1 offset 258 count 65512
I0916 10:36:29.458196   19330 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.230:46256 Rread tag 0 count 0
I0916 10:36:29.458447   19330 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.230:46256 Tclunk tag 0 fid 1
I0916 10:36:29.458478   19330 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.230:46256 Rclunk tag 0
I0916 10:36:29.460943   19330 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.230:46256 Twalk tag 0 fid 0 newfid 1 0:'pod-dates' 
I0916 10:36:29.461004   19330 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.230:46256 Rerror tag 0 ename 'file not found' ecode 0
I0916 10:36:29.706621   19330 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.230:46256 Tclunk tag 0 fid 0
I0916 10:36:29.706694   19330 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.230:46256 Rclunk tag 0
I0916 10:36:29.710937   19330 main.go:125] stdlog: ufs.go:147 disconnected
I0916 10:36:29.929866   19330 out.go:177] * Unmounting /mount-9p ...
I0916 10:36:29.931142   19330 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f /mount-9p || echo "
I0916 10:36:29.971520   19330 mount.go:180] unmount for /mount-9p ran successfully
I0916 10:36:29.971635   19330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/.mount-process: {Name:mk4476c3bce178f7b566eb19dfc31cf749ea40e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0916 10:36:29.973229   19330 out.go:201] 
W0916 10:36:29.974474   19330 out.go:270] X Exiting due to MK_INTERRUPTED: Received terminated signal
X Exiting due to MK_INTERRUPTED: Received terminated signal
I0916 10:36:29.975623   19330 out.go:201] 
--- FAIL: TestFunctional/parallel/MountCmd/any-port (2.13s)
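The MK_INTERRUPTED exit above shows the harness receiving a termination signal while it was still tearing down the 9p mount, rather than the mount itself failing. A minimal manual re-check of the same kind of mount, assuming the functional-553844 profile and the /mount-9p target from the log (the host source directory below is illustrative only), might look like:

	# expose a host directory to the guest over 9p, as the test does
	out/minikube-linux-amd64 -p functional-553844 mount /tmp/mount-test:/mount-9p &
	# confirm the mount is visible inside the guest
	out/minikube-linux-amd64 -p functional-553844 ssh -- "findmnt -T /mount-9p"
	# tear it down the same way the harness does
	out/minikube-linux-amd64 -p functional-553844 ssh -- "sudo umount -f /mount-9p"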

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 service hello-node --url
functional_test.go:1559: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-553844 service hello-node --url: exit status 115 (272.179776ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_NOT_FOUND: Service 'hello-node' was not found in 'default' namespace.
	You may select another namespace by using 'minikube service hello-node -n <namespace>'. Or list out all the services using 'minikube service list'

                                                
                                                
** /stderr **
functional_test.go:1561: failed to get service url. args: "out/minikube-linux-amd64 -p functional-553844 service hello-node --url": exit status 115
functional_test.go:1565: found endpoint for hello-node: 
functional_test.go:1573: expected scheme to be -"http"- got scheme: *""*
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.27s)
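The SVC_NOT_FOUND exit above means the hello-node service was never created in the default namespace (the companion ServiceCmd/DeployApp step is also in the failed-test list), so --url had no endpoint to return. A hedged manual check, reusing the profile and service name from this test, would be:

	# confirm the service exists before asking minikube for a URL
	kubectl --context functional-553844 get svc hello-node -n default
	# list everything minikube can see, as the error message itself suggests
	out/minikube-linux-amd64 -p functional-553844 service list
	out/minikube-linux-amd64 -p functional-553844 service hello-node --url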

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (2.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-244475 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-244475 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": fork/exec /usr/local/bin/kubectl: exec format error (546.117µs)
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-244475 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": fork/exec /usr/local/bin/kubectl: exec format error
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-244475 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
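The repeated "fork/exec /usr/local/bin/kubectl: exec format error" in this run indicates the kubectl binary on the agent cannot be executed at all, typically because it was built for a different architecture than the linux/amd64 host (or is truncated), so every test that shells out to kubectl fails before it ever reaches the cluster. An illustrative sanity check on the agent, with the path taken from the error text, would be:

	# an exec format error usually means a wrong-architecture or corrupted binary
	file /usr/local/bin/kubectl    # expect: ELF 64-bit LSB executable, x86-64
	uname -m                       # expect: x86_64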
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-244475 -n ha-244475
helpers_test.go:244: <<< TestMultiControlPlane/serial/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/NodeLabels]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-244475 logs -n 25: (1.56614921s)
helpers_test.go:252: TestMultiControlPlane/serial/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| image   | functional-553844 image build -t     | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|         | localhost/my-image:functional-553844 |                   |         |         |                     |                     |
	|         | testdata/build --alsologtostderr     |                   |         |         |                     |                     |
	| image   | functional-553844 image ls           | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	| delete  | -p functional-553844                 | functional-553844 | jenkins | v1.34.0 | 16 Sep 24 10:38 UTC | 16 Sep 24 10:38 UTC |
	| start   | -p ha-244475 --wait=true             | ha-244475         | jenkins | v1.34.0 | 16 Sep 24 10:38 UTC | 16 Sep 24 10:41 UTC |
	|         | --memory=2200 --ha                   |                   |         |         |                     |                     |
	|         | -v=7 --alsologtostderr               |                   |         |         |                     |                     |
	|         | --driver=kvm2                        |                   |         |         |                     |                     |
	|         | --container-runtime=crio             |                   |         |         |                     |                     |
	| kubectl | -p ha-244475 -- apply -f             | ha-244475         | jenkins | v1.34.0 | 16 Sep 24 10:41 UTC | 16 Sep 24 10:41 UTC |
	|         | ./testdata/ha/ha-pod-dns-test.yaml   |                   |         |         |                     |                     |
	| kubectl | -p ha-244475 -- rollout status       | ha-244475         | jenkins | v1.34.0 | 16 Sep 24 10:41 UTC | 16 Sep 24 10:41 UTC |
	|         | deployment/busybox                   |                   |         |         |                     |                     |
	| kubectl | -p ha-244475 -- get pods -o          | ha-244475         | jenkins | v1.34.0 | 16 Sep 24 10:41 UTC | 16 Sep 24 10:41 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-244475 -- get pods -o          | ha-244475         | jenkins | v1.34.0 | 16 Sep 24 10:41 UTC | 16 Sep 24 10:41 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |                   |         |         |                     |                     |
	| kubectl | -p ha-244475 -- exec                 | ha-244475         | jenkins | v1.34.0 | 16 Sep 24 10:41 UTC | 16 Sep 24 10:41 UTC |
	|         | busybox-7dff88458-7bhqg --           |                   |         |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |         |         |                     |                     |
	| kubectl | -p ha-244475 -- exec                 | ha-244475         | jenkins | v1.34.0 | 16 Sep 24 10:41 UTC | 16 Sep 24 10:41 UTC |
	|         | busybox-7dff88458-d4m5s --           |                   |         |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |         |         |                     |                     |
	| kubectl | -p ha-244475 -- exec                 | ha-244475         | jenkins | v1.34.0 | 16 Sep 24 10:41 UTC | 16 Sep 24 10:41 UTC |
	|         | busybox-7dff88458-t6fmb --           |                   |         |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |         |         |                     |                     |
	| kubectl | -p ha-244475 -- exec                 | ha-244475         | jenkins | v1.34.0 | 16 Sep 24 10:41 UTC | 16 Sep 24 10:41 UTC |
	|         | busybox-7dff88458-7bhqg --           |                   |         |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |         |         |                     |                     |
	| kubectl | -p ha-244475 -- exec                 | ha-244475         | jenkins | v1.34.0 | 16 Sep 24 10:41 UTC | 16 Sep 24 10:41 UTC |
	|         | busybox-7dff88458-d4m5s --           |                   |         |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |         |         |                     |                     |
	| kubectl | -p ha-244475 -- exec                 | ha-244475         | jenkins | v1.34.0 | 16 Sep 24 10:41 UTC | 16 Sep 24 10:41 UTC |
	|         | busybox-7dff88458-t6fmb --           |                   |         |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |         |         |                     |                     |
	| kubectl | -p ha-244475 -- exec                 | ha-244475         | jenkins | v1.34.0 | 16 Sep 24 10:41 UTC | 16 Sep 24 10:41 UTC |
	|         | busybox-7dff88458-7bhqg -- nslookup  |                   |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |         |         |                     |                     |
	| kubectl | -p ha-244475 -- exec                 | ha-244475         | jenkins | v1.34.0 | 16 Sep 24 10:41 UTC | 16 Sep 24 10:41 UTC |
	|         | busybox-7dff88458-d4m5s -- nslookup  |                   |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |         |         |                     |                     |
	| kubectl | -p ha-244475 -- exec                 | ha-244475         | jenkins | v1.34.0 | 16 Sep 24 10:41 UTC | 16 Sep 24 10:41 UTC |
	|         | busybox-7dff88458-t6fmb -- nslookup  |                   |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |         |         |                     |                     |
	| kubectl | -p ha-244475 -- get pods -o          | ha-244475         | jenkins | v1.34.0 | 16 Sep 24 10:41 UTC | 16 Sep 24 10:41 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |                   |         |         |                     |                     |
	| kubectl | -p ha-244475 -- exec                 | ha-244475         | jenkins | v1.34.0 | 16 Sep 24 10:41 UTC | 16 Sep 24 10:41 UTC |
	|         | busybox-7dff88458-7bhqg              |                   |         |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |         |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |         |         |                     |                     |
	| kubectl | -p ha-244475 -- exec                 | ha-244475         | jenkins | v1.34.0 | 16 Sep 24 10:41 UTC | 16 Sep 24 10:41 UTC |
	|         | busybox-7dff88458-7bhqg -- sh        |                   |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1            |                   |         |         |                     |                     |
	| kubectl | -p ha-244475 -- exec                 | ha-244475         | jenkins | v1.34.0 | 16 Sep 24 10:41 UTC | 16 Sep 24 10:41 UTC |
	|         | busybox-7dff88458-d4m5s              |                   |         |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |         |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |         |         |                     |                     |
	| kubectl | -p ha-244475 -- exec                 | ha-244475         | jenkins | v1.34.0 | 16 Sep 24 10:41 UTC | 16 Sep 24 10:41 UTC |
	|         | busybox-7dff88458-d4m5s -- sh        |                   |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1            |                   |         |         |                     |                     |
	| kubectl | -p ha-244475 -- exec                 | ha-244475         | jenkins | v1.34.0 | 16 Sep 24 10:41 UTC | 16 Sep 24 10:41 UTC |
	|         | busybox-7dff88458-t6fmb              |                   |         |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |         |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |         |         |                     |                     |
	| kubectl | -p ha-244475 -- exec                 | ha-244475         | jenkins | v1.34.0 | 16 Sep 24 10:41 UTC | 16 Sep 24 10:41 UTC |
	|         | busybox-7dff88458-t6fmb -- sh        |                   |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1            |                   |         |         |                     |                     |
	| node    | add -p ha-244475 -v=7                | ha-244475         | jenkins | v1.34.0 | 16 Sep 24 10:41 UTC | 16 Sep 24 10:42 UTC |
	|         | --alsologtostderr                    |                   |         |         |                     |                     |
	|---------|--------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:38:12
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:38:12.200712   22121 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:38:12.200823   22121 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:38:12.200832   22121 out.go:358] Setting ErrFile to fd 2...
	I0916 10:38:12.200836   22121 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:38:12.201073   22121 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3851/.minikube/bin
	I0916 10:38:12.201666   22121 out.go:352] Setting JSON to false
	I0916 10:38:12.202552   22121 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1242,"bootTime":1726481850,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:38:12.202649   22121 start.go:139] virtualization: kvm guest
	I0916 10:38:12.204909   22121 out.go:177] * [ha-244475] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:38:12.206153   22121 notify.go:220] Checking for updates...
	I0916 10:38:12.206162   22121 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:38:12.207508   22121 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:38:12.208635   22121 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3851/kubeconfig
	I0916 10:38:12.209868   22121 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 10:38:12.211054   22121 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:38:12.212157   22121 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:38:12.213282   22121 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:38:12.247704   22121 out.go:177] * Using the kvm2 driver based on user configuration
	I0916 10:38:12.248934   22121 start.go:297] selected driver: kvm2
	I0916 10:38:12.248946   22121 start.go:901] validating driver "kvm2" against <nil>
	I0916 10:38:12.248965   22121 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:38:12.249634   22121 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:38:12.249717   22121 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19651-3851/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0916 10:38:12.264515   22121 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0916 10:38:12.264557   22121 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 10:38:12.264783   22121 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:38:12.264813   22121 cni.go:84] Creating CNI manager for ""
	I0916 10:38:12.264852   22121 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0916 10:38:12.264862   22121 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 10:38:12.264904   22121 start.go:340] cluster config:
	{Name:ha-244475 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-244475 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:38:12.264991   22121 iso.go:125] acquiring lock: {Name:mk8165b793e44378487d96b0de120a258f46e187 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:38:12.266715   22121 out.go:177] * Starting "ha-244475" primary control-plane node in "ha-244475" cluster
	I0916 10:38:12.267811   22121 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:38:12.267865   22121 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0916 10:38:12.267877   22121 cache.go:56] Caching tarball of preloaded images
	I0916 10:38:12.267958   22121 preload.go:172] Found /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 10:38:12.267971   22121 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 10:38:12.268264   22121 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/config.json ...
	I0916 10:38:12.268287   22121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/config.json: {Name:mk850b432e3492662a38e4b0f11a836bf86e02aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:38:12.268433   22121 start.go:360] acquireMachinesLock for ha-244475: {Name:mk413037138532b69f77e412c3960796c09316f1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:38:12.268468   22121 start.go:364] duration metric: took 18.641µs to acquireMachinesLock for "ha-244475"
	I0916 10:38:12.268490   22121 start.go:93] Provisioning new machine with config: &{Name:ha-244475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-244475 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 10:38:12.268553   22121 start.go:125] createHost starting for "" (driver="kvm2")
	I0916 10:38:12.270059   22121 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0916 10:38:12.270184   22121 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:38:12.270223   22121 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:38:12.284586   22121 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36411
	I0916 10:38:12.285055   22121 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:38:12.285574   22121 main.go:141] libmachine: Using API Version  1
	I0916 10:38:12.285594   22121 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:38:12.285978   22121 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:38:12.286124   22121 main.go:141] libmachine: (ha-244475) Calling .GetMachineName
	I0916 10:38:12.286277   22121 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:38:12.286414   22121 start.go:159] libmachine.API.Create for "ha-244475" (driver="kvm2")
	I0916 10:38:12.286438   22121 client.go:168] LocalClient.Create starting
	I0916 10:38:12.286467   22121 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem
	I0916 10:38:12.286500   22121 main.go:141] libmachine: Decoding PEM data...
	I0916 10:38:12.286515   22121 main.go:141] libmachine: Parsing certificate...
	I0916 10:38:12.286575   22121 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem
	I0916 10:38:12.286594   22121 main.go:141] libmachine: Decoding PEM data...
	I0916 10:38:12.286606   22121 main.go:141] libmachine: Parsing certificate...
	I0916 10:38:12.286627   22121 main.go:141] libmachine: Running pre-create checks...
	I0916 10:38:12.286639   22121 main.go:141] libmachine: (ha-244475) Calling .PreCreateCheck
	I0916 10:38:12.286973   22121 main.go:141] libmachine: (ha-244475) Calling .GetConfigRaw
	I0916 10:38:12.287297   22121 main.go:141] libmachine: Creating machine...
	I0916 10:38:12.287309   22121 main.go:141] libmachine: (ha-244475) Calling .Create
	I0916 10:38:12.287457   22121 main.go:141] libmachine: (ha-244475) Creating KVM machine...
	I0916 10:38:12.288681   22121 main.go:141] libmachine: (ha-244475) DBG | found existing default KVM network
	I0916 10:38:12.289333   22121 main.go:141] libmachine: (ha-244475) DBG | I0916 10:38:12.289200   22144 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002091e0}
	I0916 10:38:12.289353   22121 main.go:141] libmachine: (ha-244475) DBG | created network xml: 
	I0916 10:38:12.289365   22121 main.go:141] libmachine: (ha-244475) DBG | <network>
	I0916 10:38:12.289372   22121 main.go:141] libmachine: (ha-244475) DBG |   <name>mk-ha-244475</name>
	I0916 10:38:12.289384   22121 main.go:141] libmachine: (ha-244475) DBG |   <dns enable='no'/>
	I0916 10:38:12.289392   22121 main.go:141] libmachine: (ha-244475) DBG |   
	I0916 10:38:12.289404   22121 main.go:141] libmachine: (ha-244475) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0916 10:38:12.289414   22121 main.go:141] libmachine: (ha-244475) DBG |     <dhcp>
	I0916 10:38:12.289426   22121 main.go:141] libmachine: (ha-244475) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0916 10:38:12.289440   22121 main.go:141] libmachine: (ha-244475) DBG |     </dhcp>
	I0916 10:38:12.289470   22121 main.go:141] libmachine: (ha-244475) DBG |   </ip>
	I0916 10:38:12.289491   22121 main.go:141] libmachine: (ha-244475) DBG |   
	I0916 10:38:12.289503   22121 main.go:141] libmachine: (ha-244475) DBG | </network>
	I0916 10:38:12.289512   22121 main.go:141] libmachine: (ha-244475) DBG | 
	I0916 10:38:12.294272   22121 main.go:141] libmachine: (ha-244475) DBG | trying to create private KVM network mk-ha-244475 192.168.39.0/24...
	I0916 10:38:12.356537   22121 main.go:141] libmachine: (ha-244475) Setting up store path in /home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475 ...
	I0916 10:38:12.356564   22121 main.go:141] libmachine: (ha-244475) Building disk image from file:///home/jenkins/minikube-integration/19651-3851/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso
	I0916 10:38:12.356583   22121 main.go:141] libmachine: (ha-244475) DBG | private KVM network mk-ha-244475 192.168.39.0/24 created
	I0916 10:38:12.356612   22121 main.go:141] libmachine: (ha-244475) DBG | I0916 10:38:12.356478   22144 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 10:38:12.356634   22121 main.go:141] libmachine: (ha-244475) Downloading /home/jenkins/minikube-integration/19651-3851/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19651-3851/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso...
	I0916 10:38:12.603819   22121 main.go:141] libmachine: (ha-244475) DBG | I0916 10:38:12.603693   22144 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/id_rsa...
	I0916 10:38:12.714132   22121 main.go:141] libmachine: (ha-244475) DBG | I0916 10:38:12.713994   22144 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/ha-244475.rawdisk...
	I0916 10:38:12.714162   22121 main.go:141] libmachine: (ha-244475) DBG | Writing magic tar header
	I0916 10:38:12.714174   22121 main.go:141] libmachine: (ha-244475) DBG | Writing SSH key tar header
	I0916 10:38:12.714185   22121 main.go:141] libmachine: (ha-244475) DBG | I0916 10:38:12.714123   22144 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475 ...
	I0916 10:38:12.714208   22121 main.go:141] libmachine: (ha-244475) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475
	I0916 10:38:12.714276   22121 main.go:141] libmachine: (ha-244475) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475 (perms=drwx------)
	I0916 10:38:12.714299   22121 main.go:141] libmachine: (ha-244475) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851/.minikube/machines (perms=drwxr-xr-x)
	I0916 10:38:12.714310   22121 main.go:141] libmachine: (ha-244475) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851/.minikube/machines
	I0916 10:38:12.714346   22121 main.go:141] libmachine: (ha-244475) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851/.minikube (perms=drwxr-xr-x)
	I0916 10:38:12.714364   22121 main.go:141] libmachine: (ha-244475) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 10:38:12.714379   22121 main.go:141] libmachine: (ha-244475) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851 (perms=drwxrwxr-x)
	I0916 10:38:12.714393   22121 main.go:141] libmachine: (ha-244475) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0916 10:38:12.714412   22121 main.go:141] libmachine: (ha-244475) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0916 10:38:12.714424   22121 main.go:141] libmachine: (ha-244475) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851
	I0916 10:38:12.714456   22121 main.go:141] libmachine: (ha-244475) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0916 10:38:12.714472   22121 main.go:141] libmachine: (ha-244475) DBG | Checking permissions on dir: /home/jenkins
	I0916 10:38:12.714480   22121 main.go:141] libmachine: (ha-244475) Creating domain...
	I0916 10:38:12.714493   22121 main.go:141] libmachine: (ha-244475) DBG | Checking permissions on dir: /home
	I0916 10:38:12.714503   22121 main.go:141] libmachine: (ha-244475) DBG | Skipping /home - not owner
	I0916 10:38:12.715516   22121 main.go:141] libmachine: (ha-244475) define libvirt domain using xml: 
	I0916 10:38:12.715535   22121 main.go:141] libmachine: (ha-244475) <domain type='kvm'>
	I0916 10:38:12.715541   22121 main.go:141] libmachine: (ha-244475)   <name>ha-244475</name>
	I0916 10:38:12.715549   22121 main.go:141] libmachine: (ha-244475)   <memory unit='MiB'>2200</memory>
	I0916 10:38:12.715560   22121 main.go:141] libmachine: (ha-244475)   <vcpu>2</vcpu>
	I0916 10:38:12.715567   22121 main.go:141] libmachine: (ha-244475)   <features>
	I0916 10:38:12.715594   22121 main.go:141] libmachine: (ha-244475)     <acpi/>
	I0916 10:38:12.715613   22121 main.go:141] libmachine: (ha-244475)     <apic/>
	I0916 10:38:12.715643   22121 main.go:141] libmachine: (ha-244475)     <pae/>
	I0916 10:38:12.715667   22121 main.go:141] libmachine: (ha-244475)     
	I0916 10:38:12.715677   22121 main.go:141] libmachine: (ha-244475)   </features>
	I0916 10:38:12.715691   22121 main.go:141] libmachine: (ha-244475)   <cpu mode='host-passthrough'>
	I0916 10:38:12.715701   22121 main.go:141] libmachine: (ha-244475)   
	I0916 10:38:12.715709   22121 main.go:141] libmachine: (ha-244475)   </cpu>
	I0916 10:38:12.715717   22121 main.go:141] libmachine: (ha-244475)   <os>
	I0916 10:38:12.715726   22121 main.go:141] libmachine: (ha-244475)     <type>hvm</type>
	I0916 10:38:12.715737   22121 main.go:141] libmachine: (ha-244475)     <boot dev='cdrom'/>
	I0916 10:38:12.715746   22121 main.go:141] libmachine: (ha-244475)     <boot dev='hd'/>
	I0916 10:38:12.715758   22121 main.go:141] libmachine: (ha-244475)     <bootmenu enable='no'/>
	I0916 10:38:12.715788   22121 main.go:141] libmachine: (ha-244475)   </os>
	I0916 10:38:12.715799   22121 main.go:141] libmachine: (ha-244475)   <devices>
	I0916 10:38:12.715810   22121 main.go:141] libmachine: (ha-244475)     <disk type='file' device='cdrom'>
	I0916 10:38:12.715840   22121 main.go:141] libmachine: (ha-244475)       <source file='/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/boot2docker.iso'/>
	I0916 10:38:12.715852   22121 main.go:141] libmachine: (ha-244475)       <target dev='hdc' bus='scsi'/>
	I0916 10:38:12.715861   22121 main.go:141] libmachine: (ha-244475)       <readonly/>
	I0916 10:38:12.715870   22121 main.go:141] libmachine: (ha-244475)     </disk>
	I0916 10:38:12.715875   22121 main.go:141] libmachine: (ha-244475)     <disk type='file' device='disk'>
	I0916 10:38:12.715881   22121 main.go:141] libmachine: (ha-244475)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0916 10:38:12.715891   22121 main.go:141] libmachine: (ha-244475)       <source file='/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/ha-244475.rawdisk'/>
	I0916 10:38:12.715896   22121 main.go:141] libmachine: (ha-244475)       <target dev='hda' bus='virtio'/>
	I0916 10:38:12.715903   22121 main.go:141] libmachine: (ha-244475)     </disk>
	I0916 10:38:12.715907   22121 main.go:141] libmachine: (ha-244475)     <interface type='network'>
	I0916 10:38:12.715914   22121 main.go:141] libmachine: (ha-244475)       <source network='mk-ha-244475'/>
	I0916 10:38:12.715918   22121 main.go:141] libmachine: (ha-244475)       <model type='virtio'/>
	I0916 10:38:12.715925   22121 main.go:141] libmachine: (ha-244475)     </interface>
	I0916 10:38:12.715929   22121 main.go:141] libmachine: (ha-244475)     <interface type='network'>
	I0916 10:38:12.715936   22121 main.go:141] libmachine: (ha-244475)       <source network='default'/>
	I0916 10:38:12.715941   22121 main.go:141] libmachine: (ha-244475)       <model type='virtio'/>
	I0916 10:38:12.715946   22121 main.go:141] libmachine: (ha-244475)     </interface>
	I0916 10:38:12.715950   22121 main.go:141] libmachine: (ha-244475)     <serial type='pty'>
	I0916 10:38:12.715966   22121 main.go:141] libmachine: (ha-244475)       <target port='0'/>
	I0916 10:38:12.715977   22121 main.go:141] libmachine: (ha-244475)     </serial>
	I0916 10:38:12.715987   22121 main.go:141] libmachine: (ha-244475)     <console type='pty'>
	I0916 10:38:12.715998   22121 main.go:141] libmachine: (ha-244475)       <target type='serial' port='0'/>
	I0916 10:38:12.716016   22121 main.go:141] libmachine: (ha-244475)     </console>
	I0916 10:38:12.716026   22121 main.go:141] libmachine: (ha-244475)     <rng model='virtio'>
	I0916 10:38:12.716036   22121 main.go:141] libmachine: (ha-244475)       <backend model='random'>/dev/random</backend>
	I0916 10:38:12.716045   22121 main.go:141] libmachine: (ha-244475)     </rng>
	I0916 10:38:12.716065   22121 main.go:141] libmachine: (ha-244475)     
	I0916 10:38:12.716082   22121 main.go:141] libmachine: (ha-244475)     
	I0916 10:38:12.716090   22121 main.go:141] libmachine: (ha-244475)   </devices>
	I0916 10:38:12.716101   22121 main.go:141] libmachine: (ha-244475) </domain>
	I0916 10:38:12.716111   22121 main.go:141] libmachine: (ha-244475) 
	I0916 10:38:12.720528   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:4e:1b:22 in network default
	I0916 10:38:12.721005   22121 main.go:141] libmachine: (ha-244475) Ensuring networks are active...
	I0916 10:38:12.721018   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:12.721698   22121 main.go:141] libmachine: (ha-244475) Ensuring network default is active
	I0916 10:38:12.722026   22121 main.go:141] libmachine: (ha-244475) Ensuring network mk-ha-244475 is active
	I0916 10:38:12.722616   22121 main.go:141] libmachine: (ha-244475) Getting domain xml...
	I0916 10:38:12.723368   22121 main.go:141] libmachine: (ha-244475) Creating domain...
	I0916 10:38:13.892889   22121 main.go:141] libmachine: (ha-244475) Waiting to get IP...
	I0916 10:38:13.893726   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:13.894130   22121 main.go:141] libmachine: (ha-244475) DBG | unable to find current IP address of domain ha-244475 in network mk-ha-244475
	I0916 10:38:13.894170   22121 main.go:141] libmachine: (ha-244475) DBG | I0916 10:38:13.894127   22144 retry.go:31] will retry after 194.671276ms: waiting for machine to come up
	I0916 10:38:14.090477   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:14.090800   22121 main.go:141] libmachine: (ha-244475) DBG | unable to find current IP address of domain ha-244475 in network mk-ha-244475
	I0916 10:38:14.090825   22121 main.go:141] libmachine: (ha-244475) DBG | I0916 10:38:14.090753   22144 retry.go:31] will retry after 351.659131ms: waiting for machine to come up
	I0916 10:38:14.444409   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:14.444864   22121 main.go:141] libmachine: (ha-244475) DBG | unable to find current IP address of domain ha-244475 in network mk-ha-244475
	I0916 10:38:14.444896   22121 main.go:141] libmachine: (ha-244475) DBG | I0916 10:38:14.444830   22144 retry.go:31] will retry after 382.219059ms: waiting for machine to come up
	I0916 10:38:14.828362   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:14.828800   22121 main.go:141] libmachine: (ha-244475) DBG | unable to find current IP address of domain ha-244475 in network mk-ha-244475
	I0916 10:38:14.828826   22121 main.go:141] libmachine: (ha-244475) DBG | I0916 10:38:14.828748   22144 retry.go:31] will retry after 385.017595ms: waiting for machine to come up
	I0916 10:38:15.215350   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:15.215732   22121 main.go:141] libmachine: (ha-244475) DBG | unable to find current IP address of domain ha-244475 in network mk-ha-244475
	I0916 10:38:15.215758   22121 main.go:141] libmachine: (ha-244475) DBG | I0916 10:38:15.215688   22144 retry.go:31] will retry after 603.255872ms: waiting for machine to come up
	I0916 10:38:15.820323   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:15.820668   22121 main.go:141] libmachine: (ha-244475) DBG | unable to find current IP address of domain ha-244475 in network mk-ha-244475
	I0916 10:38:15.820694   22121 main.go:141] libmachine: (ha-244475) DBG | I0916 10:38:15.820630   22144 retry.go:31] will retry after 768.911433ms: waiting for machine to come up
	I0916 10:38:16.591945   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:16.592337   22121 main.go:141] libmachine: (ha-244475) DBG | unable to find current IP address of domain ha-244475 in network mk-ha-244475
	I0916 10:38:16.592361   22121 main.go:141] libmachine: (ha-244475) DBG | I0916 10:38:16.592300   22144 retry.go:31] will retry after 1.01448771s: waiting for machine to come up
	I0916 10:38:17.607844   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:17.608259   22121 main.go:141] libmachine: (ha-244475) DBG | unable to find current IP address of domain ha-244475 in network mk-ha-244475
	I0916 10:38:17.608281   22121 main.go:141] libmachine: (ha-244475) DBG | I0916 10:38:17.608225   22144 retry.go:31] will retry after 1.028283296s: waiting for machine to come up
	I0916 10:38:18.638495   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:18.638879   22121 main.go:141] libmachine: (ha-244475) DBG | unable to find current IP address of domain ha-244475 in network mk-ha-244475
	I0916 10:38:18.638909   22121 main.go:141] libmachine: (ha-244475) DBG | I0916 10:38:18.638842   22144 retry.go:31] will retry after 1.806716733s: waiting for machine to come up
	I0916 10:38:20.447563   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:20.447961   22121 main.go:141] libmachine: (ha-244475) DBG | unable to find current IP address of domain ha-244475 in network mk-ha-244475
	I0916 10:38:20.447980   22121 main.go:141] libmachine: (ha-244475) DBG | I0916 10:38:20.447880   22144 retry.go:31] will retry after 2.186647075s: waiting for machine to come up
	I0916 10:38:22.636294   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:22.636702   22121 main.go:141] libmachine: (ha-244475) DBG | unable to find current IP address of domain ha-244475 in network mk-ha-244475
	I0916 10:38:22.636728   22121 main.go:141] libmachine: (ha-244475) DBG | I0916 10:38:22.636657   22144 retry.go:31] will retry after 2.089501385s: waiting for machine to come up
	I0916 10:38:24.728099   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:24.728486   22121 main.go:141] libmachine: (ha-244475) DBG | unable to find current IP address of domain ha-244475 in network mk-ha-244475
	I0916 10:38:24.728515   22121 main.go:141] libmachine: (ha-244475) DBG | I0916 10:38:24.728423   22144 retry.go:31] will retry after 2.189050091s: waiting for machine to come up
	I0916 10:38:26.918420   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:26.918845   22121 main.go:141] libmachine: (ha-244475) DBG | unable to find current IP address of domain ha-244475 in network mk-ha-244475
	I0916 10:38:26.918870   22121 main.go:141] libmachine: (ha-244475) DBG | I0916 10:38:26.918800   22144 retry.go:31] will retry after 2.857721999s: waiting for machine to come up
	I0916 10:38:29.779219   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:29.779636   22121 main.go:141] libmachine: (ha-244475) DBG | unable to find current IP address of domain ha-244475 in network mk-ha-244475
	I0916 10:38:29.779664   22121 main.go:141] libmachine: (ha-244475) DBG | I0916 10:38:29.779599   22144 retry.go:31] will retry after 5.359183826s: waiting for machine to come up
	I0916 10:38:35.141883   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:35.142271   22121 main.go:141] libmachine: (ha-244475) Found IP for machine: 192.168.39.19
	I0916 10:38:35.142292   22121 main.go:141] libmachine: (ha-244475) Reserving static IP address...
	I0916 10:38:35.142311   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has current primary IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:35.142733   22121 main.go:141] libmachine: (ha-244475) DBG | unable to find host DHCP lease matching {name: "ha-244475", mac: "52:54:00:31:d1:43", ip: "192.168.39.19"} in network mk-ha-244475
	I0916 10:38:35.214446   22121 main.go:141] libmachine: (ha-244475) DBG | Getting to WaitForSSH function...
	I0916 10:38:35.214471   22121 main.go:141] libmachine: (ha-244475) Reserved static IP address: 192.168.39.19
	I0916 10:38:35.214482   22121 main.go:141] libmachine: (ha-244475) Waiting for SSH to be available...
	I0916 10:38:35.216924   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:35.217367   22121 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:minikube Clientid:01:52:54:00:31:d1:43}
	I0916 10:38:35.217394   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:35.217529   22121 main.go:141] libmachine: (ha-244475) DBG | Using SSH client type: external
	I0916 10:38:35.217557   22121 main.go:141] libmachine: (ha-244475) DBG | Using SSH private key: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/id_rsa (-rw-------)
	I0916 10:38:35.217585   22121 main.go:141] libmachine: (ha-244475) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.19 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0916 10:38:35.217594   22121 main.go:141] libmachine: (ha-244475) DBG | About to run SSH command:
	I0916 10:38:35.217608   22121 main.go:141] libmachine: (ha-244475) DBG | exit 0
	I0916 10:38:35.349373   22121 main.go:141] libmachine: (ha-244475) DBG | SSH cmd err, output: <nil>: 
	I0916 10:38:35.349683   22121 main.go:141] libmachine: (ha-244475) KVM machine creation complete!
	I0916 10:38:35.349969   22121 main.go:141] libmachine: (ha-244475) Calling .GetConfigRaw
	I0916 10:38:35.350496   22121 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:38:35.350688   22121 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:38:35.350823   22121 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0916 10:38:35.350834   22121 main.go:141] libmachine: (ha-244475) Calling .GetState
	I0916 10:38:35.351935   22121 main.go:141] libmachine: Detecting operating system of created instance...
	I0916 10:38:35.351949   22121 main.go:141] libmachine: Waiting for SSH to be available...
	I0916 10:38:35.351954   22121 main.go:141] libmachine: Getting to WaitForSSH function...
	I0916 10:38:35.351959   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:38:35.353913   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:35.354208   22121 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:38:35.354235   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:35.354319   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:38:35.354463   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:38:35.354605   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:38:35.354695   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:38:35.354841   22121 main.go:141] libmachine: Using SSH client type: native
	I0916 10:38:35.355041   22121 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0916 10:38:35.355053   22121 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0916 10:38:35.464485   22121 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 10:38:35.464507   22121 main.go:141] libmachine: Detecting the provisioner...
	I0916 10:38:35.464514   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:38:35.467101   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:35.467423   22121 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:38:35.467458   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:35.467566   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:38:35.467765   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:38:35.467917   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:38:35.468144   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:38:35.468285   22121 main.go:141] libmachine: Using SSH client type: native
	I0916 10:38:35.468476   22121 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0916 10:38:35.468489   22121 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0916 10:38:35.582051   22121 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0916 10:38:35.582131   22121 main.go:141] libmachine: found compatible host: buildroot
	I0916 10:38:35.582143   22121 main.go:141] libmachine: Provisioning with buildroot...
	I0916 10:38:35.582154   22121 main.go:141] libmachine: (ha-244475) Calling .GetMachineName
	I0916 10:38:35.582407   22121 buildroot.go:166] provisioning hostname "ha-244475"
	I0916 10:38:35.582432   22121 main.go:141] libmachine: (ha-244475) Calling .GetMachineName
	I0916 10:38:35.582675   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:38:35.585276   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:35.585633   22121 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:38:35.585660   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:35.585766   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:38:35.585943   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:38:35.586081   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:38:35.586209   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:38:35.586353   22121 main.go:141] libmachine: Using SSH client type: native
	I0916 10:38:35.586554   22121 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0916 10:38:35.586566   22121 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-244475 && echo "ha-244475" | sudo tee /etc/hostname
	I0916 10:38:35.712268   22121 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-244475
	
	I0916 10:38:35.712302   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:38:35.715043   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:35.715376   22121 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:38:35.715404   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:35.715689   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:38:35.715894   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:38:35.716072   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:38:35.716203   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:38:35.716355   22121 main.go:141] libmachine: Using SSH client type: native
	I0916 10:38:35.716526   22121 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0916 10:38:35.716543   22121 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-244475' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-244475/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-244475' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:38:35.838701   22121 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 10:38:35.838734   22121 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3851/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3851/.minikube}
	I0916 10:38:35.838786   22121 buildroot.go:174] setting up certificates
	I0916 10:38:35.838795   22121 provision.go:84] configureAuth start
	I0916 10:38:35.838807   22121 main.go:141] libmachine: (ha-244475) Calling .GetMachineName
	I0916 10:38:35.839053   22121 main.go:141] libmachine: (ha-244475) Calling .GetIP
	I0916 10:38:35.842260   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:35.842666   22121 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:38:35.842713   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:35.842874   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:38:35.845198   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:35.845480   22121 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:38:35.845503   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:35.845681   22121 provision.go:143] copyHostCerts
	I0916 10:38:35.845727   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem
	I0916 10:38:35.845766   22121 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem, removing ...
	I0916 10:38:35.845777   22121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem
	I0916 10:38:35.845857   22121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem (1679 bytes)
	I0916 10:38:35.845945   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem
	I0916 10:38:35.845971   22121 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem, removing ...
	I0916 10:38:35.845975   22121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem
	I0916 10:38:35.846004   22121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem (1082 bytes)
	I0916 10:38:35.846056   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem
	I0916 10:38:35.846073   22121 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem, removing ...
	I0916 10:38:35.846079   22121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem
	I0916 10:38:35.846099   22121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem (1123 bytes)
	I0916 10:38:35.846153   22121 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem org=jenkins.ha-244475 san=[127.0.0.1 192.168.39.19 ha-244475 localhost minikube]
	I0916 10:38:35.972514   22121 provision.go:177] copyRemoteCerts
	I0916 10:38:35.972572   22121 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:38:35.972592   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:38:35.975467   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:35.975802   22121 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:38:35.975829   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:35.976035   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:38:35.976192   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:38:35.976307   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:38:35.976395   22121 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/id_rsa Username:docker}
	I0916 10:38:36.064079   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 10:38:36.064162   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 10:38:36.088374   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 10:38:36.088445   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0916 10:38:36.112864   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 10:38:36.112943   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 10:38:36.137799   22121 provision.go:87] duration metric: took 298.990788ms to configureAuth
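configureAuth above generated a server certificate for the guest with SANs [127.0.0.1 192.168.39.19 ha-244475 localhost minikube], signed by the CA under .minikube/certs, and then copied ca.pem, server.pem and server-key.pem to /etc/docker on the VM. A self-contained sketch of the certificate step using Go's crypto/x509 (the throwaway in-memory CA here is an assumption for illustration; minikube reuses its persisted CA files instead):

    // servercert.go - illustrative: issue a host server certificate with the
    // SANs listed in the provision.go log line above.
    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Throwaway CA (assumption for the sketch; not minikube's persisted CA).
    	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().AddDate(10, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	if err != nil {
    		panic(err)
    	}
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Server certificate carrying the IP and DNS SANs from the log.
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-244475"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(10, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.19")},
    		DNSNames:     []string{"ha-244475", "localhost", "minikube"},
    	}
    	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	if err != nil {
    		panic(err)
    	}
    	// Private key output omitted for brevity; only the cert is printed.
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }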
	I0916 10:38:36.137824   22121 buildroot.go:189] setting minikube options for container-runtime
	I0916 10:38:36.137990   22121 config.go:182] Loaded profile config "ha-244475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:38:36.138068   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:38:36.140775   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:36.141141   22121 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:38:36.141167   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:36.141370   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:38:36.141557   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:38:36.141711   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:38:36.141862   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:38:36.142012   22121 main.go:141] libmachine: Using SSH client type: native
	I0916 10:38:36.142173   22121 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0916 10:38:36.142190   22121 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 10:38:36.366260   22121 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 10:38:36.366288   22121 main.go:141] libmachine: Checking connection to Docker...
	I0916 10:38:36.366297   22121 main.go:141] libmachine: (ha-244475) Calling .GetURL
	I0916 10:38:36.367546   22121 main.go:141] libmachine: (ha-244475) DBG | Using libvirt version 6000000
	I0916 10:38:36.369543   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:36.369862   22121 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:38:36.369884   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:36.370034   22121 main.go:141] libmachine: Docker is up and running!
	I0916 10:38:36.370047   22121 main.go:141] libmachine: Reticulating splines...
	I0916 10:38:36.370054   22121 client.go:171] duration metric: took 24.083609722s to LocalClient.Create
	I0916 10:38:36.370077   22121 start.go:167] duration metric: took 24.083661787s to libmachine.API.Create "ha-244475"
	I0916 10:38:36.370089   22121 start.go:293] postStartSetup for "ha-244475" (driver="kvm2")
	I0916 10:38:36.370118   22121 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:38:36.370140   22121 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:38:36.370345   22121 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:38:36.370363   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:38:36.372350   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:36.372637   22121 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:38:36.372658   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:36.372800   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:38:36.372958   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:38:36.373108   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:38:36.373239   22121 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/id_rsa Username:docker}
	I0916 10:38:36.459818   22121 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:38:36.464279   22121 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 10:38:36.464304   22121 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3851/.minikube/addons for local assets ...
	I0916 10:38:36.464360   22121 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3851/.minikube/files for local assets ...
	I0916 10:38:36.464428   22121 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem -> 112032.pem in /etc/ssl/certs
	I0916 10:38:36.464436   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem -> /etc/ssl/certs/112032.pem
	I0916 10:38:36.464531   22121 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 10:38:36.474459   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem --> /etc/ssl/certs/112032.pem (1708 bytes)
	I0916 10:38:36.498853   22121 start.go:296] duration metric: took 128.751453ms for postStartSetup
	I0916 10:38:36.498905   22121 main.go:141] libmachine: (ha-244475) Calling .GetConfigRaw
	I0916 10:38:36.499551   22121 main.go:141] libmachine: (ha-244475) Calling .GetIP
	I0916 10:38:36.502104   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:36.502435   22121 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:38:36.502456   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:36.502764   22121 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/config.json ...
	I0916 10:38:36.502952   22121 start.go:128] duration metric: took 24.234389874s to createHost
	I0916 10:38:36.502971   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:38:36.505214   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:36.505496   22121 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:38:36.505513   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:36.505660   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:38:36.505815   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:38:36.505951   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:38:36.506052   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:38:36.506165   22121 main.go:141] libmachine: Using SSH client type: native
	I0916 10:38:36.506383   22121 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0916 10:38:36.506406   22121 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 10:38:36.618115   22121 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726483116.595653625
	
	I0916 10:38:36.618143   22121 fix.go:216] guest clock: 1726483116.595653625
	I0916 10:38:36.618151   22121 fix.go:229] Guest: 2024-09-16 10:38:36.595653625 +0000 UTC Remote: 2024-09-16 10:38:36.502962795 +0000 UTC m=+24.335728547 (delta=92.69083ms)
	I0916 10:38:36.618190   22121 fix.go:200] guest clock delta is within tolerance: 92.69083ms
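The date +%s.%N round-trip above is how the fix.go lines estimate guest clock skew: the guest's seconds.nanoseconds output is parsed, compared against the host wall clock captured around the SSH call, and a correction is only attempted when the delta leaves the tolerance window. A rough sketch of that comparison (the one-second tolerance is an assumed value for illustration, not necessarily the one minikube uses):

    // clockdelta.go - illustrative guest-clock skew check.
    package main

    import (
    	"fmt"
    	"math"
    	"strconv"
    	"strings"
    	"time"
    )

    // parseEpoch turns "1726483116.595653625" (output of `date +%s.%N`) into a time.Time.
    func parseEpoch(s string) (time.Time, error) {
    	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		// Pad/truncate the fractional part to exactly 9 nanosecond digits.
    		frac := (parts[1] + "000000000")[:9]
    		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
    			return time.Time{}, err
    		}
    	}
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	guest, err := parseEpoch("1726483116.595653625") // value from the log above
    	if err != nil {
    		panic(err)
    	}
    	delta := time.Now().Sub(guest)
    	const tolerance = time.Second // assumed threshold for this sketch
    	if math.Abs(float64(delta)) > float64(tolerance) {
    		fmt.Printf("guest clock delta %v exceeds tolerance %v\n", delta, tolerance)
    	} else {
    		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
    	}
    }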
	I0916 10:38:36.618197   22121 start.go:83] releasing machines lock for "ha-244475", held for 24.349718291s
	I0916 10:38:36.618226   22121 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:38:36.618490   22121 main.go:141] libmachine: (ha-244475) Calling .GetIP
	I0916 10:38:36.621177   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:36.621552   22121 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:38:36.621576   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:36.621715   22121 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:38:36.622182   22121 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:38:36.622349   22121 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:38:36.622457   22121 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:38:36.622504   22121 ssh_runner.go:195] Run: cat /version.json
	I0916 10:38:36.622532   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:38:36.622507   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:38:36.625311   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:36.625336   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:36.625701   22121 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:38:36.625729   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:36.625752   22121 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:38:36.625773   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:36.625849   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:38:36.625996   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:38:36.626070   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:38:36.626190   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:38:36.626226   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:38:36.626304   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:38:36.626347   22121 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/id_rsa Username:docker}
	I0916 10:38:36.626412   22121 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/id_rsa Username:docker}
	I0916 10:38:36.731813   22121 ssh_runner.go:195] Run: systemctl --version
	I0916 10:38:36.738034   22121 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 10:38:36.897823   22121 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0916 10:38:36.903947   22121 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 10:38:36.904037   22121 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:38:36.920981   22121 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0916 10:38:36.921002   22121 start.go:495] detecting cgroup driver to use...
	I0916 10:38:36.921062   22121 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 10:38:36.936473   22121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 10:38:36.950885   22121 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:38:36.950937   22121 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:38:36.965062   22121 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:38:36.979049   22121 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:38:37.089419   22121 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:38:37.234470   22121 docker.go:233] disabling docker service ...
	I0916 10:38:37.234570   22121 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:38:37.249643   22121 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:38:37.263395   22121 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:38:37.396923   22121 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:38:37.530822   22121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:38:37.545513   22121 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:38:37.564576   22121 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 10:38:37.564639   22121 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:38:37.575771   22121 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 10:38:37.575830   22121 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:38:37.586212   22121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:38:37.597160   22121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:38:37.607962   22121 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:38:37.619040   22121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:38:37.630000   22121 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:38:37.647480   22121 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
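Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following keys (an illustrative reconstruction rather than a capture from the VM; section names follow CRI-O's stock drop-in layout):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]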
	I0916 10:38:37.658746   22121 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:38:37.668801   22121 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0916 10:38:37.668864   22121 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0916 10:38:37.683050   22121 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:38:37.693269   22121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:38:37.804210   22121 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 10:38:37.895246   22121 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 10:38:37.895322   22121 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 10:38:37.900048   22121 start.go:563] Will wait 60s for crictl version
	I0916 10:38:37.900102   22121 ssh_runner.go:195] Run: which crictl
	I0916 10:38:37.903675   22121 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:38:37.941447   22121 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0916 10:38:37.941534   22121 ssh_runner.go:195] Run: crio --version
	I0916 10:38:37.969936   22121 ssh_runner.go:195] Run: crio --version
	I0916 10:38:38.002089   22121 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0916 10:38:38.003428   22121 main.go:141] libmachine: (ha-244475) Calling .GetIP
	I0916 10:38:38.006180   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:38.006490   22121 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:38:38.006513   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:38.006728   22121 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0916 10:38:38.011175   22121 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:38:38.024444   22121 kubeadm.go:883] updating cluster {Name:ha-244475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-244475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 10:38:38.024541   22121 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:38:38.024583   22121 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:38:38.057652   22121 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0916 10:38:38.057726   22121 ssh_runner.go:195] Run: which lz4
	I0916 10:38:38.061778   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0916 10:38:38.061885   22121 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0916 10:38:38.066142   22121 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0916 10:38:38.066169   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0916 10:38:39.414979   22121 crio.go:462] duration metric: took 1.353135329s to copy over tarball
	I0916 10:38:39.415060   22121 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0916 10:38:41.361544   22121 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.94645378s)
	I0916 10:38:41.361572   22121 crio.go:469] duration metric: took 1.946564398s to extract the tarball
	I0916 10:38:41.361580   22121 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0916 10:38:41.398599   22121 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:38:41.443342   22121 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 10:38:41.443365   22121 cache_images.go:84] Images are preloaded, skipping loading
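The two sudo crictl images --output json runs bracket the preload: the first finds no registry.k8s.io/kube-apiserver:v1.31.1 and triggers the ~388 MB tar.lz4 copy and extraction, the second confirms all images are present. A small Go sketch of that presence check (the JSON field names follow crictl's output; treat the exact schema as an assumption):

    // preloadcheck.go - checks whether an image tag shows up in `crictl images`.
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    type crictlImages struct {
    	Images []struct {
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    func imagePresent(tag string) (bool, error) {
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		return false, err
    	}
    	var resp crictlImages
    	if err := json.Unmarshal(out, &resp); err != nil {
    		return false, err
    	}
    	for _, img := range resp.Images {
    		for _, t := range img.RepoTags {
    			if t == tag {
    				return true, nil
    			}
    		}
    	}
    	return false, nil
    }

    func main() {
    	ok, err := imagePresent("registry.k8s.io/kube-apiserver:v1.31.1")
    	if err != nil {
    		fmt.Println("crictl query failed:", err)
    		return
    	}
    	fmt.Println("preloaded image present:", ok)
    }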
	I0916 10:38:41.443372   22121 kubeadm.go:934] updating node { 192.168.39.19 8443 v1.31.1 crio true true} ...
	I0916 10:38:41.443503   22121 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-244475 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.19
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-244475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
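The kubelet drop-in above is rendered from the node's settings: the v1.31.1 binary path, the ha-244475 hostname override and the 192.168.39.19 node IP. A minimal text/template sketch producing an equivalent unit (the template text is illustrative, not minikube's embedded template):

    // kubelet_unit.go - renders a kubelet systemd drop-in like the one above.
    package main

    import (
    	"os"
    	"text/template"
    )

    const unit = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
    	tmpl := template.Must(template.New("kubelet").Parse(unit))
    	// Values taken from the log above.
    	err := tmpl.Execute(os.Stdout, struct {
    		KubernetesVersion, NodeName, NodeIP string
    	}{"v1.31.1", "ha-244475", "192.168.39.19"})
    	if err != nil {
    		panic(err)
    	}
    }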
	I0916 10:38:41.443571   22121 ssh_runner.go:195] Run: crio config
	I0916 10:38:41.489336   22121 cni.go:84] Creating CNI manager for ""
	I0916 10:38:41.489363   22121 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0916 10:38:41.489374   22121 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 10:38:41.489401   22121 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.19 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-244475 NodeName:ha-244475 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.19"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.19 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 10:38:41.489526   22121 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.19
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-244475"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.19
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.19"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
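One detail worth noticing in the generated config: cgroupDriver: cgroupfs in the KubeletConfiguration has to agree with the cgroup_manager CRI-O was switched to a few steps earlier, otherwise pods fail to start. A quick sketch that reads the multi-document YAML minikube writes to /var/tmp/minikube/kubeadm.yaml and prints that field (uses gopkg.in/yaml.v3; purely an illustrative consistency check, not part of minikube):

    // cgroupcheck.go - prints the kubelet cgroup driver from the generated config.
    package main

    import (
    	"bytes"
    	"fmt"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml") // path from the log
    	if err != nil {
    		panic(err)
    	}
    	dec := yaml.NewDecoder(bytes.NewReader(data))
    	for {
    		var doc map[string]interface{}
    		if err := dec.Decode(&doc); err != nil {
    			break // io.EOF once every "---"-separated document is read
    		}
    		if doc["kind"] == "KubeletConfiguration" {
    			// Must agree with CRI-O's cgroup_manager ("cgroupfs") set earlier.
    			fmt.Println("kubelet cgroupDriver:", doc["cgroupDriver"])
    		}
    	}
    }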
	I0916 10:38:41.489548   22121 kube-vip.go:115] generating kube-vip config ...
	I0916 10:38:41.489586   22121 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0916 10:38:41.505696   22121 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0916 10:38:41.505807   22121 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
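The static pod above runs kube-vip in ARP mode: the control-plane nodes elect a leader, the winner advertises the virtual IP 192.168.39.254 on eth0, and with lb_enable it also load-balances API traffic on port 8443. The control-plane.minikube.internal entry added to /etc/hosts later in the log points at this VIP. A tiny probe sketch against that endpoint (certificate verification is skipped only because this illustration runs outside the cluster's trust store):

    // vipprobe.go - hits the kube-vip fronted API server health endpoint.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			// Skipping verification for the sketch; real callers should trust minikubeCA.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get("https://192.168.39.254:8443/healthz")
    	if err != nil {
    		fmt.Println("VIP not reachable:", err)
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("status=%d body=%q\n", resp.StatusCode, body)
    }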
	I0916 10:38:41.505873   22121 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:38:41.516304   22121 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:38:41.516364   22121 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0916 10:38:41.525992   22121 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0916 10:38:41.542448   22121 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:38:41.558743   22121 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0916 10:38:41.575779   22121 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0916 10:38:41.592567   22121 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0916 10:38:41.596480   22121 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:38:41.608839   22121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:38:41.718297   22121 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:38:41.736212   22121 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475 for IP: 192.168.39.19
	I0916 10:38:41.736238   22121 certs.go:194] generating shared ca certs ...
	I0916 10:38:41.736259   22121 certs.go:226] acquiring lock for ca certs: {Name:mkc5eb18b90c7d501da61b378999fd65b785fd5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:38:41.736446   22121 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key
	I0916 10:38:41.736500   22121 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key
	I0916 10:38:41.736517   22121 certs.go:256] generating profile certs ...
	I0916 10:38:41.736581   22121 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/client.key
	I0916 10:38:41.736604   22121 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/client.crt with IP's: []
	I0916 10:38:41.887766   22121 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/client.crt ...
	I0916 10:38:41.887792   22121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/client.crt: {Name:mkeee24c57991a4cf2957d59b85c7dbd3c8f2331 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:38:41.887965   22121 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/client.key ...
	I0916 10:38:41.887976   22121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/client.key: {Name:mkec5e765e721654d343964b8e5f1903226a6b7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:38:41.888056   22121 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key.c43f27e6
	I0916 10:38:41.888070   22121 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt.c43f27e6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.19 192.168.39.254]
	I0916 10:38:42.038292   22121 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt.c43f27e6 ...
	I0916 10:38:42.038321   22121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt.c43f27e6: {Name:mk7099a2c62f50aa06662b965a0c9069ae5d1f53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:38:42.038481   22121 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key.c43f27e6 ...
	I0916 10:38:42.038493   22121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key.c43f27e6: {Name:mkcc105b422dfe70444931267745dbca1edf49bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:38:42.038566   22121 certs.go:381] copying /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt.c43f27e6 -> /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt
	I0916 10:38:42.038652   22121 certs.go:385] copying /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key.c43f27e6 -> /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key
	I0916 10:38:42.038706   22121 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.key
	I0916 10:38:42.038720   22121 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.crt with IP's: []
	I0916 10:38:42.190304   22121 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.crt ...
	I0916 10:38:42.190334   22121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.crt: {Name:mk8f534095f1a4c3c0f97ea592b35a6ed96cf75d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:38:42.190493   22121 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.key ...
	I0916 10:38:42.190504   22121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.key: {Name:mkb1fc3820bed6bb42a1e04c6b2b6ddfc43271a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:38:42.190577   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 10:38:42.190595   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 10:38:42.190607   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:38:42.190620   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:38:42.190630   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 10:38:42.190643   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 10:38:42.190653   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 10:38:42.190665   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 10:38:42.190709   22121 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem (1338 bytes)
	W0916 10:38:42.190745   22121 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203_empty.pem, impossibly tiny 0 bytes
	I0916 10:38:42.190754   22121 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:38:42.190774   22121 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem (1082 bytes)
	I0916 10:38:42.190818   22121 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:38:42.190848   22121 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem (1679 bytes)
	I0916 10:38:42.190886   22121 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem (1708 bytes)
	I0916 10:38:42.190919   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem -> /usr/share/ca-certificates/11203.pem
	I0916 10:38:42.190932   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem -> /usr/share/ca-certificates/112032.pem
	I0916 10:38:42.190944   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:38:42.191452   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:38:42.217887   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:38:42.242446   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:38:42.266461   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:38:42.289939   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0916 10:38:42.313172   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 10:38:42.337118   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:38:42.360742   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 10:38:42.383602   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem --> /usr/share/ca-certificates/11203.pem (1338 bytes)
	I0916 10:38:42.406581   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem --> /usr/share/ca-certificates/112032.pem (1708 bytes)
	I0916 10:38:42.429672   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:38:42.452865   22121 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 10:38:42.469058   22121 ssh_runner.go:195] Run: openssl version
	I0916 10:38:42.474734   22121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11203.pem && ln -fs /usr/share/ca-certificates/11203.pem /etc/ssl/certs/11203.pem"
	I0916 10:38:42.485883   22121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11203.pem
	I0916 10:38:42.490265   22121 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11203.pem
	I0916 10:38:42.490308   22121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11203.pem
	I0916 10:38:42.495983   22121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11203.pem /etc/ssl/certs/51391683.0"
	I0916 10:38:42.510198   22121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112032.pem && ln -fs /usr/share/ca-certificates/112032.pem /etc/ssl/certs/112032.pem"
	I0916 10:38:42.521298   22121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112032.pem
	I0916 10:38:42.527236   22121 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112032.pem
	I0916 10:38:42.527293   22121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112032.pem
	I0916 10:38:42.533552   22121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112032.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 10:38:42.549332   22121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:38:42.561819   22121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:38:42.568456   22121 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:38:42.568516   22121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:38:42.575583   22121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
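The test -L / ln -fs commands above install the CA bundle the way OpenSSL expects: each PEM under /usr/share/ca-certificates gets a /etc/ssl/certs/<subject-hash>.0 symlink, with the hash coming from openssl x509 -hash -noout (b5213941.0 for minikubeCA.pem in this run). A short sketch doing the same for one file by shelling out to the openssl CLI (needs root to write into /etc/ssl/certs, just like the sudo commands in the log):

    // hashlink.go - creates the /etc/ssl/certs/<hash>.0 symlink for a CA cert.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	cert := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
    	if err != nil {
    		panic(err)
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	// Replace any stale link, mirroring `ln -fs` from the log.
    	_ = os.Remove(link)
    	if err := os.Symlink(cert, link); err != nil {
    		panic(err)
    	}
    	fmt.Println("linked", link, "->", cert)
    }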
	I0916 10:38:42.586818   22121 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:38:42.590763   22121 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 10:38:42.590815   22121 kubeadm.go:392] StartCluster: {Name:ha-244475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-244475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:38:42.590883   22121 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 10:38:42.590943   22121 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 10:38:42.628496   22121 cri.go:89] found id: ""
	I0916 10:38:42.628553   22121 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 10:38:42.638691   22121 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 10:38:42.648671   22121 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 10:38:42.658424   22121 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 10:38:42.658444   22121 kubeadm.go:157] found existing configuration files:
	
	I0916 10:38:42.658483   22121 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 10:38:42.667543   22121 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 10:38:42.667594   22121 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 10:38:42.677200   22121 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 10:38:42.686120   22121 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 10:38:42.686169   22121 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 10:38:42.695575   22121 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 10:38:42.704585   22121 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 10:38:42.704673   22121 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 10:38:42.714549   22121 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 10:38:42.723658   22121 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 10:38:42.723715   22121 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 10:38:42.733164   22121 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0916 10:38:42.842015   22121 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 10:38:42.842090   22121 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 10:38:42.961804   22121 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 10:38:42.961936   22121 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 10:38:42.962041   22121 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 10:38:42.973403   22121 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 10:38:42.975286   22121 out.go:235]   - Generating certificates and keys ...
	I0916 10:38:42.975379   22121 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 10:38:42.975457   22121 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 10:38:43.030083   22121 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 10:38:43.295745   22121 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 10:38:43.465239   22121 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 10:38:43.533050   22121 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 10:38:43.596361   22121 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 10:38:43.596500   22121 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-244475 localhost] and IPs [192.168.39.19 127.0.0.1 ::1]
	I0916 10:38:43.798754   22121 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 10:38:43.798893   22121 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-244475 localhost] and IPs [192.168.39.19 127.0.0.1 ::1]
	I0916 10:38:43.873275   22121 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 10:38:44.075110   22121 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 10:38:44.129628   22121 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 10:38:44.129726   22121 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 10:38:44.322901   22121 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 10:38:44.558047   22121 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 10:38:44.903170   22121 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 10:38:45.001802   22121 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 10:38:45.146307   22121 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 10:38:45.146914   22121 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 10:38:45.150330   22121 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 10:38:45.152199   22121 out.go:235]   - Booting up control plane ...
	I0916 10:38:45.152314   22121 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 10:38:45.152406   22121 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 10:38:45.152956   22121 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 10:38:45.168296   22121 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 10:38:45.176973   22121 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 10:38:45.177059   22121 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 10:38:45.314163   22121 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 10:38:45.314301   22121 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 10:38:45.816204   22121 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.333685ms
	I0916 10:38:45.816311   22121 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 10:38:51.792476   22121 kubeadm.go:310] [api-check] The API server is healthy after 5.978803709s
	I0916 10:38:51.807629   22121 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 10:38:51.827911   22121 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 10:38:51.862228   22121 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 10:38:51.862446   22121 kubeadm.go:310] [mark-control-plane] Marking the node ha-244475 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 10:38:51.880371   22121 kubeadm.go:310] [bootstrap-token] Using token: z03lik.8myj2g1lawnpsxwz
	I0916 10:38:51.881728   22121 out.go:235]   - Configuring RBAC rules ...
	I0916 10:38:51.881867   22121 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 10:38:51.892035   22121 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 10:38:51.905643   22121 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 10:38:51.910644   22121 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 10:38:51.914471   22121 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 10:38:51.919085   22121 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 10:38:52.199036   22121 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 10:38:52.641913   22121 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 10:38:53.198817   22121 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 10:38:53.200731   22121 kubeadm.go:310] 
	I0916 10:38:53.200796   22121 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 10:38:53.200801   22121 kubeadm.go:310] 
	I0916 10:38:53.200897   22121 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 10:38:53.200923   22121 kubeadm.go:310] 
	I0916 10:38:53.200967   22121 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 10:38:53.201048   22121 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 10:38:53.201151   22121 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 10:38:53.201169   22121 kubeadm.go:310] 
	I0916 10:38:53.201241   22121 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 10:38:53.201252   22121 kubeadm.go:310] 
	I0916 10:38:53.201327   22121 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 10:38:53.201342   22121 kubeadm.go:310] 
	I0916 10:38:53.201417   22121 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 10:38:53.201524   22121 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 10:38:53.201620   22121 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 10:38:53.201636   22121 kubeadm.go:310] 
	I0916 10:38:53.201729   22121 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 10:38:53.201854   22121 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 10:38:53.201865   22121 kubeadm.go:310] 
	I0916 10:38:53.201980   22121 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token z03lik.8myj2g1lawnpsxwz \
	I0916 10:38:53.202117   22121 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:18e2e9ba1c06caae6264fad234e64aee68efc10986a3ab0ed9e448768962daf7 \
	I0916 10:38:53.202140   22121 kubeadm.go:310] 	--control-plane 
	I0916 10:38:53.202144   22121 kubeadm.go:310] 
	I0916 10:38:53.202267   22121 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 10:38:53.202284   22121 kubeadm.go:310] 
	I0916 10:38:53.202396   22121 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token z03lik.8myj2g1lawnpsxwz \
	I0916 10:38:53.202519   22121 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:18e2e9ba1c06caae6264fad234e64aee68efc10986a3ab0ed9e448768962daf7 
	I0916 10:38:53.204612   22121 kubeadm.go:310] W0916 10:38:42.823368     836 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:38:53.204909   22121 kubeadm.go:310] W0916 10:38:42.824196     836 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:38:53.205016   22121 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
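	The --discovery-token-ca-cert-hash value printed in the join commands above can be cross-checked on the control-plane node. A minimal sketch using the standard kubeadm recipe, assuming the cluster CA sits at /var/lib/minikube/certs/ca.crt as reported in the [certs] step of this run (the path is taken from the log, the recipe itself is illustrative):
	    # recompute the SHA-256 hash of the cluster CA public key
	    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'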
	I0916 10:38:53.205039   22121 cni.go:84] Creating CNI manager for ""
	I0916 10:38:53.205046   22121 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0916 10:38:53.206707   22121 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0916 10:38:53.207859   22121 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 10:38:53.213780   22121 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0916 10:38:53.213797   22121 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 10:38:53.232952   22121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0916 10:38:53.644721   22121 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 10:38:53.644772   22121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:38:53.644775   22121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-244475 minikube.k8s.io/updated_at=2024_09_16T10_38_53_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=ha-244475 minikube.k8s.io/primary=true
	I0916 10:38:53.828940   22121 ops.go:34] apiserver oom_adj: -16
	I0916 10:38:53.829033   22121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:38:54.329149   22121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:38:54.829567   22121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:38:55.329641   22121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:38:55.829630   22121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:38:56.329847   22121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:38:56.829468   22121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:38:57.329221   22121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:38:57.464394   22121 kubeadm.go:1113] duration metric: took 3.819679278s to wait for elevateKubeSystemPrivileges
	I0916 10:38:57.464429   22121 kubeadm.go:394] duration metric: took 14.873616788s to StartCluster
	I0916 10:38:57.464458   22121 settings.go:142] acquiring lock: {Name:mka3093e7066e81538e0a2685a28fdafde8d951b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:38:57.464557   22121 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3851/kubeconfig
	I0916 10:38:57.465226   22121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/kubeconfig: {Name:mkd6460249badead51f07eabf604da45528f6a24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:38:57.465443   22121 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 10:38:57.465469   22121 start.go:241] waiting for startup goroutines ...
	I0916 10:38:57.465470   22121 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 10:38:57.465485   22121 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 10:38:57.465569   22121 addons.go:69] Setting storage-provisioner=true in profile "ha-244475"
	I0916 10:38:57.465585   22121 addons.go:69] Setting default-storageclass=true in profile "ha-244475"
	I0916 10:38:57.465603   22121 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-244475"
	I0916 10:38:57.465609   22121 addons.go:234] Setting addon storage-provisioner=true in "ha-244475"
	I0916 10:38:57.465634   22121 host.go:66] Checking if "ha-244475" exists ...
	I0916 10:38:57.465683   22121 config.go:182] Loaded profile config "ha-244475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:38:57.466032   22121 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:38:57.466071   22121 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:38:57.466075   22121 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:38:57.466116   22121 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:38:57.481103   22121 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37013
	I0916 10:38:57.481138   22121 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34115
	I0916 10:38:57.481582   22121 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:38:57.481618   22121 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:38:57.482091   22121 main.go:141] libmachine: Using API Version  1
	I0916 10:38:57.482118   22121 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:38:57.482234   22121 main.go:141] libmachine: Using API Version  1
	I0916 10:38:57.482258   22121 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:38:57.482437   22121 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:38:57.482607   22121 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:38:57.482769   22121 main.go:141] libmachine: (ha-244475) Calling .GetState
	I0916 10:38:57.483070   22121 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:38:57.483111   22121 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:38:57.484929   22121 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3851/kubeconfig
	I0916 10:38:57.485193   22121 kapi.go:59] client config for ha-244475: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 10:38:57.485590   22121 cert_rotation.go:140] Starting client certificate rotation controller
	I0916 10:38:57.485818   22121 addons.go:234] Setting addon default-storageclass=true in "ha-244475"
	I0916 10:38:57.485861   22121 host.go:66] Checking if "ha-244475" exists ...
	I0916 10:38:57.486134   22121 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:38:57.486172   22121 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:38:57.498299   22121 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33969
	I0916 10:38:57.498828   22121 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:38:57.499447   22121 main.go:141] libmachine: Using API Version  1
	I0916 10:38:57.499474   22121 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:38:57.499850   22121 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:38:57.500054   22121 main.go:141] libmachine: (ha-244475) Calling .GetState
	I0916 10:38:57.500552   22121 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40651
	I0916 10:38:57.500918   22121 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:38:57.501427   22121 main.go:141] libmachine: Using API Version  1
	I0916 10:38:57.501446   22121 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:38:57.501839   22121 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:38:57.501908   22121 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:38:57.502610   22121 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:38:57.502657   22121 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:38:57.503651   22121 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 10:38:57.504966   22121 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:38:57.504987   22121 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 10:38:57.505003   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:38:57.508156   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:57.508589   22121 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:38:57.508615   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:57.508829   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:38:57.508992   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:38:57.509171   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:38:57.509294   22121 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/id_rsa Username:docker}
	I0916 10:38:57.518682   22121 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46723
	I0916 10:38:57.519147   22121 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:38:57.519675   22121 main.go:141] libmachine: Using API Version  1
	I0916 10:38:57.519702   22121 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:38:57.520007   22121 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:38:57.520169   22121 main.go:141] libmachine: (ha-244475) Calling .GetState
	I0916 10:38:57.521733   22121 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:38:57.521948   22121 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 10:38:57.521971   22121 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 10:38:57.521995   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:38:57.524943   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:57.525414   22121 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:38:57.525441   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:57.525578   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:38:57.525724   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:38:57.525845   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:38:57.525926   22121 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/id_rsa Username:docker}
	I0916 10:38:57.660884   22121 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 10:38:57.725204   22121 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:38:57.781501   22121 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 10:38:58.313582   22121 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
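	The ssh_runner command above rewrites the coredns ConfigMap in place via sed. A minimal sketch of how to verify the result and of the stanza that sed expression injects ahead of the "forward . /etc/resolv.conf" line (the kubectl context name is assumed to match the profile and is illustrative):
	    kubectl --context ha-244475 -n kube-system get configmap coredns -o yaml
	    # expected excerpt added to the Corefile:
	    #    hosts {
	    #       192.168.39.1 host.minikube.internal
	    #       fallthrough
	    #    }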
	I0916 10:38:58.587280   22121 main.go:141] libmachine: Making call to close driver server
	I0916 10:38:58.587305   22121 main.go:141] libmachine: (ha-244475) Calling .Close
	I0916 10:38:58.587383   22121 main.go:141] libmachine: Making call to close driver server
	I0916 10:38:58.587408   22121 main.go:141] libmachine: (ha-244475) Calling .Close
	I0916 10:38:58.587584   22121 main.go:141] libmachine: (ha-244475) DBG | Closing plugin on server side
	I0916 10:38:58.587596   22121 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:38:58.587649   22121 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:38:58.587677   22121 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:38:58.587686   22121 main.go:141] libmachine: (ha-244475) DBG | Closing plugin on server side
	I0916 10:38:58.587689   22121 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:38:58.587706   22121 main.go:141] libmachine: Making call to close driver server
	I0916 10:38:58.587679   22121 main.go:141] libmachine: Making call to close driver server
	I0916 10:38:58.587713   22121 main.go:141] libmachine: (ha-244475) Calling .Close
	I0916 10:38:58.587722   22121 main.go:141] libmachine: (ha-244475) Calling .Close
	I0916 10:38:58.587906   22121 main.go:141] libmachine: (ha-244475) DBG | Closing plugin on server side
	I0916 10:38:58.587935   22121 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:38:58.587948   22121 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:38:58.587979   22121 main.go:141] libmachine: (ha-244475) DBG | Closing plugin on server side
	I0916 10:38:58.588055   22121 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:38:58.588073   22121 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:38:58.588171   22121 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0916 10:38:58.588199   22121 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0916 10:38:58.588294   22121 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0916 10:38:58.588300   22121 round_trippers.go:469] Request Headers:
	I0916 10:38:58.588310   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:58.588315   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:58.605995   22121 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0916 10:38:58.606551   22121 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0916 10:38:58.606569   22121 round_trippers.go:469] Request Headers:
	I0916 10:38:58.606579   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:58.606584   22121 round_trippers.go:473]     Content-Type: application/json
	I0916 10:38:58.606587   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:58.610730   22121 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:38:58.610908   22121 main.go:141] libmachine: Making call to close driver server
	I0916 10:38:58.610929   22121 main.go:141] libmachine: (ha-244475) Calling .Close
	I0916 10:38:58.611167   22121 main.go:141] libmachine: (ha-244475) DBG | Closing plugin on server side
	I0916 10:38:58.611207   22121 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:38:58.611219   22121 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:38:58.612831   22121 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0916 10:38:58.614176   22121 addons.go:510] duration metric: took 1.1486947s for enable addons: enabled=[storage-provisioner default-storageclass]
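	The same two addons can also be toggled per profile from the minikube CLI outside the test harness; a minimal sketch, assuming the profile name ha-244475 used throughout this run:
	    minikube -p ha-244475 addons list
	    minikube -p ha-244475 addons enable storage-provisioner
	    minikube -p ha-244475 addons enable default-storageclass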
	I0916 10:38:58.614214   22121 start.go:246] waiting for cluster config update ...
	I0916 10:38:58.614228   22121 start.go:255] writing updated cluster config ...
	I0916 10:38:58.615876   22121 out.go:201] 
	I0916 10:38:58.617218   22121 config.go:182] Loaded profile config "ha-244475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:38:58.617303   22121 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/config.json ...
	I0916 10:38:58.618897   22121 out.go:177] * Starting "ha-244475-m02" control-plane node in "ha-244475" cluster
	I0916 10:38:58.620429   22121 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:38:58.620447   22121 cache.go:56] Caching tarball of preloaded images
	I0916 10:38:58.620539   22121 preload.go:172] Found /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 10:38:58.620553   22121 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 10:38:58.620632   22121 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/config.json ...
	I0916 10:38:58.620820   22121 start.go:360] acquireMachinesLock for ha-244475-m02: {Name:mk413037138532b69f77e412c3960796c09316f1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:38:58.620867   22121 start.go:364] duration metric: took 27.412µs to acquireMachinesLock for "ha-244475-m02"
	I0916 10:38:58.620892   22121 start.go:93] Provisioning new machine with config: &{Name:ha-244475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.1 ClusterName:ha-244475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 Cert
Expiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 10:38:58.620984   22121 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0916 10:38:58.622503   22121 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0916 10:38:58.622584   22121 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:38:58.622615   22121 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:38:58.638413   22121 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33507
	I0916 10:38:58.638950   22121 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:38:58.639464   22121 main.go:141] libmachine: Using API Version  1
	I0916 10:38:58.639492   22121 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:38:58.639818   22121 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:38:58.640042   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetMachineName
	I0916 10:38:58.640214   22121 main.go:141] libmachine: (ha-244475-m02) Calling .DriverName
	I0916 10:38:58.640380   22121 start.go:159] libmachine.API.Create for "ha-244475" (driver="kvm2")
	I0916 10:38:58.640411   22121 client.go:168] LocalClient.Create starting
	I0916 10:38:58.640444   22121 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem
	I0916 10:38:58.640482   22121 main.go:141] libmachine: Decoding PEM data...
	I0916 10:38:58.640501   22121 main.go:141] libmachine: Parsing certificate...
	I0916 10:38:58.640575   22121 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem
	I0916 10:38:58.640600   22121 main.go:141] libmachine: Decoding PEM data...
	I0916 10:38:58.640616   22121 main.go:141] libmachine: Parsing certificate...
	I0916 10:38:58.640639   22121 main.go:141] libmachine: Running pre-create checks...
	I0916 10:38:58.640650   22121 main.go:141] libmachine: (ha-244475-m02) Calling .PreCreateCheck
	I0916 10:38:58.640820   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetConfigRaw
	I0916 10:38:58.641229   22121 main.go:141] libmachine: Creating machine...
	I0916 10:38:58.641245   22121 main.go:141] libmachine: (ha-244475-m02) Calling .Create
	I0916 10:38:58.641375   22121 main.go:141] libmachine: (ha-244475-m02) Creating KVM machine...
	I0916 10:38:58.642569   22121 main.go:141] libmachine: (ha-244475-m02) DBG | found existing default KVM network
	I0916 10:38:58.642747   22121 main.go:141] libmachine: (ha-244475-m02) DBG | found existing private KVM network mk-ha-244475
	I0916 10:38:58.642926   22121 main.go:141] libmachine: (ha-244475-m02) Setting up store path in /home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m02 ...
	I0916 10:38:58.642950   22121 main.go:141] libmachine: (ha-244475-m02) Building disk image from file:///home/jenkins/minikube-integration/19651-3851/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso
	I0916 10:38:58.643021   22121 main.go:141] libmachine: (ha-244475-m02) DBG | I0916 10:38:58.642905   22483 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 10:38:58.643109   22121 main.go:141] libmachine: (ha-244475-m02) Downloading /home/jenkins/minikube-integration/19651-3851/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19651-3851/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso...
	I0916 10:38:58.883746   22121 main.go:141] libmachine: (ha-244475-m02) DBG | I0916 10:38:58.883623   22483 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m02/id_rsa...
	I0916 10:38:58.990233   22121 main.go:141] libmachine: (ha-244475-m02) DBG | I0916 10:38:58.990092   22483 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m02/ha-244475-m02.rawdisk...
	I0916 10:38:58.990284   22121 main.go:141] libmachine: (ha-244475-m02) DBG | Writing magic tar header
	I0916 10:38:58.990302   22121 main.go:141] libmachine: (ha-244475-m02) DBG | Writing SSH key tar header
	I0916 10:38:58.990319   22121 main.go:141] libmachine: (ha-244475-m02) DBG | I0916 10:38:58.990203   22483 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m02 ...
	I0916 10:38:58.990329   22121 main.go:141] libmachine: (ha-244475-m02) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m02 (perms=drwx------)
	I0916 10:38:58.990341   22121 main.go:141] libmachine: (ha-244475-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m02
	I0916 10:38:58.990351   22121 main.go:141] libmachine: (ha-244475-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851/.minikube/machines
	I0916 10:38:58.990359   22121 main.go:141] libmachine: (ha-244475-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 10:38:58.990365   22121 main.go:141] libmachine: (ha-244475-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851
	I0916 10:38:58.990378   22121 main.go:141] libmachine: (ha-244475-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0916 10:38:58.990388   22121 main.go:141] libmachine: (ha-244475-m02) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851/.minikube/machines (perms=drwxr-xr-x)
	I0916 10:38:58.990411   22121 main.go:141] libmachine: (ha-244475-m02) DBG | Checking permissions on dir: /home/jenkins
	I0916 10:38:58.990419   22121 main.go:141] libmachine: (ha-244475-m02) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851/.minikube (perms=drwxr-xr-x)
	I0916 10:38:58.990427   22121 main.go:141] libmachine: (ha-244475-m02) DBG | Checking permissions on dir: /home
	I0916 10:38:58.990435   22121 main.go:141] libmachine: (ha-244475-m02) DBG | Skipping /home - not owner
	I0916 10:38:58.990446   22121 main.go:141] libmachine: (ha-244475-m02) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851 (perms=drwxrwxr-x)
	I0916 10:38:58.990454   22121 main.go:141] libmachine: (ha-244475-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0916 10:38:58.990465   22121 main.go:141] libmachine: (ha-244475-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0916 10:38:58.990475   22121 main.go:141] libmachine: (ha-244475-m02) Creating domain...
	I0916 10:38:58.991326   22121 main.go:141] libmachine: (ha-244475-m02) define libvirt domain using xml: 
	I0916 10:38:58.991351   22121 main.go:141] libmachine: (ha-244475-m02) <domain type='kvm'>
	I0916 10:38:58.991380   22121 main.go:141] libmachine: (ha-244475-m02)   <name>ha-244475-m02</name>
	I0916 10:38:58.991401   22121 main.go:141] libmachine: (ha-244475-m02)   <memory unit='MiB'>2200</memory>
	I0916 10:38:58.991408   22121 main.go:141] libmachine: (ha-244475-m02)   <vcpu>2</vcpu>
	I0916 10:38:58.991417   22121 main.go:141] libmachine: (ha-244475-m02)   <features>
	I0916 10:38:58.991441   22121 main.go:141] libmachine: (ha-244475-m02)     <acpi/>
	I0916 10:38:58.991459   22121 main.go:141] libmachine: (ha-244475-m02)     <apic/>
	I0916 10:38:58.991465   22121 main.go:141] libmachine: (ha-244475-m02)     <pae/>
	I0916 10:38:58.991472   22121 main.go:141] libmachine: (ha-244475-m02)     
	I0916 10:38:58.991477   22121 main.go:141] libmachine: (ha-244475-m02)   </features>
	I0916 10:38:58.991482   22121 main.go:141] libmachine: (ha-244475-m02)   <cpu mode='host-passthrough'>
	I0916 10:38:58.991489   22121 main.go:141] libmachine: (ha-244475-m02)   
	I0916 10:38:58.991504   22121 main.go:141] libmachine: (ha-244475-m02)   </cpu>
	I0916 10:38:58.991512   22121 main.go:141] libmachine: (ha-244475-m02)   <os>
	I0916 10:38:58.991516   22121 main.go:141] libmachine: (ha-244475-m02)     <type>hvm</type>
	I0916 10:38:58.991523   22121 main.go:141] libmachine: (ha-244475-m02)     <boot dev='cdrom'/>
	I0916 10:38:58.991528   22121 main.go:141] libmachine: (ha-244475-m02)     <boot dev='hd'/>
	I0916 10:38:58.991535   22121 main.go:141] libmachine: (ha-244475-m02)     <bootmenu enable='no'/>
	I0916 10:38:58.991546   22121 main.go:141] libmachine: (ha-244475-m02)   </os>
	I0916 10:38:58.991554   22121 main.go:141] libmachine: (ha-244475-m02)   <devices>
	I0916 10:38:58.991559   22121 main.go:141] libmachine: (ha-244475-m02)     <disk type='file' device='cdrom'>
	I0916 10:38:58.991569   22121 main.go:141] libmachine: (ha-244475-m02)       <source file='/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m02/boot2docker.iso'/>
	I0916 10:38:58.991574   22121 main.go:141] libmachine: (ha-244475-m02)       <target dev='hdc' bus='scsi'/>
	I0916 10:38:58.991581   22121 main.go:141] libmachine: (ha-244475-m02)       <readonly/>
	I0916 10:38:58.991585   22121 main.go:141] libmachine: (ha-244475-m02)     </disk>
	I0916 10:38:58.991590   22121 main.go:141] libmachine: (ha-244475-m02)     <disk type='file' device='disk'>
	I0916 10:38:58.991596   22121 main.go:141] libmachine: (ha-244475-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0916 10:38:58.991603   22121 main.go:141] libmachine: (ha-244475-m02)       <source file='/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m02/ha-244475-m02.rawdisk'/>
	I0916 10:38:58.991611   22121 main.go:141] libmachine: (ha-244475-m02)       <target dev='hda' bus='virtio'/>
	I0916 10:38:58.991615   22121 main.go:141] libmachine: (ha-244475-m02)     </disk>
	I0916 10:38:58.991620   22121 main.go:141] libmachine: (ha-244475-m02)     <interface type='network'>
	I0916 10:38:58.991625   22121 main.go:141] libmachine: (ha-244475-m02)       <source network='mk-ha-244475'/>
	I0916 10:38:58.991630   22121 main.go:141] libmachine: (ha-244475-m02)       <model type='virtio'/>
	I0916 10:38:58.991637   22121 main.go:141] libmachine: (ha-244475-m02)     </interface>
	I0916 10:38:58.991643   22121 main.go:141] libmachine: (ha-244475-m02)     <interface type='network'>
	I0916 10:38:58.991649   22121 main.go:141] libmachine: (ha-244475-m02)       <source network='default'/>
	I0916 10:38:58.991655   22121 main.go:141] libmachine: (ha-244475-m02)       <model type='virtio'/>
	I0916 10:38:58.991658   22121 main.go:141] libmachine: (ha-244475-m02)     </interface>
	I0916 10:38:58.991663   22121 main.go:141] libmachine: (ha-244475-m02)     <serial type='pty'>
	I0916 10:38:58.991667   22121 main.go:141] libmachine: (ha-244475-m02)       <target port='0'/>
	I0916 10:38:58.991672   22121 main.go:141] libmachine: (ha-244475-m02)     </serial>
	I0916 10:38:58.991681   22121 main.go:141] libmachine: (ha-244475-m02)     <console type='pty'>
	I0916 10:38:58.991692   22121 main.go:141] libmachine: (ha-244475-m02)       <target type='serial' port='0'/>
	I0916 10:38:58.991703   22121 main.go:141] libmachine: (ha-244475-m02)     </console>
	I0916 10:38:58.991728   22121 main.go:141] libmachine: (ha-244475-m02)     <rng model='virtio'>
	I0916 10:38:58.991756   22121 main.go:141] libmachine: (ha-244475-m02)       <backend model='random'>/dev/random</backend>
	I0916 10:38:58.991766   22121 main.go:141] libmachine: (ha-244475-m02)     </rng>
	I0916 10:38:58.991772   22121 main.go:141] libmachine: (ha-244475-m02)     
	I0916 10:38:58.991779   22121 main.go:141] libmachine: (ha-244475-m02)     
	I0916 10:38:58.991792   22121 main.go:141] libmachine: (ha-244475-m02)   </devices>
	I0916 10:38:58.991801   22121 main.go:141] libmachine: (ha-244475-m02) </domain>
	I0916 10:38:58.991810   22121 main.go:141] libmachine: (ha-244475-m02) 
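	The kvm2 driver submits the domain XML above through the libvirt API rather than the CLI. A rough virsh equivalent for reproducing or inspecting the same machine by hand, using the domain and network names from the log (the commands are illustrative, not what the driver runs):
	    virsh define ha-244475-m02.xml        # register the domain XML shown above
	    virsh start ha-244475-m02             # boot the VM
	    virsh net-dhcp-leases mk-ha-244475    # watch for the DHCP lease the driver polls for below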
	I0916 10:38:58.998246   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:b1:66:ac in network default
	I0916 10:38:58.998886   22121 main.go:141] libmachine: (ha-244475-m02) Ensuring networks are active...
	I0916 10:38:58.998906   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:38:58.999650   22121 main.go:141] libmachine: (ha-244475-m02) Ensuring network default is active
	I0916 10:38:59.000011   22121 main.go:141] libmachine: (ha-244475-m02) Ensuring network mk-ha-244475 is active
	I0916 10:38:59.000423   22121 main.go:141] libmachine: (ha-244475-m02) Getting domain xml...
	I0916 10:38:59.001200   22121 main.go:141] libmachine: (ha-244475-m02) Creating domain...
	I0916 10:39:00.217897   22121 main.go:141] libmachine: (ha-244475-m02) Waiting to get IP...
	I0916 10:39:00.218668   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:00.219076   22121 main.go:141] libmachine: (ha-244475-m02) DBG | unable to find current IP address of domain ha-244475-m02 in network mk-ha-244475
	I0916 10:39:00.219122   22121 main.go:141] libmachine: (ha-244475-m02) DBG | I0916 10:39:00.219065   22483 retry.go:31] will retry after 199.814892ms: waiting for machine to come up
	I0916 10:39:00.420559   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:00.421001   22121 main.go:141] libmachine: (ha-244475-m02) DBG | unable to find current IP address of domain ha-244475-m02 in network mk-ha-244475
	I0916 10:39:00.421022   22121 main.go:141] libmachine: (ha-244475-m02) DBG | I0916 10:39:00.420966   22483 retry.go:31] will retry after 240.671684ms: waiting for machine to come up
	I0916 10:39:00.663384   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:00.663824   22121 main.go:141] libmachine: (ha-244475-m02) DBG | unable to find current IP address of domain ha-244475-m02 in network mk-ha-244475
	I0916 10:39:00.663846   22121 main.go:141] libmachine: (ha-244475-m02) DBG | I0916 10:39:00.663767   22483 retry.go:31] will retry after 337.97981ms: waiting for machine to come up
	I0916 10:39:01.003494   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:01.003942   22121 main.go:141] libmachine: (ha-244475-m02) DBG | unable to find current IP address of domain ha-244475-m02 in network mk-ha-244475
	I0916 10:39:01.003971   22121 main.go:141] libmachine: (ha-244475-m02) DBG | I0916 10:39:01.003897   22483 retry.go:31] will retry after 519.568797ms: waiting for machine to come up
	I0916 10:39:01.524619   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:01.525114   22121 main.go:141] libmachine: (ha-244475-m02) DBG | unable to find current IP address of domain ha-244475-m02 in network mk-ha-244475
	I0916 10:39:01.525169   22121 main.go:141] libmachine: (ha-244475-m02) DBG | I0916 10:39:01.525043   22483 retry.go:31] will retry after 742.703365ms: waiting for machine to come up
	I0916 10:39:02.268894   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:02.269275   22121 main.go:141] libmachine: (ha-244475-m02) DBG | unable to find current IP address of domain ha-244475-m02 in network mk-ha-244475
	I0916 10:39:02.269302   22121 main.go:141] libmachine: (ha-244475-m02) DBG | I0916 10:39:02.269246   22483 retry.go:31] will retry after 918.427714ms: waiting for machine to come up
	I0916 10:39:03.189424   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:03.189835   22121 main.go:141] libmachine: (ha-244475-m02) DBG | unable to find current IP address of domain ha-244475-m02 in network mk-ha-244475
	I0916 10:39:03.189858   22121 main.go:141] libmachine: (ha-244475-m02) DBG | I0916 10:39:03.189810   22483 retry.go:31] will retry after 1.026136416s: waiting for machine to come up
	I0916 10:39:04.217246   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:04.217734   22121 main.go:141] libmachine: (ha-244475-m02) DBG | unable to find current IP address of domain ha-244475-m02 in network mk-ha-244475
	I0916 10:39:04.217759   22121 main.go:141] libmachine: (ha-244475-m02) DBG | I0916 10:39:04.217669   22483 retry.go:31] will retry after 1.280806759s: waiting for machine to come up
	I0916 10:39:05.500057   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:05.500485   22121 main.go:141] libmachine: (ha-244475-m02) DBG | unable to find current IP address of domain ha-244475-m02 in network mk-ha-244475
	I0916 10:39:05.500513   22121 main.go:141] libmachine: (ha-244475-m02) DBG | I0916 10:39:05.500426   22483 retry.go:31] will retry after 1.764059222s: waiting for machine to come up
	I0916 10:39:07.266224   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:07.266648   22121 main.go:141] libmachine: (ha-244475-m02) DBG | unable to find current IP address of domain ha-244475-m02 in network mk-ha-244475
	I0916 10:39:07.266668   22121 main.go:141] libmachine: (ha-244475-m02) DBG | I0916 10:39:07.266605   22483 retry.go:31] will retry after 1.834210088s: waiting for machine to come up
	I0916 10:39:09.102726   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:09.103221   22121 main.go:141] libmachine: (ha-244475-m02) DBG | unable to find current IP address of domain ha-244475-m02 in network mk-ha-244475
	I0916 10:39:09.103251   22121 main.go:141] libmachine: (ha-244475-m02) DBG | I0916 10:39:09.103165   22483 retry.go:31] will retry after 2.739410036s: waiting for machine to come up
	I0916 10:39:11.846017   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:11.846530   22121 main.go:141] libmachine: (ha-244475-m02) DBG | unable to find current IP address of domain ha-244475-m02 in network mk-ha-244475
	I0916 10:39:11.846564   22121 main.go:141] libmachine: (ha-244475-m02) DBG | I0916 10:39:11.846474   22483 retry.go:31] will retry after 2.779311539s: waiting for machine to come up
	I0916 10:39:14.627940   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:14.628351   22121 main.go:141] libmachine: (ha-244475-m02) DBG | unable to find current IP address of domain ha-244475-m02 in network mk-ha-244475
	I0916 10:39:14.628379   22121 main.go:141] libmachine: (ha-244475-m02) DBG | I0916 10:39:14.628315   22483 retry.go:31] will retry after 2.793801544s: waiting for machine to come up
	I0916 10:39:17.425154   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:17.425563   22121 main.go:141] libmachine: (ha-244475-m02) DBG | unable to find current IP address of domain ha-244475-m02 in network mk-ha-244475
	I0916 10:39:17.425580   22121 main.go:141] libmachine: (ha-244475-m02) DBG | I0916 10:39:17.425530   22483 retry.go:31] will retry after 3.470690334s: waiting for machine to come up
	I0916 10:39:20.899627   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:20.900073   22121 main.go:141] libmachine: (ha-244475-m02) Found IP for machine: 192.168.39.222
	I0916 10:39:20.900093   22121 main.go:141] libmachine: (ha-244475-m02) Reserving static IP address...
	I0916 10:39:20.900106   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has current primary IP address 192.168.39.222 and MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:20.900473   22121 main.go:141] libmachine: (ha-244475-m02) DBG | unable to find host DHCP lease matching {name: "ha-244475-m02", mac: "52:54:00:ed:fc:95", ip: "192.168.39.222"} in network mk-ha-244475
	I0916 10:39:20.972758   22121 main.go:141] libmachine: (ha-244475-m02) DBG | Getting to WaitForSSH function...
	I0916 10:39:20.972786   22121 main.go:141] libmachine: (ha-244475-m02) Reserved static IP address: 192.168.39.222
	I0916 10:39:20.972795   22121 main.go:141] libmachine: (ha-244475-m02) Waiting for SSH to be available...
	I0916 10:39:20.975117   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:20.975582   22121 main.go:141] libmachine: (ha-244475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:fc:95", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:39:13 +0000 UTC Type:0 Mac:52:54:00:ed:fc:95 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ed:fc:95}
	I0916 10:39:20.975610   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:20.975773   22121 main.go:141] libmachine: (ha-244475-m02) DBG | Using SSH client type: external
	I0916 10:39:20.975792   22121 main.go:141] libmachine: (ha-244475-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m02/id_rsa (-rw-------)
	I0916 10:39:20.975827   22121 main.go:141] libmachine: (ha-244475-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.222 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0916 10:39:20.975839   22121 main.go:141] libmachine: (ha-244475-m02) DBG | About to run SSH command:
	I0916 10:39:20.975859   22121 main.go:141] libmachine: (ha-244475-m02) DBG | exit 0
	I0916 10:39:21.101388   22121 main.go:141] libmachine: (ha-244475-m02) DBG | SSH cmd err, output: <nil>: 
	I0916 10:39:21.101625   22121 main.go:141] libmachine: (ha-244475-m02) KVM machine creation complete!
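The `exit 0` probe above is how libmachine decides the freshly created guest is reachable: it repeatedly runs a no-op command over SSH, with host-key checking disabled and a short connect timeout, until the command returns cleanly. A minimal sketch of the same idea in Go, using os/exec with the flags shown in the log (the host, key path, and retry interval are illustrative placeholders, not the minikube implementation):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForSSH keeps running "exit 0" over ssh until it succeeds or the
    // deadline passes. Flags mirror the ones logged above; host and keyPath
    // are placeholders.
    func waitForSSH(host, keyPath string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		cmd := exec.Command("ssh",
    			"-o", "StrictHostKeyChecking=no",
    			"-o", "UserKnownHostsFile=/dev/null",
    			"-o", "ConnectTimeout=10",
    			"-i", keyPath,
    			"docker@"+host,
    			"exit 0")
    		if err := cmd.Run(); err == nil {
    			return nil // guest answered: SSH is available
    		}
    		time.Sleep(3 * time.Second) // back off before the next probe
    	}
    	return fmt.Errorf("ssh to %s did not come up within %s", host, timeout)
    }

    func main() {
    	if err := waitForSSH("192.168.39.222", "/path/to/id_rsa", 2*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }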
	I0916 10:39:21.101972   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetConfigRaw
	I0916 10:39:21.102551   22121 main.go:141] libmachine: (ha-244475-m02) Calling .DriverName
	I0916 10:39:21.102707   22121 main.go:141] libmachine: (ha-244475-m02) Calling .DriverName
	I0916 10:39:21.102833   22121 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0916 10:39:21.102843   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetState
	I0916 10:39:21.103989   22121 main.go:141] libmachine: Detecting operating system of created instance...
	I0916 10:39:21.104000   22121 main.go:141] libmachine: Waiting for SSH to be available...
	I0916 10:39:21.104005   22121 main.go:141] libmachine: Getting to WaitForSSH function...
	I0916 10:39:21.104010   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHHostname
	I0916 10:39:21.106164   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:21.106508   22121 main.go:141] libmachine: (ha-244475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:fc:95", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:39:13 +0000 UTC Type:0 Mac:52:54:00:ed:fc:95 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-244475-m02 Clientid:01:52:54:00:ed:fc:95}
	I0916 10:39:21.106551   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:21.106712   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHPort
	I0916 10:39:21.106893   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHKeyPath
	I0916 10:39:21.107044   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHKeyPath
	I0916 10:39:21.107170   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHUsername
	I0916 10:39:21.107317   22121 main.go:141] libmachine: Using SSH client type: native
	I0916 10:39:21.107566   22121 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0916 10:39:21.107579   22121 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0916 10:39:21.208324   22121 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 10:39:21.208347   22121 main.go:141] libmachine: Detecting the provisioner...
	I0916 10:39:21.208354   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHHostname
	I0916 10:39:21.211146   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:21.211537   22121 main.go:141] libmachine: (ha-244475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:fc:95", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:39:13 +0000 UTC Type:0 Mac:52:54:00:ed:fc:95 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-244475-m02 Clientid:01:52:54:00:ed:fc:95}
	I0916 10:39:21.211559   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:21.211725   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHPort
	I0916 10:39:21.211895   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHKeyPath
	I0916 10:39:21.212034   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHKeyPath
	I0916 10:39:21.212154   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHUsername
	I0916 10:39:21.212326   22121 main.go:141] libmachine: Using SSH client type: native
	I0916 10:39:21.212516   22121 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0916 10:39:21.212530   22121 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0916 10:39:21.313838   22121 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0916 10:39:21.313941   22121 main.go:141] libmachine: found compatible host: buildroot
	I0916 10:39:21.313956   22121 main.go:141] libmachine: Provisioning with buildroot...
	I0916 10:39:21.313968   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetMachineName
	I0916 10:39:21.314202   22121 buildroot.go:166] provisioning hostname "ha-244475-m02"
	I0916 10:39:21.314225   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetMachineName
	I0916 10:39:21.314348   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHHostname
	I0916 10:39:21.316988   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:21.317383   22121 main.go:141] libmachine: (ha-244475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:fc:95", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:39:13 +0000 UTC Type:0 Mac:52:54:00:ed:fc:95 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-244475-m02 Clientid:01:52:54:00:ed:fc:95}
	I0916 10:39:21.317407   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:21.317573   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHPort
	I0916 10:39:21.317722   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHKeyPath
	I0916 10:39:21.317830   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHKeyPath
	I0916 10:39:21.317925   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHUsername
	I0916 10:39:21.318068   22121 main.go:141] libmachine: Using SSH client type: native
	I0916 10:39:21.318243   22121 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0916 10:39:21.318255   22121 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-244475-m02 && echo "ha-244475-m02" | sudo tee /etc/hostname
	I0916 10:39:21.435511   22121 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-244475-m02
	
	I0916 10:39:21.435550   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHHostname
	I0916 10:39:21.438718   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:21.439163   22121 main.go:141] libmachine: (ha-244475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:fc:95", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:39:13 +0000 UTC Type:0 Mac:52:54:00:ed:fc:95 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-244475-m02 Clientid:01:52:54:00:ed:fc:95}
	I0916 10:39:21.439205   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:21.439382   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHPort
	I0916 10:39:21.439582   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHKeyPath
	I0916 10:39:21.439737   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHKeyPath
	I0916 10:39:21.439947   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHUsername
	I0916 10:39:21.440129   22121 main.go:141] libmachine: Using SSH client type: native
	I0916 10:39:21.440341   22121 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0916 10:39:21.440367   22121 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-244475-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-244475-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-244475-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:39:21.550458   22121 main.go:141] libmachine: SSH cmd err, output: <nil>: 
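The two commands above make the hostname change both immediate and persistent: `hostname` plus `tee /etc/hostname` set it, and the follow-up shell patches (or appends) the `127.0.1.1` entry in /etc/hosts so the node can resolve its own name. A hedged Go sketch that builds the same pair of commands for an arbitrary hostname (the helper name is illustrative):

    package main

    import "fmt"

    // hostnameCommands returns the shell snippets used to set a transient and
    // persistent hostname and to keep /etc/hosts in sync, mirroring the
    // commands shown in the log above.
    func hostnameCommands(name string) (setHostname, patchHosts string) {
    	setHostname = fmt.Sprintf(
    		"sudo hostname %s && echo %q | sudo tee /etc/hostname", name, name)
    	patchHosts = fmt.Sprintf(`
    		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
    			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
    			else
    				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
    			fi
    		fi`, name)
    	return setHostname, patchHosts
    }

    func main() {
    	set, patch := hostnameCommands("ha-244475-m02")
    	fmt.Println(set)
    	fmt.Println(patch)
    }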
	I0916 10:39:21.550490   22121 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3851/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3851/.minikube}
	I0916 10:39:21.550529   22121 buildroot.go:174] setting up certificates
	I0916 10:39:21.550538   22121 provision.go:84] configureAuth start
	I0916 10:39:21.550547   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetMachineName
	I0916 10:39:21.550825   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetIP
	I0916 10:39:21.553187   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:21.553518   22121 main.go:141] libmachine: (ha-244475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:fc:95", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:39:13 +0000 UTC Type:0 Mac:52:54:00:ed:fc:95 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-244475-m02 Clientid:01:52:54:00:ed:fc:95}
	I0916 10:39:21.553543   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:21.553719   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHHostname
	I0916 10:39:21.555867   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:21.556227   22121 main.go:141] libmachine: (ha-244475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:fc:95", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:39:13 +0000 UTC Type:0 Mac:52:54:00:ed:fc:95 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-244475-m02 Clientid:01:52:54:00:ed:fc:95}
	I0916 10:39:21.556254   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:21.556377   22121 provision.go:143] copyHostCerts
	I0916 10:39:21.556404   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem
	I0916 10:39:21.556435   22121 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem, removing ...
	I0916 10:39:21.556445   22121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem
	I0916 10:39:21.556501   22121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem (1082 bytes)
	I0916 10:39:21.557003   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem
	I0916 10:39:21.557062   22121 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem, removing ...
	I0916 10:39:21.557069   22121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem
	I0916 10:39:21.557114   22121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem (1123 bytes)
	I0916 10:39:21.557194   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem
	I0916 10:39:21.557215   22121 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem, removing ...
	I0916 10:39:21.557221   22121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem
	I0916 10:39:21.557251   22121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem (1679 bytes)
	I0916 10:39:21.557313   22121 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem org=jenkins.ha-244475-m02 san=[127.0.0.1 192.168.39.222 ha-244475-m02 localhost minikube]
	I0916 10:39:21.676307   22121 provision.go:177] copyRemoteCerts
	I0916 10:39:21.676359   22121 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:39:21.676383   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHHostname
	I0916 10:39:21.679208   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:21.679543   22121 main.go:141] libmachine: (ha-244475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:fc:95", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:39:13 +0000 UTC Type:0 Mac:52:54:00:ed:fc:95 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-244475-m02 Clientid:01:52:54:00:ed:fc:95}
	I0916 10:39:21.679570   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:21.679736   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHPort
	I0916 10:39:21.679929   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHKeyPath
	I0916 10:39:21.680073   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHUsername
	I0916 10:39:21.680198   22121 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m02/id_rsa Username:docker}
	I0916 10:39:21.759911   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 10:39:21.759973   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 10:39:21.784754   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 10:39:21.784831   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 10:39:21.808848   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 10:39:21.808934   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 10:39:21.832713   22121 provision.go:87] duration metric: took 282.161069ms to configureAuth
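configureAuth generates a per-machine server certificate whose subject alternative names cover every address the daemon may be reached on: loopback, the node IP, the machine name, "localhost", and "minikube". A minimal crypto/x509 sketch of that idea; note it is self-signed purely to keep the example short, whereas the real flow signs the server cert with the CA in ca.pem/ca-key.pem:

    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Key and certificate template with the SANs seen in the log.
    	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-244475-m02"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.222")},
    		DNSNames:     []string{"ha-244475-m02", "localhost", "minikube"},
    	}
    	// Self-signed here for brevity; the real flow signs with the cluster CA.
    	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }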
	I0916 10:39:21.832745   22121 buildroot.go:189] setting minikube options for container-runtime
	I0916 10:39:21.832966   22121 config.go:182] Loaded profile config "ha-244475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:39:21.833035   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHHostname
	I0916 10:39:21.835844   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:21.836194   22121 main.go:141] libmachine: (ha-244475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:fc:95", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:39:13 +0000 UTC Type:0 Mac:52:54:00:ed:fc:95 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-244475-m02 Clientid:01:52:54:00:ed:fc:95}
	I0916 10:39:21.836220   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:21.836405   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHPort
	I0916 10:39:21.836587   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHKeyPath
	I0916 10:39:21.836747   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHKeyPath
	I0916 10:39:21.836869   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHUsername
	I0916 10:39:21.836973   22121 main.go:141] libmachine: Using SSH client type: native
	I0916 10:39:21.837163   22121 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0916 10:39:21.837187   22121 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 10:39:22.055982   22121 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 10:39:22.056004   22121 main.go:141] libmachine: Checking connection to Docker...
	I0916 10:39:22.056012   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetURL
	I0916 10:39:22.057317   22121 main.go:141] libmachine: (ha-244475-m02) DBG | Using libvirt version 6000000
	I0916 10:39:22.059932   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:22.060270   22121 main.go:141] libmachine: (ha-244475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:fc:95", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:39:13 +0000 UTC Type:0 Mac:52:54:00:ed:fc:95 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-244475-m02 Clientid:01:52:54:00:ed:fc:95}
	I0916 10:39:22.060291   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:22.060472   22121 main.go:141] libmachine: Docker is up and running!
	I0916 10:39:22.060481   22121 main.go:141] libmachine: Reticulating splines...
	I0916 10:39:22.060487   22121 client.go:171] duration metric: took 23.42006819s to LocalClient.Create
	I0916 10:39:22.060508   22121 start.go:167] duration metric: took 23.420129046s to libmachine.API.Create "ha-244475"
	I0916 10:39:22.060521   22121 start.go:293] postStartSetup for "ha-244475-m02" (driver="kvm2")
	I0916 10:39:22.060537   22121 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:39:22.060553   22121 main.go:141] libmachine: (ha-244475-m02) Calling .DriverName
	I0916 10:39:22.060804   22121 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:39:22.060831   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHHostname
	I0916 10:39:22.062903   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:22.063181   22121 main.go:141] libmachine: (ha-244475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:fc:95", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:39:13 +0000 UTC Type:0 Mac:52:54:00:ed:fc:95 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-244475-m02 Clientid:01:52:54:00:ed:fc:95}
	I0916 10:39:22.063208   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:22.063341   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHPort
	I0916 10:39:22.063491   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHKeyPath
	I0916 10:39:22.063705   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHUsername
	I0916 10:39:22.063813   22121 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m02/id_rsa Username:docker}
	I0916 10:39:22.145615   22121 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:39:22.150644   22121 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 10:39:22.150671   22121 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3851/.minikube/addons for local assets ...
	I0916 10:39:22.150732   22121 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3851/.minikube/files for local assets ...
	I0916 10:39:22.150808   22121 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem -> 112032.pem in /etc/ssl/certs
	I0916 10:39:22.150817   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem -> /etc/ssl/certs/112032.pem
	I0916 10:39:22.150906   22121 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 10:39:22.162177   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem --> /etc/ssl/certs/112032.pem (1708 bytes)
	I0916 10:39:22.188876   22121 start.go:296] duration metric: took 128.339893ms for postStartSetup
	I0916 10:39:22.188928   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetConfigRaw
	I0916 10:39:22.189609   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetIP
	I0916 10:39:22.191896   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:22.192212   22121 main.go:141] libmachine: (ha-244475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:fc:95", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:39:13 +0000 UTC Type:0 Mac:52:54:00:ed:fc:95 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-244475-m02 Clientid:01:52:54:00:ed:fc:95}
	I0916 10:39:22.192246   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:22.192461   22121 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/config.json ...
	I0916 10:39:22.192662   22121 start.go:128] duration metric: took 23.571667259s to createHost
	I0916 10:39:22.192687   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHHostname
	I0916 10:39:22.194553   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:22.194806   22121 main.go:141] libmachine: (ha-244475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:fc:95", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:39:13 +0000 UTC Type:0 Mac:52:54:00:ed:fc:95 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-244475-m02 Clientid:01:52:54:00:ed:fc:95}
	I0916 10:39:22.194832   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:22.194956   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHPort
	I0916 10:39:22.195125   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHKeyPath
	I0916 10:39:22.195252   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHKeyPath
	I0916 10:39:22.195352   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHUsername
	I0916 10:39:22.195512   22121 main.go:141] libmachine: Using SSH client type: native
	I0916 10:39:22.195697   22121 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0916 10:39:22.195714   22121 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 10:39:22.298260   22121 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726483162.257238661
	
	I0916 10:39:22.298294   22121 fix.go:216] guest clock: 1726483162.257238661
	I0916 10:39:22.298303   22121 fix.go:229] Guest: 2024-09-16 10:39:22.257238661 +0000 UTC Remote: 2024-09-16 10:39:22.192675095 +0000 UTC m=+70.025440848 (delta=64.563566ms)
	I0916 10:39:22.298325   22121 fix.go:200] guest clock delta is within tolerance: 64.563566ms
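The `date +%s.%N` round trip above is the guest clock check: the host records its own time just before the SSH call, parses the guest's answer, and only resynchronizes the clock if the delta exceeds a tolerance. A small sketch of the comparison step (the tolerance value here is an assumption for illustration):

    package main

    import (
    	"fmt"
    	"math"
    	"strconv"
    	"time"
    )

    // clockDelta parses the guest's `date +%s.%N` output and returns how far
    // the guest clock is from the supplied host time.
    func clockDelta(guestOutput string, hostTime time.Time) (time.Duration, error) {
    	secs, err := strconv.ParseFloat(guestOutput, 64)
    	if err != nil {
    		return 0, err
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	return guest.Sub(hostTime), nil
    }

    func main() {
    	host := time.Now()
    	delta, err := clockDelta("1726483162.257238661", host)
    	if err != nil {
    		panic(err)
    	}
    	// Illustrative tolerance; only resync when the drift is too large.
    	const tolerance = 2 * time.Second
    	if math.Abs(float64(delta)) > float64(tolerance) {
    		fmt.Printf("guest clock drift %v exceeds %v, resync needed\n", delta, tolerance)
    	} else {
    		fmt.Printf("guest clock drift %v is within tolerance\n", delta)
    	}
    }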
	I0916 10:39:22.298332   22121 start.go:83] releasing machines lock for "ha-244475-m02", held for 23.677456654s
	I0916 10:39:22.298361   22121 main.go:141] libmachine: (ha-244475-m02) Calling .DriverName
	I0916 10:39:22.298605   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetIP
	I0916 10:39:22.301224   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:22.301602   22121 main.go:141] libmachine: (ha-244475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:fc:95", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:39:13 +0000 UTC Type:0 Mac:52:54:00:ed:fc:95 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-244475-m02 Clientid:01:52:54:00:ed:fc:95}
	I0916 10:39:22.301623   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:22.303467   22121 out.go:177] * Found network options:
	I0916 10:39:22.304869   22121 out.go:177]   - NO_PROXY=192.168.39.19
	W0916 10:39:22.306210   22121 proxy.go:119] fail to check proxy env: Error ip not in block
	I0916 10:39:22.306239   22121 main.go:141] libmachine: (ha-244475-m02) Calling .DriverName
	I0916 10:39:22.306761   22121 main.go:141] libmachine: (ha-244475-m02) Calling .DriverName
	I0916 10:39:22.306940   22121 main.go:141] libmachine: (ha-244475-m02) Calling .DriverName
	I0916 10:39:22.307022   22121 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:39:22.307050   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHHostname
	W0916 10:39:22.307076   22121 proxy.go:119] fail to check proxy env: Error ip not in block
	I0916 10:39:22.307148   22121 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 10:39:22.307170   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHHostname
	I0916 10:39:22.309796   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:22.309995   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:22.310175   22121 main.go:141] libmachine: (ha-244475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:fc:95", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:39:13 +0000 UTC Type:0 Mac:52:54:00:ed:fc:95 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-244475-m02 Clientid:01:52:54:00:ed:fc:95}
	I0916 10:39:22.310201   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:22.310319   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHPort
	I0916 10:39:22.310427   22121 main.go:141] libmachine: (ha-244475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:fc:95", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:39:13 +0000 UTC Type:0 Mac:52:54:00:ed:fc:95 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-244475-m02 Clientid:01:52:54:00:ed:fc:95}
	I0916 10:39:22.310453   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:22.310476   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHKeyPath
	I0916 10:39:22.310594   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHPort
	I0916 10:39:22.310660   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHUsername
	I0916 10:39:22.310713   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHKeyPath
	I0916 10:39:22.310788   22121 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m02/id_rsa Username:docker}
	I0916 10:39:22.310823   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHUsername
	I0916 10:39:22.310950   22121 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m02/id_rsa Username:docker}
	I0916 10:39:22.543814   22121 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0916 10:39:22.550133   22121 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 10:39:22.550202   22121 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:39:22.567275   22121 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0916 10:39:22.567305   22121 start.go:495] detecting cgroup driver to use...
	I0916 10:39:22.567376   22121 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 10:39:22.584656   22121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 10:39:22.599498   22121 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:39:22.599566   22121 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:39:22.614104   22121 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:39:22.628372   22121 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:39:22.744286   22121 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:39:22.898472   22121 docker.go:233] disabling docker service ...
	I0916 10:39:22.898553   22121 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:39:22.913618   22121 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:39:22.927202   22121 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:39:23.051522   22121 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:39:23.182181   22121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:39:23.204179   22121 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:39:23.225362   22121 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 10:39:23.225448   22121 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:39:23.237074   22121 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 10:39:23.237150   22121 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:39:23.247895   22121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:39:23.258393   22121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:39:23.269419   22121 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:39:23.279779   22121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:39:23.291172   22121 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:39:23.311053   22121 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:39:23.322116   22121 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:39:23.332200   22121 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0916 10:39:23.332250   22121 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0916 10:39:23.344994   22121 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:39:23.355782   22121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:39:23.481218   22121 ssh_runner.go:195] Run: sudo systemctl restart crio
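The sequence of sed invocations above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image, forces the "cgroupfs" cgroup manager, moves conmon into the pod cgroup, and opens unprivileged low ports via default_sysctls, then restarts CRI-O so the changes take effect. A hedged Go sketch of the same line-level rewrite, operating on the config as a string (the key names follow the log; the function is illustrative, not the minikube implementation):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // rewriteCrioConf applies the two substitutions shown in the log: pin the
    // pause image and switch the cgroup manager to cgroupfs.
    func rewriteCrioConf(conf, pauseImage, cgroupManager string) string {
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
    	return conf
    }

    func main() {
    	in := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
    	fmt.Print(rewriteCrioConf(in, "registry.k8s.io/pause:3.10", "cgroupfs"))
    }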
	I0916 10:39:23.579230   22121 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 10:39:23.579298   22121 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 10:39:23.584697   22121 start.go:563] Will wait 60s for crictl version
	I0916 10:39:23.584741   22121 ssh_runner.go:195] Run: which crictl
	I0916 10:39:23.588596   22121 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:39:23.641205   22121 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0916 10:39:23.641281   22121 ssh_runner.go:195] Run: crio --version
	I0916 10:39:23.671177   22121 ssh_runner.go:195] Run: crio --version
	I0916 10:39:23.702253   22121 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0916 10:39:23.703479   22121 out.go:177]   - env NO_PROXY=192.168.39.19
	I0916 10:39:23.704928   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetIP
	I0916 10:39:23.707459   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:23.707795   22121 main.go:141] libmachine: (ha-244475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:fc:95", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:39:13 +0000 UTC Type:0 Mac:52:54:00:ed:fc:95 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-244475-m02 Clientid:01:52:54:00:ed:fc:95}
	I0916 10:39:23.707824   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:23.708043   22121 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0916 10:39:23.712363   22121 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:39:23.725265   22121 mustload.go:65] Loading cluster: ha-244475
	I0916 10:39:23.725441   22121 config.go:182] Loaded profile config "ha-244475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:39:23.725687   22121 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:39:23.725721   22121 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:39:23.740417   22121 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36447
	I0916 10:39:23.740990   22121 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:39:23.741466   22121 main.go:141] libmachine: Using API Version  1
	I0916 10:39:23.741488   22121 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:39:23.741810   22121 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:39:23.742008   22121 main.go:141] libmachine: (ha-244475) Calling .GetState
	I0916 10:39:23.743510   22121 host.go:66] Checking if "ha-244475" exists ...
	I0916 10:39:23.743856   22121 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:39:23.743896   22121 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:39:23.759264   22121 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45685
	I0916 10:39:23.759649   22121 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:39:23.760026   22121 main.go:141] libmachine: Using API Version  1
	I0916 10:39:23.760042   22121 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:39:23.760318   22121 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:39:23.760486   22121 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:39:23.760651   22121 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475 for IP: 192.168.39.222
	I0916 10:39:23.760665   22121 certs.go:194] generating shared ca certs ...
	I0916 10:39:23.760682   22121 certs.go:226] acquiring lock for ca certs: {Name:mkc5eb18b90c7d501da61b378999fd65b785fd5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:39:23.760796   22121 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key
	I0916 10:39:23.760834   22121 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key
	I0916 10:39:23.760847   22121 certs.go:256] generating profile certs ...
	I0916 10:39:23.760915   22121 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/client.key
	I0916 10:39:23.760938   22121 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key.2ecb3d3a
	I0916 10:39:23.760949   22121 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt.2ecb3d3a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.19 192.168.39.222 192.168.39.254]
	I0916 10:39:23.971738   22121 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt.2ecb3d3a ...
	I0916 10:39:23.971765   22121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt.2ecb3d3a: {Name:mk37a27280aa796084417d4aec0944fb7177392b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:39:23.971967   22121 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key.2ecb3d3a ...
	I0916 10:39:23.971985   22121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key.2ecb3d3a: {Name:mkb5d769612983e338b6def0cc291fa133a3ff90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:39:23.972081   22121 certs.go:381] copying /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt.2ecb3d3a -> /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt
	I0916 10:39:23.972210   22121 certs.go:385] copying /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key.2ecb3d3a -> /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key
	I0916 10:39:23.972334   22121 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.key
	I0916 10:39:23.972348   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 10:39:23.972360   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 10:39:23.972373   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:39:23.972388   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:39:23.972400   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 10:39:23.972412   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 10:39:23.972424   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 10:39:23.972437   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 10:39:23.972477   22121 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem (1338 bytes)
	W0916 10:39:23.972504   22121 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203_empty.pem, impossibly tiny 0 bytes
	I0916 10:39:23.972513   22121 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:39:23.972536   22121 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem (1082 bytes)
	I0916 10:39:23.972556   22121 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:39:23.972577   22121 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem (1679 bytes)
	I0916 10:39:23.972612   22121 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem (1708 bytes)
	I0916 10:39:23.972638   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem -> /usr/share/ca-certificates/11203.pem
	I0916 10:39:23.972651   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem -> /usr/share/ca-certificates/112032.pem
	I0916 10:39:23.972663   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:39:23.972694   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:39:23.975828   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:39:23.976221   22121 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:39:23.976248   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:39:23.976413   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:39:23.976620   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:39:23.976774   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:39:23.976882   22121 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/id_rsa Username:docker}
	I0916 10:39:24.053497   22121 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0916 10:39:24.058424   22121 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0916 10:39:24.070223   22121 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0916 10:39:24.074933   22121 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0916 10:39:24.085348   22121 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0916 10:39:24.089709   22121 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0916 10:39:24.102091   22121 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0916 10:39:24.106076   22121 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0916 10:39:24.123270   22121 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0916 10:39:24.127635   22121 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0916 10:39:24.138409   22121 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0916 10:39:24.142528   22121 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0916 10:39:24.158176   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:39:24.183770   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:39:24.210708   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:39:24.237895   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:39:24.265068   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0916 10:39:24.289021   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 10:39:24.312480   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:39:24.336502   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 10:39:24.360309   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem --> /usr/share/ca-certificates/11203.pem (1338 bytes)
	I0916 10:39:24.383990   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem --> /usr/share/ca-certificates/112032.pem (1708 bytes)
	I0916 10:39:24.408205   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:39:24.432243   22121 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0916 10:39:24.449793   22121 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0916 10:39:24.467290   22121 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0916 10:39:24.484273   22121 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0916 10:39:24.501648   22121 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0916 10:39:24.519020   22121 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0916 10:39:24.535943   22121 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0916 10:39:24.552390   22121 ssh_runner.go:195] Run: openssl version
	I0916 10:39:24.558138   22121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11203.pem && ln -fs /usr/share/ca-certificates/11203.pem /etc/ssl/certs/11203.pem"
	I0916 10:39:24.568860   22121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11203.pem
	I0916 10:39:24.574154   22121 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11203.pem
	I0916 10:39:24.574204   22121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11203.pem
	I0916 10:39:24.580119   22121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11203.pem /etc/ssl/certs/51391683.0"
	I0916 10:39:24.592339   22121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112032.pem && ln -fs /usr/share/ca-certificates/112032.pem /etc/ssl/certs/112032.pem"
	I0916 10:39:24.604511   22121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112032.pem
	I0916 10:39:24.609097   22121 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112032.pem
	I0916 10:39:24.609171   22121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112032.pem
	I0916 10:39:24.615026   22121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112032.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 10:39:24.625768   22121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:39:24.636379   22121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:39:24.640871   22121 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:39:24.640920   22121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:39:24.646395   22121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
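The `openssl x509 -hash` calls above compute the subject-hash filenames (51391683.0, 3ec20f2e.0, b5213941.0) that OpenSSL-based clients use to look up trust anchors in /etc/ssl/certs; each is then symlinked to the copied PEM. A small sketch that reproduces the two steps by shelling out to the same commands (the helper name and paths are placeholders):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkCertByHash asks openssl for the subject hash of certPath and creates
    // the <hash>.0 symlink that clients expect under certsDir, mirroring the
    // "openssl x509 -hash -noout -in" + "ln -fs" pair in the log above.
    func linkCertByHash(certPath, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join(certsDir, hash+".0")
    	os.Remove(link) // replace any stale link, like ln -fs would
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		fmt.Println(err)
    	}
    }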
	I0916 10:39:24.656801   22121 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:39:24.661571   22121 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 10:39:24.661615   22121 kubeadm.go:934] updating node {m02 192.168.39.222 8443 v1.31.1 crio true true} ...
	I0916 10:39:24.661689   22121 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-244475-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.222
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-244475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 10:39:24.661712   22121 kube-vip.go:115] generating kube-vip config ...
	I0916 10:39:24.661745   22121 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0916 10:39:24.679303   22121 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0916 10:39:24.679364   22121 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
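The static pod manifest above is what turns 192.168.39.254 into the control-plane VIP: kube-vip runs on the host network of every control-plane node, does leader election on the plndr-cp-lock lease, answers ARP for the address on eth0, and, because lb_enable/lb_port are set, also load-balances API-server traffic on 8443. A hedged sketch of how such a manifest can be rendered from a few parameters with text/template; the template below is trimmed to the fields that vary and is not the exact minikube template:

    package main

    import (
    	"os"
    	"text/template"
    )

    // vipParams are the values that differ between clusters in the manifest above.
    type vipParams struct {
    	VIP       string
    	Interface string
    	Port      string
    }

    const vipTmpl = `apiVersion: v1
    kind: Pod
    metadata:
      name: kube-vip
      namespace: kube-system
    spec:
      hostNetwork: true
      containers:
      - name: kube-vip
        image: ghcr.io/kube-vip/kube-vip:v0.8.0
        args: ["manager"]
        env:
        - {name: vip_interface, value: {{.Interface}}}
        - {name: address, value: {{.VIP}}}
        - {name: port, value: "{{.Port}}"}
        - {name: cp_enable, value: "true"}
        - {name: lb_enable, value: "true"}
    `

    func main() {
    	t := template.Must(template.New("kube-vip").Parse(vipTmpl))
    	// Values taken from the generated config above.
    	if err := t.Execute(os.Stdout, vipParams{VIP: "192.168.39.254", Interface: "eth0", Port: "8443"}); err != nil {
    		panic(err)
    	}
    }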
	I0916 10:39:24.679410   22121 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:39:24.689055   22121 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0916 10:39:24.689100   22121 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0916 10:39:24.698937   22121 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0916 10:39:24.698963   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0916 10:39:24.699025   22121 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0916 10:39:24.699054   22121 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19651-3851/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I0916 10:39:24.699062   22121 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19651-3851/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I0916 10:39:24.703600   22121 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0916 10:39:24.703633   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0916 10:39:25.360517   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0916 10:39:25.360604   22121 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0916 10:39:25.365737   22121 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0916 10:39:25.365769   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0916 10:39:25.520604   22121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:39:25.561216   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0916 10:39:25.561328   22121 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0916 10:39:25.578620   22121 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0916 10:39:25.578664   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
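The three transfers above follow the same pattern: stat the binary under /var/lib/minikube/binaries/v1.31.1 on the guest, and scp it from the host-side cache only when that stat fails. A rough shell equivalent of the existence check (paths taken from the log; this is a sketch, not minikube's actual code path):

    # If stat exits non-zero the binary is missing on the guest and would be
    # copied over SSH from the host's .minikube/cache directory.
    for bin in kubectl kubeadm kubelet; do
      if ! stat -c "%s %y" "/var/lib/minikube/binaries/v1.31.1/$bin" >/dev/null 2>&1; then
        echo "$bin missing; copy it from the host cache"
      fi
    done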
	I0916 10:39:25.943225   22121 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0916 10:39:25.953425   22121 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0916 10:39:25.971005   22121 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:39:25.987923   22121 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0916 10:39:26.005037   22121 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0916 10:39:26.008989   22121 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:39:26.022651   22121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:39:26.139506   22121 ssh_runner.go:195] Run: sudo systemctl start kubelet
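Before kubelet starts, the runner pins control-plane.minikube.internal to the HA VIP in /etc/hosts so that kubeadm on this node can keep reaching the API server through 192.168.39.254 even as endpoints move between control-plane nodes. The grep-and-rewrite pipeline in the log is roughly equivalent to this sketch (same address and hostname, tab separator assumed):

    # Drop any stale entry for the control-plane alias, then append the VIP-backed one.
    sudo sed -i $'/\tcontrol-plane\\.minikube\\.internal$/d' /etc/hosts
    printf '192.168.39.254\tcontrol-plane.minikube.internal\n' | sudo tee -a /etc/hosts >/dev/null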
	I0916 10:39:26.156924   22121 host.go:66] Checking if "ha-244475" exists ...
	I0916 10:39:26.157320   22121 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:39:26.157358   22121 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:39:26.173843   22121 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41439
	I0916 10:39:26.174382   22121 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:39:26.174982   22121 main.go:141] libmachine: Using API Version  1
	I0916 10:39:26.175008   22121 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:39:26.175329   22121 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:39:26.175507   22121 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:39:26.175651   22121 start.go:317] joinCluster: &{Name:ha-244475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cluster
Name:ha-244475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:39:26.175759   22121 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0916 10:39:26.175773   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:39:26.178960   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:39:26.179415   22121 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:39:26.179439   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:39:26.179692   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:39:26.179878   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:39:26.180020   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:39:26.180170   22121 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/id_rsa Username:docker}
	I0916 10:39:26.331689   22121 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 10:39:26.331744   22121 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token yvzo4h.p3o4vz89426q0tzd --discovery-token-ca-cert-hash sha256:18e2e9ba1c06caae6264fad234e64aee68efc10986a3ab0ed9e448768962daf7 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-244475-m02 --control-plane --apiserver-advertise-address=192.168.39.222 --apiserver-bind-port=8443"
	I0916 10:39:46.581278   22121 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token yvzo4h.p3o4vz89426q0tzd --discovery-token-ca-cert-hash sha256:18e2e9ba1c06caae6264fad234e64aee68efc10986a3ab0ed9e448768962daf7 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-244475-m02 --control-plane --apiserver-advertise-address=192.168.39.222 --apiserver-bind-port=8443": (20.249509056s)
	I0916 10:39:46.581311   22121 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0916 10:39:47.185857   22121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-244475-m02 minikube.k8s.io/updated_at=2024_09_16T10_39_47_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=ha-244475 minikube.k8s.io/primary=false
	I0916 10:39:47.323615   22121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-244475-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0916 10:39:47.452689   22121 start.go:319] duration metric: took 21.277032539s to joinCluster
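The 20-second kubeadm run above is the standard stacked control-plane join: a fresh bootstrap token is minted on the existing node, then m02 joins through the control-plane.minikube.internal name with the extra flags minikube appends, and is finally labeled and un-tainted. A hedged outline of the same flow (the token and CA-cert hash shown in the log are one-time values and must be regenerated):

    # On an existing control-plane node: print a join command with a fresh token.
    sudo kubeadm token create --print-join-command --ttl=0
    # On the joining node, minikube appends these flags to promote it to a control plane:
    #   --control-plane --apiserver-advertise-address=192.168.39.222 --apiserver-bind-port=8443
    #   --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-244475-m02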
	I0916 10:39:47.452767   22121 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 10:39:47.453074   22121 config.go:182] Loaded profile config "ha-244475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:39:47.454538   22121 out.go:177] * Verifying Kubernetes components...
	I0916 10:39:47.455883   22121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:39:47.719826   22121 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:39:47.771692   22121 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3851/kubeconfig
	I0916 10:39:47.771937   22121 kapi.go:59] client config for ha-244475: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0916 10:39:47.771997   22121 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.19:8443
	I0916 10:39:47.772181   22121 node_ready.go:35] waiting up to 6m0s for node "ha-244475-m02" to be "Ready" ...
	I0916 10:39:47.772291   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:39:47.772301   22121 round_trippers.go:469] Request Headers:
	I0916 10:39:47.772311   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:47.772317   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:47.784039   22121 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0916 10:39:48.272953   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:39:48.272972   22121 round_trippers.go:469] Request Headers:
	I0916 10:39:48.272981   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:48.272992   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:48.276331   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:39:48.772467   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:39:48.772487   22121 round_trippers.go:469] Request Headers:
	I0916 10:39:48.772495   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:48.772499   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:48.778807   22121 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0916 10:39:49.272650   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:39:49.272673   22121 round_trippers.go:469] Request Headers:
	I0916 10:39:49.272683   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:49.272688   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:49.277698   22121 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:39:49.773047   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:39:49.773069   22121 round_trippers.go:469] Request Headers:
	I0916 10:39:49.773079   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:49.773085   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:49.909815   22121 round_trippers.go:574] Response Status: 200 OK in 136 milliseconds
	I0916 10:39:49.910692   22121 node_ready.go:53] node "ha-244475-m02" has status "Ready":"False"
	I0916 10:39:50.272950   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:39:50.272972   22121 round_trippers.go:469] Request Headers:
	I0916 10:39:50.272982   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:50.272987   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:50.277990   22121 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:39:50.773159   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:39:50.773185   22121 round_trippers.go:469] Request Headers:
	I0916 10:39:50.773196   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:50.773202   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:50.777386   22121 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:39:51.273263   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:39:51.273286   22121 round_trippers.go:469] Request Headers:
	I0916 10:39:51.273294   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:51.273300   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:51.277667   22121 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:39:51.772471   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:39:51.772493   22121 round_trippers.go:469] Request Headers:
	I0916 10:39:51.772502   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:51.772508   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:51.775526   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:39:52.272463   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:39:52.272487   22121 round_trippers.go:469] Request Headers:
	I0916 10:39:52.272504   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:52.272510   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:52.276001   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:39:52.276862   22121 node_ready.go:53] node "ha-244475-m02" has status "Ready":"False"
	I0916 10:39:52.772568   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:39:52.772591   22121 round_trippers.go:469] Request Headers:
	I0916 10:39:52.772598   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:52.772603   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:52.775666   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:39:53.272574   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:39:53.272605   22121 round_trippers.go:469] Request Headers:
	I0916 10:39:53.272614   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:53.272617   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:53.275866   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:39:53.773034   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:39:53.773057   22121 round_trippers.go:469] Request Headers:
	I0916 10:39:53.773065   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:53.773069   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:53.910868   22121 round_trippers.go:574] Response Status: 200 OK in 137 milliseconds
	I0916 10:39:54.272908   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:39:54.272929   22121 round_trippers.go:469] Request Headers:
	I0916 10:39:54.272937   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:54.272940   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:54.276365   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:39:54.276998   22121 node_ready.go:53] node "ha-244475-m02" has status "Ready":"False"
	I0916 10:39:54.772373   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:39:54.772404   22121 round_trippers.go:469] Request Headers:
	I0916 10:39:54.772412   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:54.772415   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:54.775406   22121 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:55.272580   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:39:55.272602   22121 round_trippers.go:469] Request Headers:
	I0916 10:39:55.272610   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:55.272614   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:55.275678   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:39:55.772739   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:39:55.772762   22121 round_trippers.go:469] Request Headers:
	I0916 10:39:55.772769   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:55.772773   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:55.776656   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:39:56.273183   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:39:56.273204   22121 round_trippers.go:469] Request Headers:
	I0916 10:39:56.273211   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:56.273216   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:56.276356   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:39:56.773388   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:39:56.773413   22121 round_trippers.go:469] Request Headers:
	I0916 10:39:56.773426   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:56.773433   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:56.776782   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:39:56.777386   22121 node_ready.go:53] node "ha-244475-m02" has status "Ready":"False"
	I0916 10:39:57.272950   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:39:57.272972   22121 round_trippers.go:469] Request Headers:
	I0916 10:39:57.272979   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:57.272984   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:57.276364   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:39:57.773060   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:39:57.773081   22121 round_trippers.go:469] Request Headers:
	I0916 10:39:57.773088   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:57.773092   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:57.776229   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:39:58.273206   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:39:58.273236   22121 round_trippers.go:469] Request Headers:
	I0916 10:39:58.273248   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:58.273255   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:58.277169   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:39:58.773306   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:39:58.773325   22121 round_trippers.go:469] Request Headers:
	I0916 10:39:58.773333   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:58.773336   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:58.776530   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:39:59.272613   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:39:59.272637   22121 round_trippers.go:469] Request Headers:
	I0916 10:39:59.272647   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:59.272653   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:59.277029   22121 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:39:59.277431   22121 node_ready.go:53] node "ha-244475-m02" has status "Ready":"False"
	I0916 10:39:59.772793   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:39:59.772817   22121 round_trippers.go:469] Request Headers:
	I0916 10:39:59.772825   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:59.772829   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:59.776206   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:00.273273   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:40:00.273295   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:00.273308   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:00.273314   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:00.276740   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:00.772818   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:40:00.772841   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:00.772851   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:00.772857   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:00.776328   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:01.273273   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:40:01.273295   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:01.273304   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:01.273307   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:01.276670   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:01.772774   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:40:01.772805   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:01.772817   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:01.772824   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:01.777379   22121 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:40:01.777815   22121 node_ready.go:53] node "ha-244475-m02" has status "Ready":"False"
	I0916 10:40:02.273195   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:40:02.273218   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:02.273226   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:02.273231   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:02.276605   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:02.773027   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:40:02.773049   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:02.773057   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:02.773062   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:02.776120   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:03.273168   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:40:03.273191   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:03.273199   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:03.273206   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:03.276412   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:03.773044   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:40:03.773066   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:03.773074   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:03.773079   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:03.776511   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:04.272779   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:40:04.272803   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:04.272810   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:04.272814   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:04.276171   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:04.276879   22121 node_ready.go:53] node "ha-244475-m02" has status "Ready":"False"
	I0916 10:40:04.773259   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:40:04.773284   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:04.773291   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:04.773295   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:04.776687   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:05.272635   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:40:05.272667   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:05.272678   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:05.272687   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:05.275813   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:05.772434   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:40:05.772459   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:05.772469   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:05.772474   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:05.776455   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:05.777067   22121 node_ready.go:49] node "ha-244475-m02" has status "Ready":"True"
	I0916 10:40:05.777086   22121 node_ready.go:38] duration metric: took 18.004873295s for node "ha-244475-m02" to be "Ready" ...
	I0916 10:40:05.777095   22121 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:40:05.777206   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I0916 10:40:05.777219   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:05.777229   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:05.777240   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:05.781640   22121 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:40:05.787776   22121 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-lzrg2" in "kube-system" namespace to be "Ready" ...
	I0916 10:40:05.787847   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-lzrg2
	I0916 10:40:05.787856   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:05.787863   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:05.787867   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:05.791078   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:05.791756   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475
	I0916 10:40:05.791771   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:05.791778   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:05.791784   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:05.794551   22121 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:40:05.795202   22121 pod_ready.go:93] pod "coredns-7c65d6cfc9-lzrg2" in "kube-system" namespace has status "Ready":"True"
	I0916 10:40:05.795218   22121 pod_ready.go:82] duration metric: took 7.419929ms for pod "coredns-7c65d6cfc9-lzrg2" in "kube-system" namespace to be "Ready" ...
	I0916 10:40:05.795226   22121 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-m8fd7" in "kube-system" namespace to be "Ready" ...
	I0916 10:40:05.795282   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-m8fd7
	I0916 10:40:05.795290   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:05.795297   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:05.795302   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:05.798095   22121 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:40:05.798774   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475
	I0916 10:40:05.798790   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:05.798797   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:05.798801   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:05.801421   22121 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:40:05.801924   22121 pod_ready.go:93] pod "coredns-7c65d6cfc9-m8fd7" in "kube-system" namespace has status "Ready":"True"
	I0916 10:40:05.801938   22121 pod_ready.go:82] duration metric: took 6.704952ms for pod "coredns-7c65d6cfc9-m8fd7" in "kube-system" namespace to be "Ready" ...
	I0916 10:40:05.801945   22121 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-244475" in "kube-system" namespace to be "Ready" ...
	I0916 10:40:05.801989   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/etcd-ha-244475
	I0916 10:40:05.801997   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:05.802004   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:05.802008   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:05.804181   22121 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:40:05.804710   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475
	I0916 10:40:05.804724   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:05.804730   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:05.804733   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:05.807387   22121 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:40:05.808293   22121 pod_ready.go:93] pod "etcd-ha-244475" in "kube-system" namespace has status "Ready":"True"
	I0916 10:40:05.808307   22121 pod_ready.go:82] duration metric: took 6.357107ms for pod "etcd-ha-244475" in "kube-system" namespace to be "Ready" ...
	I0916 10:40:05.808315   22121 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-244475-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:40:05.808358   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/etcd-ha-244475-m02
	I0916 10:40:05.808365   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:05.808372   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:05.808377   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:05.810955   22121 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:40:05.811488   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:40:05.811500   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:05.811508   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:05.811512   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:05.814011   22121 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:40:05.814463   22121 pod_ready.go:93] pod "etcd-ha-244475-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:40:05.814477   22121 pod_ready.go:82] duration metric: took 6.157572ms for pod "etcd-ha-244475-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:40:05.814489   22121 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-244475" in "kube-system" namespace to be "Ready" ...
	I0916 10:40:05.972835   22121 request.go:632] Waited for 158.29387ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-244475
	I0916 10:40:05.972902   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-244475
	I0916 10:40:05.972922   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:05.972933   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:05.972943   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:05.976765   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:06.172937   22121 request.go:632] Waited for 195.355279ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-244475
	I0916 10:40:06.172986   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475
	I0916 10:40:06.172992   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:06.172998   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:06.173002   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:06.177033   22121 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:40:06.177621   22121 pod_ready.go:93] pod "kube-apiserver-ha-244475" in "kube-system" namespace has status "Ready":"True"
	I0916 10:40:06.177640   22121 pod_ready.go:82] duration metric: took 363.14475ms for pod "kube-apiserver-ha-244475" in "kube-system" namespace to be "Ready" ...
	I0916 10:40:06.177648   22121 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-244475-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:40:06.373192   22121 request.go:632] Waited for 195.483207ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-244475-m02
	I0916 10:40:06.373244   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-244475-m02
	I0916 10:40:06.373249   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:06.373257   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:06.373261   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:06.377043   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:06.573053   22121 request.go:632] Waited for 195.35028ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:40:06.573108   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:40:06.573115   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:06.573136   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:06.573147   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:06.577118   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:06.577677   22121 pod_ready.go:93] pod "kube-apiserver-ha-244475-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:40:06.577694   22121 pod_ready.go:82] duration metric: took 400.039517ms for pod "kube-apiserver-ha-244475-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:40:06.577703   22121 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-244475" in "kube-system" namespace to be "Ready" ...
	I0916 10:40:06.772876   22121 request.go:632] Waited for 195.103028ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-244475
	I0916 10:40:06.772951   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-244475
	I0916 10:40:06.772956   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:06.772964   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:06.772969   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:06.776182   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:06.973323   22121 request.go:632] Waited for 196.373099ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-244475
	I0916 10:40:06.973376   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475
	I0916 10:40:06.973381   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:06.973387   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:06.973392   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:06.976489   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:06.977163   22121 pod_ready.go:93] pod "kube-controller-manager-ha-244475" in "kube-system" namespace has status "Ready":"True"
	I0916 10:40:06.977180   22121 pod_ready.go:82] duration metric: took 399.471495ms for pod "kube-controller-manager-ha-244475" in "kube-system" namespace to be "Ready" ...
	I0916 10:40:06.977190   22121 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-244475-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:40:07.173212   22121 request.go:632] Waited for 195.956208ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-244475-m02
	I0916 10:40:07.173293   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-244475-m02
	I0916 10:40:07.173301   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:07.173312   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:07.173319   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:07.177006   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:07.373012   22121 request.go:632] Waited for 195.452852ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:40:07.373136   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:40:07.373147   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:07.373157   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:07.373166   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:07.376520   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:07.376939   22121 pod_ready.go:93] pod "kube-controller-manager-ha-244475-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:40:07.376955   22121 pod_ready.go:82] duration metric: took 399.760125ms for pod "kube-controller-manager-ha-244475-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:40:07.376963   22121 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-crttt" in "kube-system" namespace to be "Ready" ...
	I0916 10:40:07.573324   22121 request.go:632] Waited for 196.271916ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-crttt
	I0916 10:40:07.573394   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-crttt
	I0916 10:40:07.573402   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:07.573413   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:07.573420   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:07.577193   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:07.773425   22121 request.go:632] Waited for 195.35678ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-244475
	I0916 10:40:07.773476   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475
	I0916 10:40:07.773482   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:07.773488   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:07.773492   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:07.776987   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:07.777804   22121 pod_ready.go:93] pod "kube-proxy-crttt" in "kube-system" namespace has status "Ready":"True"
	I0916 10:40:07.777823   22121 pod_ready.go:82] duration metric: took 400.853941ms for pod "kube-proxy-crttt" in "kube-system" namespace to be "Ready" ...
	I0916 10:40:07.777832   22121 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-t454b" in "kube-system" namespace to be "Ready" ...
	I0916 10:40:07.972928   22121 request.go:632] Waited for 195.015591ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-t454b
	I0916 10:40:07.972986   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-t454b
	I0916 10:40:07.972991   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:07.972998   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:07.973004   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:07.976127   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:08.173342   22121 request.go:632] Waited for 196.327773ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:40:08.173412   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:40:08.173420   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:08.173427   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:08.173433   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:08.177112   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:08.177778   22121 pod_ready.go:93] pod "kube-proxy-t454b" in "kube-system" namespace has status "Ready":"True"
	I0916 10:40:08.177799   22121 pod_ready.go:82] duration metric: took 399.960678ms for pod "kube-proxy-t454b" in "kube-system" namespace to be "Ready" ...
	I0916 10:40:08.177812   22121 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-244475" in "kube-system" namespace to be "Ready" ...
	I0916 10:40:08.372853   22121 request.go:632] Waited for 194.970978ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-244475
	I0916 10:40:08.372917   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-244475
	I0916 10:40:08.372922   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:08.372929   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:08.372936   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:08.375975   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:08.572928   22121 request.go:632] Waited for 196.373637ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-244475
	I0916 10:40:08.572977   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475
	I0916 10:40:08.572982   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:08.572989   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:08.572993   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:08.576124   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:08.576671   22121 pod_ready.go:93] pod "kube-scheduler-ha-244475" in "kube-system" namespace has status "Ready":"True"
	I0916 10:40:08.576689   22121 pod_ready.go:82] duration metric: took 398.869844ms for pod "kube-scheduler-ha-244475" in "kube-system" namespace to be "Ready" ...
	I0916 10:40:08.576697   22121 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-244475-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:40:08.773179   22121 request.go:632] Waited for 196.418181ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-244475-m02
	I0916 10:40:08.773233   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-244475-m02
	I0916 10:40:08.773253   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:08.773265   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:08.773280   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:08.776328   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:08.973400   22121 request.go:632] Waited for 196.398623ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:40:08.973450   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:40:08.973455   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:08.973462   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:08.973468   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:08.977143   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:08.977768   22121 pod_ready.go:93] pod "kube-scheduler-ha-244475-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:40:08.977788   22121 pod_ready.go:82] duration metric: took 401.084234ms for pod "kube-scheduler-ha-244475-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:40:08.977801   22121 pod_ready.go:39] duration metric: took 3.200692542s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:40:08.977817   22121 api_server.go:52] waiting for apiserver process to appear ...
	I0916 10:40:08.977871   22121 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:40:09.001036   22121 api_server.go:72] duration metric: took 21.548229005s to wait for apiserver process to appear ...
	I0916 10:40:09.001060   22121 api_server.go:88] waiting for apiserver healthz status ...
	I0916 10:40:09.001082   22121 api_server.go:253] Checking apiserver healthz at https://192.168.39.19:8443/healthz ...
	I0916 10:40:09.007410   22121 api_server.go:279] https://192.168.39.19:8443/healthz returned 200:
	ok
	I0916 10:40:09.007485   22121 round_trippers.go:463] GET https://192.168.39.19:8443/version
	I0916 10:40:09.007496   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:09.007508   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:09.007518   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:09.008301   22121 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0916 10:40:09.008412   22121 api_server.go:141] control plane version: v1.31.1
	I0916 10:40:09.008429   22121 api_server.go:131] duration metric: took 7.361874ms to wait for apiserver health ...
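With the node and system pods Ready, the runner confirms the control plane itself: it pgreps for the kube-apiserver process over SSH, then hits /healthz and /version on the node's own endpoint (the client was switched from the stale VIP host to https://192.168.39.19:8443 earlier in the log). A hedged equivalent of those probes using the kubeconfig credentials rather than anonymous access:

    # Should print "ok" and the v1.31.1 server version reported in the log.
    kubectl --context ha-244475 get --raw /healthz
    kubectl --context ha-244475 version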
	I0916 10:40:09.008439   22121 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 10:40:09.172861   22121 request.go:632] Waited for 164.349636ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I0916 10:40:09.172946   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I0916 10:40:09.172952   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:09.172965   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:09.172969   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:09.177801   22121 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:40:09.182059   22121 system_pods.go:59] 17 kube-system pods found
	I0916 10:40:09.182087   22121 system_pods.go:61] "coredns-7c65d6cfc9-lzrg2" [51962d07-f38a-4db3-86ee-af3d954dbec6] Running
	I0916 10:40:09.182142   22121 system_pods.go:61] "coredns-7c65d6cfc9-m8fd7" [fc549709-ddc0-4684-b377-46d33ef8f03d] Running
	I0916 10:40:09.182160   22121 system_pods.go:61] "etcd-ha-244475" [08595572-facf-419a-93e3-9b0ea1938f08] Running
	I0916 10:40:09.182173   22121 system_pods.go:61] "etcd-ha-244475-m02" [d58c0d1e-ef12-4e50-b4d8-86f60754b93d] Running
	I0916 10:40:09.182179   22121 system_pods.go:61] "kindnet-7v2cl" [764ade4d-cbcd-42b8-9d68-b4ed502de9eb] Running
	I0916 10:40:09.182183   22121 system_pods.go:61] "kindnet-xvp82" [3140a3e7-ac3b-4882-b150-20a313e2f20c] Running
	I0916 10:40:09.182187   22121 system_pods.go:61] "kube-apiserver-ha-244475" [b0ea2226-42de-4488-b8fb-73a6828320fc] Running
	I0916 10:40:09.182191   22121 system_pods.go:61] "kube-apiserver-ha-244475-m02" [1e384f04-33c2-49f1-afc0-48807202a04c] Running
	I0916 10:40:09.182195   22121 system_pods.go:61] "kube-controller-manager-ha-244475" [98883403-0a22-486c-aa3a-a3720a5cbfb7] Running
	I0916 10:40:09.182198   22121 system_pods.go:61] "kube-controller-manager-ha-244475-m02" [9e148533-4562-426b-9e8b-3aead772739b] Running
	I0916 10:40:09.182201   22121 system_pods.go:61] "kube-proxy-crttt" [0c8cad04-2c64-42f9-85e2-5e4fbfe7961d] Running
	I0916 10:40:09.182205   22121 system_pods.go:61] "kube-proxy-t454b" [49b7dda6-9a09-4b7d-8adc-568f2fa10ad6] Running
	I0916 10:40:09.182210   22121 system_pods.go:61] "kube-scheduler-ha-244475" [c9527c08-f10b-4d85-9f72-0d0893297b14] Running
	I0916 10:40:09.182214   22121 system_pods.go:61] "kube-scheduler-ha-244475-m02" [bf332de1-6793-4485-9d93-38368d86c6a5] Running
	I0916 10:40:09.182217   22121 system_pods.go:61] "kube-vip-ha-244475" [94b4d383-a0e8-4686-b108-923c0235f371] Running
	I0916 10:40:09.182221   22121 system_pods.go:61] "kube-vip-ha-244475-m02" [6f0a6023-be76-458b-9344-ff51083a217e] Running
	I0916 10:40:09.182228   22121 system_pods.go:61] "storage-provisioner" [2e1264f7-2197-4821-8238-82fac849b145] Running
	I0916 10:40:09.182236   22121 system_pods.go:74] duration metric: took 173.790059ms to wait for pod list to return data ...
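The "Waited ... due to client-side throttling" messages above come from client-go's default client-side rate limiter (roughly 5 QPS with a burst of 10), not from API priority and fairness. A hedged client-go sketch that lists kube-system pods with a higher limit is shown below; the kubeconfig path is an assumption for illustration.

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Illustrative path; minikube manages its own kubeconfig.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
        if err != nil {
            panic(err)
        }
        // Raising QPS/Burst above the client-go defaults avoids the client-side
        // throttling waits seen in the log.
        cfg.QPS = 50
        cfg.Burst = 100
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            fmt.Println(p.Name, p.Status.Phase)
        }
    }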
	I0916 10:40:09.182248   22121 default_sa.go:34] waiting for default service account to be created ...
	I0916 10:40:09.372607   22121 request.go:632] Waited for 190.269868ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/default/serviceaccounts
	I0916 10:40:09.372663   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/default/serviceaccounts
	I0916 10:40:09.372669   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:09.372683   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:09.372701   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:09.377213   22121 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:40:09.377421   22121 default_sa.go:45] found service account: "default"
	I0916 10:40:09.377440   22121 default_sa.go:55] duration metric: took 195.180856ms for default service account to be created ...
	I0916 10:40:09.377449   22121 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 10:40:09.572867   22121 request.go:632] Waited for 195.351388ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I0916 10:40:09.572951   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I0916 10:40:09.572958   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:09.572968   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:09.572975   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:09.577144   22121 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:40:09.582372   22121 system_pods.go:86] 17 kube-system pods found
	I0916 10:40:09.582396   22121 system_pods.go:89] "coredns-7c65d6cfc9-lzrg2" [51962d07-f38a-4db3-86ee-af3d954dbec6] Running
	I0916 10:40:09.582401   22121 system_pods.go:89] "coredns-7c65d6cfc9-m8fd7" [fc549709-ddc0-4684-b377-46d33ef8f03d] Running
	I0916 10:40:09.582405   22121 system_pods.go:89] "etcd-ha-244475" [08595572-facf-419a-93e3-9b0ea1938f08] Running
	I0916 10:40:09.582409   22121 system_pods.go:89] "etcd-ha-244475-m02" [d58c0d1e-ef12-4e50-b4d8-86f60754b93d] Running
	I0916 10:40:09.582413   22121 system_pods.go:89] "kindnet-7v2cl" [764ade4d-cbcd-42b8-9d68-b4ed502de9eb] Running
	I0916 10:40:09.582417   22121 system_pods.go:89] "kindnet-xvp82" [3140a3e7-ac3b-4882-b150-20a313e2f20c] Running
	I0916 10:40:09.582420   22121 system_pods.go:89] "kube-apiserver-ha-244475" [b0ea2226-42de-4488-b8fb-73a6828320fc] Running
	I0916 10:40:09.582423   22121 system_pods.go:89] "kube-apiserver-ha-244475-m02" [1e384f04-33c2-49f1-afc0-48807202a04c] Running
	I0916 10:40:09.582427   22121 system_pods.go:89] "kube-controller-manager-ha-244475" [98883403-0a22-486c-aa3a-a3720a5cbfb7] Running
	I0916 10:40:09.582430   22121 system_pods.go:89] "kube-controller-manager-ha-244475-m02" [9e148533-4562-426b-9e8b-3aead772739b] Running
	I0916 10:40:09.582433   22121 system_pods.go:89] "kube-proxy-crttt" [0c8cad04-2c64-42f9-85e2-5e4fbfe7961d] Running
	I0916 10:40:09.582436   22121 system_pods.go:89] "kube-proxy-t454b" [49b7dda6-9a09-4b7d-8adc-568f2fa10ad6] Running
	I0916 10:40:09.582439   22121 system_pods.go:89] "kube-scheduler-ha-244475" [c9527c08-f10b-4d85-9f72-0d0893297b14] Running
	I0916 10:40:09.582442   22121 system_pods.go:89] "kube-scheduler-ha-244475-m02" [bf332de1-6793-4485-9d93-38368d86c6a5] Running
	I0916 10:40:09.582445   22121 system_pods.go:89] "kube-vip-ha-244475" [94b4d383-a0e8-4686-b108-923c0235f371] Running
	I0916 10:40:09.582448   22121 system_pods.go:89] "kube-vip-ha-244475-m02" [6f0a6023-be76-458b-9344-ff51083a217e] Running
	I0916 10:40:09.582452   22121 system_pods.go:89] "storage-provisioner" [2e1264f7-2197-4821-8238-82fac849b145] Running
	I0916 10:40:09.582457   22121 system_pods.go:126] duration metric: took 205.002675ms to wait for k8s-apps to be running ...
	I0916 10:40:09.582465   22121 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 10:40:09.582506   22121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:40:09.597644   22121 system_svc.go:56] duration metric: took 15.160872ms WaitForService to wait for kubelet
	I0916 10:40:09.597677   22121 kubeadm.go:582] duration metric: took 22.144873804s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
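The kubelet check above relies on systemctl's exit status rather than its output. A small Go sketch of the same idea, run locally here instead of over SSH as minikube does, could look like:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // --quiet suppresses output; a zero exit status means the unit is active.
        cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
        if err := cmd.Run(); err != nil {
            fmt.Println("kubelet is not active:", err)
            return
        }
        fmt.Println("kubelet is active")
    }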
	I0916 10:40:09.597698   22121 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:40:09.773108   22121 request.go:632] Waited for 175.336097ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes
	I0916 10:40:09.773176   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes
	I0916 10:40:09.773183   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:09.773190   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:09.773195   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:09.776708   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:09.777452   22121 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0916 10:40:09.777477   22121 node_conditions.go:123] node cpu capacity is 2
	I0916 10:40:09.777490   22121 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0916 10:40:09.777495   22121 node_conditions.go:123] node cpu capacity is 2
	I0916 10:40:09.777501   22121 node_conditions.go:105] duration metric: took 179.797275ms to run NodePressure ...
	I0916 10:40:09.777515   22121 start.go:241] waiting for startup goroutines ...
	I0916 10:40:09.777580   22121 start.go:255] writing updated cluster config ...
	I0916 10:40:09.779808   22121 out.go:201] 
	I0916 10:40:09.781239   22121 config.go:182] Loaded profile config "ha-244475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:40:09.781337   22121 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/config.json ...
	I0916 10:40:09.782835   22121 out.go:177] * Starting "ha-244475-m03" control-plane node in "ha-244475" cluster
	I0916 10:40:09.783977   22121 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:40:09.783994   22121 cache.go:56] Caching tarball of preloaded images
	I0916 10:40:09.784082   22121 preload.go:172] Found /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 10:40:09.784094   22121 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 10:40:09.784186   22121 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/config.json ...
	I0916 10:40:09.784355   22121 start.go:360] acquireMachinesLock for ha-244475-m03: {Name:mk413037138532b69f77e412c3960796c09316f1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:40:09.784415   22121 start.go:364] duration metric: took 40.424µs to acquireMachinesLock for "ha-244475-m03"
	I0916 10:40:09.784439   22121 start.go:93] Provisioning new machine with config: &{Name:ha-244475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-244475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 10:40:09.784543   22121 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0916 10:40:09.786219   22121 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0916 10:40:09.786291   22121 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:40:09.786324   22121 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:40:09.801282   22121 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35165
	I0916 10:40:09.801761   22121 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:40:09.802231   22121 main.go:141] libmachine: Using API Version  1
	I0916 10:40:09.802254   22121 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:40:09.802548   22121 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:40:09.802764   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetMachineName
	I0916 10:40:09.802865   22121 main.go:141] libmachine: (ha-244475-m03) Calling .DriverName
	I0916 10:40:09.802989   22121 start.go:159] libmachine.API.Create for "ha-244475" (driver="kvm2")
	I0916 10:40:09.803017   22121 client.go:168] LocalClient.Create starting
	I0916 10:40:09.803051   22121 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem
	I0916 10:40:09.803091   22121 main.go:141] libmachine: Decoding PEM data...
	I0916 10:40:09.803118   22121 main.go:141] libmachine: Parsing certificate...
	I0916 10:40:09.803183   22121 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem
	I0916 10:40:09.803210   22121 main.go:141] libmachine: Decoding PEM data...
	I0916 10:40:09.803224   22121 main.go:141] libmachine: Parsing certificate...
	I0916 10:40:09.803249   22121 main.go:141] libmachine: Running pre-create checks...
	I0916 10:40:09.803261   22121 main.go:141] libmachine: (ha-244475-m03) Calling .PreCreateCheck
	I0916 10:40:09.803404   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetConfigRaw
	I0916 10:40:09.803766   22121 main.go:141] libmachine: Creating machine...
	I0916 10:40:09.803781   22121 main.go:141] libmachine: (ha-244475-m03) Calling .Create
	I0916 10:40:09.803937   22121 main.go:141] libmachine: (ha-244475-m03) Creating KVM machine...
	I0916 10:40:09.805160   22121 main.go:141] libmachine: (ha-244475-m03) DBG | found existing default KVM network
	I0916 10:40:09.805337   22121 main.go:141] libmachine: (ha-244475-m03) DBG | found existing private KVM network mk-ha-244475
	I0916 10:40:09.805472   22121 main.go:141] libmachine: (ha-244475-m03) Setting up store path in /home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m03 ...
	I0916 10:40:09.805493   22121 main.go:141] libmachine: (ha-244475-m03) Building disk image from file:///home/jenkins/minikube-integration/19651-3851/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso
	I0916 10:40:09.805577   22121 main.go:141] libmachine: (ha-244475-m03) DBG | I0916 10:40:09.805472   22888 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 10:40:09.805636   22121 main.go:141] libmachine: (ha-244475-m03) Downloading /home/jenkins/minikube-integration/19651-3851/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19651-3851/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso...
	I0916 10:40:10.039594   22121 main.go:141] libmachine: (ha-244475-m03) DBG | I0916 10:40:10.039469   22888 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m03/id_rsa...
	I0916 10:40:10.482395   22121 main.go:141] libmachine: (ha-244475-m03) DBG | I0916 10:40:10.482296   22888 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m03/ha-244475-m03.rawdisk...
	I0916 10:40:10.482425   22121 main.go:141] libmachine: (ha-244475-m03) DBG | Writing magic tar header
	I0916 10:40:10.482435   22121 main.go:141] libmachine: (ha-244475-m03) DBG | Writing SSH key tar header
	I0916 10:40:10.482442   22121 main.go:141] libmachine: (ha-244475-m03) DBG | I0916 10:40:10.482411   22888 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m03 ...
	I0916 10:40:10.482520   22121 main.go:141] libmachine: (ha-244475-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m03
	I0916 10:40:10.482539   22121 main.go:141] libmachine: (ha-244475-m03) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m03 (perms=drwx------)
	I0916 10:40:10.482546   22121 main.go:141] libmachine: (ha-244475-m03) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851/.minikube/machines (perms=drwxr-xr-x)
	I0916 10:40:10.482562   22121 main.go:141] libmachine: (ha-244475-m03) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851/.minikube (perms=drwxr-xr-x)
	I0916 10:40:10.482573   22121 main.go:141] libmachine: (ha-244475-m03) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851 (perms=drwxrwxr-x)
	I0916 10:40:10.482582   22121 main.go:141] libmachine: (ha-244475-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851/.minikube/machines
	I0916 10:40:10.482591   22121 main.go:141] libmachine: (ha-244475-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0916 10:40:10.482605   22121 main.go:141] libmachine: (ha-244475-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 10:40:10.482619   22121 main.go:141] libmachine: (ha-244475-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851
	I0916 10:40:10.482631   22121 main.go:141] libmachine: (ha-244475-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0916 10:40:10.482639   22121 main.go:141] libmachine: (ha-244475-m03) DBG | Checking permissions on dir: /home/jenkins
	I0916 10:40:10.482649   22121 main.go:141] libmachine: (ha-244475-m03) DBG | Checking permissions on dir: /home
	I0916 10:40:10.482658   22121 main.go:141] libmachine: (ha-244475-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0916 10:40:10.482668   22121 main.go:141] libmachine: (ha-244475-m03) Creating domain...
	I0916 10:40:10.482675   22121 main.go:141] libmachine: (ha-244475-m03) DBG | Skipping /home - not owner
	I0916 10:40:10.483703   22121 main.go:141] libmachine: (ha-244475-m03) define libvirt domain using xml: 
	I0916 10:40:10.483728   22121 main.go:141] libmachine: (ha-244475-m03) <domain type='kvm'>
	I0916 10:40:10.483739   22121 main.go:141] libmachine: (ha-244475-m03)   <name>ha-244475-m03</name>
	I0916 10:40:10.483746   22121 main.go:141] libmachine: (ha-244475-m03)   <memory unit='MiB'>2200</memory>
	I0916 10:40:10.483755   22121 main.go:141] libmachine: (ha-244475-m03)   <vcpu>2</vcpu>
	I0916 10:40:10.483762   22121 main.go:141] libmachine: (ha-244475-m03)   <features>
	I0916 10:40:10.483767   22121 main.go:141] libmachine: (ha-244475-m03)     <acpi/>
	I0916 10:40:10.483774   22121 main.go:141] libmachine: (ha-244475-m03)     <apic/>
	I0916 10:40:10.483780   22121 main.go:141] libmachine: (ha-244475-m03)     <pae/>
	I0916 10:40:10.483786   22121 main.go:141] libmachine: (ha-244475-m03)     
	I0916 10:40:10.483791   22121 main.go:141] libmachine: (ha-244475-m03)   </features>
	I0916 10:40:10.483799   22121 main.go:141] libmachine: (ha-244475-m03)   <cpu mode='host-passthrough'>
	I0916 10:40:10.483821   22121 main.go:141] libmachine: (ha-244475-m03)   
	I0916 10:40:10.483839   22121 main.go:141] libmachine: (ha-244475-m03)   </cpu>
	I0916 10:40:10.483851   22121 main.go:141] libmachine: (ha-244475-m03)   <os>
	I0916 10:40:10.483859   22121 main.go:141] libmachine: (ha-244475-m03)     <type>hvm</type>
	I0916 10:40:10.483867   22121 main.go:141] libmachine: (ha-244475-m03)     <boot dev='cdrom'/>
	I0916 10:40:10.483882   22121 main.go:141] libmachine: (ha-244475-m03)     <boot dev='hd'/>
	I0916 10:40:10.483893   22121 main.go:141] libmachine: (ha-244475-m03)     <bootmenu enable='no'/>
	I0916 10:40:10.483900   22121 main.go:141] libmachine: (ha-244475-m03)   </os>
	I0916 10:40:10.483911   22121 main.go:141] libmachine: (ha-244475-m03)   <devices>
	I0916 10:40:10.483918   22121 main.go:141] libmachine: (ha-244475-m03)     <disk type='file' device='cdrom'>
	I0916 10:40:10.483926   22121 main.go:141] libmachine: (ha-244475-m03)       <source file='/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m03/boot2docker.iso'/>
	I0916 10:40:10.483933   22121 main.go:141] libmachine: (ha-244475-m03)       <target dev='hdc' bus='scsi'/>
	I0916 10:40:10.483938   22121 main.go:141] libmachine: (ha-244475-m03)       <readonly/>
	I0916 10:40:10.483942   22121 main.go:141] libmachine: (ha-244475-m03)     </disk>
	I0916 10:40:10.483948   22121 main.go:141] libmachine: (ha-244475-m03)     <disk type='file' device='disk'>
	I0916 10:40:10.483956   22121 main.go:141] libmachine: (ha-244475-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0916 10:40:10.483963   22121 main.go:141] libmachine: (ha-244475-m03)       <source file='/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m03/ha-244475-m03.rawdisk'/>
	I0916 10:40:10.483975   22121 main.go:141] libmachine: (ha-244475-m03)       <target dev='hda' bus='virtio'/>
	I0916 10:40:10.483985   22121 main.go:141] libmachine: (ha-244475-m03)     </disk>
	I0916 10:40:10.483992   22121 main.go:141] libmachine: (ha-244475-m03)     <interface type='network'>
	I0916 10:40:10.484004   22121 main.go:141] libmachine: (ha-244475-m03)       <source network='mk-ha-244475'/>
	I0916 10:40:10.484015   22121 main.go:141] libmachine: (ha-244475-m03)       <model type='virtio'/>
	I0916 10:40:10.484023   22121 main.go:141] libmachine: (ha-244475-m03)     </interface>
	I0916 10:40:10.484028   22121 main.go:141] libmachine: (ha-244475-m03)     <interface type='network'>
	I0916 10:40:10.484035   22121 main.go:141] libmachine: (ha-244475-m03)       <source network='default'/>
	I0916 10:40:10.484040   22121 main.go:141] libmachine: (ha-244475-m03)       <model type='virtio'/>
	I0916 10:40:10.484046   22121 main.go:141] libmachine: (ha-244475-m03)     </interface>
	I0916 10:40:10.484052   22121 main.go:141] libmachine: (ha-244475-m03)     <serial type='pty'>
	I0916 10:40:10.484059   22121 main.go:141] libmachine: (ha-244475-m03)       <target port='0'/>
	I0916 10:40:10.484063   22121 main.go:141] libmachine: (ha-244475-m03)     </serial>
	I0916 10:40:10.484072   22121 main.go:141] libmachine: (ha-244475-m03)     <console type='pty'>
	I0916 10:40:10.484087   22121 main.go:141] libmachine: (ha-244475-m03)       <target type='serial' port='0'/>
	I0916 10:40:10.484099   22121 main.go:141] libmachine: (ha-244475-m03)     </console>
	I0916 10:40:10.484108   22121 main.go:141] libmachine: (ha-244475-m03)     <rng model='virtio'>
	I0916 10:40:10.484116   22121 main.go:141] libmachine: (ha-244475-m03)       <backend model='random'>/dev/random</backend>
	I0916 10:40:10.484122   22121 main.go:141] libmachine: (ha-244475-m03)     </rng>
	I0916 10:40:10.484126   22121 main.go:141] libmachine: (ha-244475-m03)     
	I0916 10:40:10.484132   22121 main.go:141] libmachine: (ha-244475-m03)     
	I0916 10:40:10.484137   22121 main.go:141] libmachine: (ha-244475-m03)   </devices>
	I0916 10:40:10.484143   22121 main.go:141] libmachine: (ha-244475-m03) </domain>
	I0916 10:40:10.484163   22121 main.go:141] libmachine: (ha-244475-m03) 
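The block above is the libvirt domain XML the kvm2 driver defines for the new node: boot from the boot2docker ISO, a raw disk, two virtio NICs (the private mk-ha-244475 network plus the default network), a serial console, and a virtio RNG. As a rough sketch of the same step, the snippet below defines a trimmed-down domain by shelling out to virsh; the real driver goes through the libvirt API, and this minimal XML would still need the ISO, disk and network devices added before it could boot. Names and paths are illustrative.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        xml := `<domain type='kvm'>
      <name>demo-vm</name>
      <memory unit='MiB'>2200</memory>
      <vcpu>2</vcpu>
      <os><type>hvm</type><boot dev='hd'/></os>
    </domain>`
        f, err := os.CreateTemp("", "domain-*.xml")
        if err != nil {
            panic(err)
        }
        defer os.Remove(f.Name())
        if _, err := f.WriteString(xml); err != nil {
            panic(err)
        }
        f.Close()
        // Equivalent to: virsh --connect qemu:///system define <file>
        out, err := exec.Command("virsh", "--connect", "qemu:///system", "define", f.Name()).CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            fmt.Println("virsh define failed:", err)
        }
    }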
	I0916 10:40:10.491278   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:3c:e8:d0 in network default
	I0916 10:40:10.491751   22121 main.go:141] libmachine: (ha-244475-m03) Ensuring networks are active...
	I0916 10:40:10.491768   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:10.492390   22121 main.go:141] libmachine: (ha-244475-m03) Ensuring network default is active
	I0916 10:40:10.492675   22121 main.go:141] libmachine: (ha-244475-m03) Ensuring network mk-ha-244475 is active
	I0916 10:40:10.493062   22121 main.go:141] libmachine: (ha-244475-m03) Getting domain xml...
	I0916 10:40:10.493756   22121 main.go:141] libmachine: (ha-244475-m03) Creating domain...
	I0916 10:40:11.721484   22121 main.go:141] libmachine: (ha-244475-m03) Waiting to get IP...
	I0916 10:40:11.722386   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:11.722825   22121 main.go:141] libmachine: (ha-244475-m03) DBG | unable to find current IP address of domain ha-244475-m03 in network mk-ha-244475
	I0916 10:40:11.722864   22121 main.go:141] libmachine: (ha-244475-m03) DBG | I0916 10:40:11.722811   22888 retry.go:31] will retry after 192.331481ms: waiting for machine to come up
	I0916 10:40:11.917419   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:11.917971   22121 main.go:141] libmachine: (ha-244475-m03) DBG | unable to find current IP address of domain ha-244475-m03 in network mk-ha-244475
	I0916 10:40:11.918005   22121 main.go:141] libmachine: (ha-244475-m03) DBG | I0916 10:40:11.917942   22888 retry.go:31] will retry after 286.90636ms: waiting for machine to come up
	I0916 10:40:12.206353   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:12.206819   22121 main.go:141] libmachine: (ha-244475-m03) DBG | unable to find current IP address of domain ha-244475-m03 in network mk-ha-244475
	I0916 10:40:12.206842   22121 main.go:141] libmachine: (ha-244475-m03) DBG | I0916 10:40:12.206741   22888 retry.go:31] will retry after 454.064197ms: waiting for machine to come up
	I0916 10:40:12.662050   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:12.662526   22121 main.go:141] libmachine: (ha-244475-m03) DBG | unable to find current IP address of domain ha-244475-m03 in network mk-ha-244475
	I0916 10:40:12.662551   22121 main.go:141] libmachine: (ha-244475-m03) DBG | I0916 10:40:12.662476   22888 retry.go:31] will retry after 438.548468ms: waiting for machine to come up
	I0916 10:40:13.103062   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:13.103558   22121 main.go:141] libmachine: (ha-244475-m03) DBG | unable to find current IP address of domain ha-244475-m03 in network mk-ha-244475
	I0916 10:40:13.103595   22121 main.go:141] libmachine: (ha-244475-m03) DBG | I0916 10:40:13.103500   22888 retry.go:31] will retry after 487.216711ms: waiting for machine to come up
	I0916 10:40:13.592041   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:13.592483   22121 main.go:141] libmachine: (ha-244475-m03) DBG | unable to find current IP address of domain ha-244475-m03 in network mk-ha-244475
	I0916 10:40:13.592504   22121 main.go:141] libmachine: (ha-244475-m03) DBG | I0916 10:40:13.592433   22888 retry.go:31] will retry after 609.860378ms: waiting for machine to come up
	I0916 10:40:14.204217   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:14.204729   22121 main.go:141] libmachine: (ha-244475-m03) DBG | unable to find current IP address of domain ha-244475-m03 in network mk-ha-244475
	I0916 10:40:14.204756   22121 main.go:141] libmachine: (ha-244475-m03) DBG | I0916 10:40:14.204687   22888 retry.go:31] will retry after 1.08416226s: waiting for machine to come up
	I0916 10:40:15.290010   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:15.290367   22121 main.go:141] libmachine: (ha-244475-m03) DBG | unable to find current IP address of domain ha-244475-m03 in network mk-ha-244475
	I0916 10:40:15.290395   22121 main.go:141] libmachine: (ha-244475-m03) DBG | I0916 10:40:15.290306   22888 retry.go:31] will retry after 1.14272633s: waiting for machine to come up
	I0916 10:40:16.434131   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:16.434447   22121 main.go:141] libmachine: (ha-244475-m03) DBG | unable to find current IP address of domain ha-244475-m03 in network mk-ha-244475
	I0916 10:40:16.434482   22121 main.go:141] libmachine: (ha-244475-m03) DBG | I0916 10:40:16.434408   22888 retry.go:31] will retry after 1.591492555s: waiting for machine to come up
	I0916 10:40:18.027328   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:18.027798   22121 main.go:141] libmachine: (ha-244475-m03) DBG | unable to find current IP address of domain ha-244475-m03 in network mk-ha-244475
	I0916 10:40:18.027827   22121 main.go:141] libmachine: (ha-244475-m03) DBG | I0916 10:40:18.027750   22888 retry.go:31] will retry after 1.626003631s: waiting for machine to come up
	I0916 10:40:19.655097   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:19.655517   22121 main.go:141] libmachine: (ha-244475-m03) DBG | unable to find current IP address of domain ha-244475-m03 in network mk-ha-244475
	I0916 10:40:19.655538   22121 main.go:141] libmachine: (ha-244475-m03) DBG | I0916 10:40:19.655472   22888 retry.go:31] will retry after 2.828805673s: waiting for machine to come up
	I0916 10:40:22.487722   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:22.488228   22121 main.go:141] libmachine: (ha-244475-m03) DBG | unable to find current IP address of domain ha-244475-m03 in network mk-ha-244475
	I0916 10:40:22.488249   22121 main.go:141] libmachine: (ha-244475-m03) DBG | I0916 10:40:22.488180   22888 retry.go:31] will retry after 2.947934423s: waiting for machine to come up
	I0916 10:40:25.437771   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:25.438163   22121 main.go:141] libmachine: (ha-244475-m03) DBG | unable to find current IP address of domain ha-244475-m03 in network mk-ha-244475
	I0916 10:40:25.438187   22121 main.go:141] libmachine: (ha-244475-m03) DBG | I0916 10:40:25.438126   22888 retry.go:31] will retry after 4.191813461s: waiting for machine to come up
	I0916 10:40:29.634188   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:29.634591   22121 main.go:141] libmachine: (ha-244475-m03) DBG | unable to find current IP address of domain ha-244475-m03 in network mk-ha-244475
	I0916 10:40:29.634611   22121 main.go:141] libmachine: (ha-244475-m03) DBG | I0916 10:40:29.634550   22888 retry.go:31] will retry after 4.912264836s: waiting for machine to come up
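The retry.go lines above poll the DHCP leases of the mk-ha-244475 network with a growing, jittered delay until the new domain obtains an address. A generic Go sketch of that pattern follows; the lease lookup is stubbed out and the timings are illustrative.

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // lookupIP stands in for the driver's DHCP-lease lookup; this stub "finds"
    // an address after a few attempts purely for illustration.
    func lookupIP(attempt int) (string, bool) {
        if attempt >= 5 {
            return "192.168.39.127", true
        }
        return "", false
    }

    func main() {
        delay := 200 * time.Millisecond
        for attempt := 1; ; attempt++ {
            if ip, ok := lookupIP(attempt); ok {
                fmt.Println("got IP:", ip)
                return
            }
            // Jittered, growing delay, similar in spirit to the retry.go waits above.
            sleep := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("attempt %d: no lease yet, retrying after %v\n", attempt, sleep)
            time.Sleep(sleep)
            delay = delay * 3 / 2
        }
    }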
	I0916 10:40:34.550076   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:34.550468   22121 main.go:141] libmachine: (ha-244475-m03) Found IP for machine: 192.168.39.127
	I0916 10:40:34.550500   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has current primary IP address 192.168.39.127 and MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:34.550516   22121 main.go:141] libmachine: (ha-244475-m03) Reserving static IP address...
	I0916 10:40:34.550823   22121 main.go:141] libmachine: (ha-244475-m03) DBG | unable to find host DHCP lease matching {name: "ha-244475-m03", mac: "52:54:00:e0:15:60", ip: "192.168.39.127"} in network mk-ha-244475
	I0916 10:40:34.624068   22121 main.go:141] libmachine: (ha-244475-m03) Reserved static IP address: 192.168.39.127
	I0916 10:40:34.624092   22121 main.go:141] libmachine: (ha-244475-m03) Waiting for SSH to be available...
	I0916 10:40:34.624101   22121 main.go:141] libmachine: (ha-244475-m03) DBG | Getting to WaitForSSH function...
	I0916 10:40:34.626630   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:34.627078   22121 main.go:141] libmachine: (ha-244475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:15:60", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:40:24 +0000 UTC Type:0 Mac:52:54:00:e0:15:60 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e0:15:60}
	I0916 10:40:34.627178   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:34.627199   22121 main.go:141] libmachine: (ha-244475-m03) DBG | Using SSH client type: external
	I0916 10:40:34.627216   22121 main.go:141] libmachine: (ha-244475-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m03/id_rsa (-rw-------)
	I0916 10:40:34.627249   22121 main.go:141] libmachine: (ha-244475-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.127 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0916 10:40:34.627256   22121 main.go:141] libmachine: (ha-244475-m03) DBG | About to run SSH command:
	I0916 10:40:34.627270   22121 main.go:141] libmachine: (ha-244475-m03) DBG | exit 0
	I0916 10:40:34.749330   22121 main.go:141] libmachine: (ha-244475-m03) DBG | SSH cmd err, output: <nil>: 
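WaitForSSH above shells out to the system ssh binary with host-key checking disabled and runs "exit 0"; a zero exit status means the guest's SSH daemon is ready. A minimal sketch of the same probe, with host, user and key path as placeholder assumptions:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        args := []string{
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "ConnectTimeout=10",
            "-i", "/home/jenkins/.minikube/machines/demo/id_rsa",
            "docker@192.168.39.127",
            "exit", "0",
        }
        if err := exec.Command("ssh", args...).Run(); err != nil {
            fmt.Println("SSH not ready yet:", err)
            return
        }
        fmt.Println("SSH is available")
    }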
	I0916 10:40:34.749611   22121 main.go:141] libmachine: (ha-244475-m03) KVM machine creation complete!
	I0916 10:40:34.749933   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetConfigRaw
	I0916 10:40:34.750501   22121 main.go:141] libmachine: (ha-244475-m03) Calling .DriverName
	I0916 10:40:34.750684   22121 main.go:141] libmachine: (ha-244475-m03) Calling .DriverName
	I0916 10:40:34.750811   22121 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0916 10:40:34.750833   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetState
	I0916 10:40:34.752727   22121 main.go:141] libmachine: Detecting operating system of created instance...
	I0916 10:40:34.752744   22121 main.go:141] libmachine: Waiting for SSH to be available...
	I0916 10:40:34.752751   22121 main.go:141] libmachine: Getting to WaitForSSH function...
	I0916 10:40:34.752759   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHHostname
	I0916 10:40:34.755291   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:34.755682   22121 main.go:141] libmachine: (ha-244475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:15:60", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:40:24 +0000 UTC Type:0 Mac:52:54:00:e0:15:60 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-244475-m03 Clientid:01:52:54:00:e0:15:60}
	I0916 10:40:34.755717   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:34.755865   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHPort
	I0916 10:40:34.756023   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHKeyPath
	I0916 10:40:34.756183   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHKeyPath
	I0916 10:40:34.756327   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHUsername
	I0916 10:40:34.756485   22121 main.go:141] libmachine: Using SSH client type: native
	I0916 10:40:34.756665   22121 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0916 10:40:34.756675   22121 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0916 10:40:34.856271   22121 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 10:40:34.856293   22121 main.go:141] libmachine: Detecting the provisioner...
	I0916 10:40:34.856300   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHHostname
	I0916 10:40:34.859855   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:34.860190   22121 main.go:141] libmachine: (ha-244475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:15:60", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:40:24 +0000 UTC Type:0 Mac:52:54:00:e0:15:60 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-244475-m03 Clientid:01:52:54:00:e0:15:60}
	I0916 10:40:34.860221   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:34.860431   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHPort
	I0916 10:40:34.860594   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHKeyPath
	I0916 10:40:34.860766   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHKeyPath
	I0916 10:40:34.860894   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHUsername
	I0916 10:40:34.861049   22121 main.go:141] libmachine: Using SSH client type: native
	I0916 10:40:34.861260   22121 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0916 10:40:34.861271   22121 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0916 10:40:34.970117   22121 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0916 10:40:34.970189   22121 main.go:141] libmachine: found compatible host: buildroot
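Provisioner detection above keys off the ID and VERSION fields of /etc/os-release. A short sketch of parsing that file (field names per os-release(5)):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        f, err := os.Open("/etc/os-release")
        if err != nil {
            panic(err)
        }
        defer f.Close()
        info := map[string]string{}
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            // Lines look like KEY="value" or KEY=value.
            if k, v, ok := strings.Cut(sc.Text(), "="); ok {
                info[k] = strings.Trim(v, `"`)
            }
        }
        fmt.Println("ID:", info["ID"], "VERSION_ID:", info["VERSION_ID"])
        if info["ID"] == "buildroot" {
            fmt.Println("would pick the buildroot provisioner")
        }
    }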
	I0916 10:40:34.970202   22121 main.go:141] libmachine: Provisioning with buildroot...
	I0916 10:40:34.970213   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetMachineName
	I0916 10:40:34.970470   22121 buildroot.go:166] provisioning hostname "ha-244475-m03"
	I0916 10:40:34.970497   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetMachineName
	I0916 10:40:34.970663   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHHostname
	I0916 10:40:34.973291   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:34.973662   22121 main.go:141] libmachine: (ha-244475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:15:60", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:40:24 +0000 UTC Type:0 Mac:52:54:00:e0:15:60 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-244475-m03 Clientid:01:52:54:00:e0:15:60}
	I0916 10:40:34.973691   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:34.973816   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHPort
	I0916 10:40:34.973997   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHKeyPath
	I0916 10:40:34.974137   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHKeyPath
	I0916 10:40:34.974267   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHUsername
	I0916 10:40:34.974444   22121 main.go:141] libmachine: Using SSH client type: native
	I0916 10:40:34.974644   22121 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0916 10:40:34.974660   22121 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-244475-m03 && echo "ha-244475-m03" | sudo tee /etc/hostname
	I0916 10:40:35.095518   22121 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-244475-m03
	
	I0916 10:40:35.095558   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHHostname
	I0916 10:40:35.098544   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:35.098924   22121 main.go:141] libmachine: (ha-244475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:15:60", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:40:24 +0000 UTC Type:0 Mac:52:54:00:e0:15:60 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-244475-m03 Clientid:01:52:54:00:e0:15:60}
	I0916 10:40:35.098964   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:35.099171   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHPort
	I0916 10:40:35.099391   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHKeyPath
	I0916 10:40:35.099555   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHKeyPath
	I0916 10:40:35.099700   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHUsername
	I0916 10:40:35.099862   22121 main.go:141] libmachine: Using SSH client type: native
	I0916 10:40:35.100037   22121 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0916 10:40:35.100059   22121 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-244475-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-244475-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-244475-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:40:35.210957   22121 main.go:141] libmachine: SSH cmd err, output: <nil>: 
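The shell fragment above makes the new hostname resolve locally by rewriting or appending the 127.0.1.1 entry in /etc/hosts. As an illustration only, the Go sketch below builds the same script for an arbitrary hostname; in minikube the generated script is executed over SSH on the node.

    package main

    import "fmt"

    func main() {
        name := "ha-244475-m03" // placeholder hostname
        script := fmt.Sprintf(`
    if ! grep -xq '.*\s%s' /etc/hosts; then
        if grep -xq '127.0.1.1\s.*' /etc/hosts; then
            sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %s/g' /etc/hosts;
        else
            echo '127.0.1.1 %s' | sudo tee -a /etc/hosts;
        fi
    fi`, name, name, name)
        fmt.Println(script)
    }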
	I0916 10:40:35.210985   22121 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3851/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3851/.minikube}
	I0916 10:40:35.211006   22121 buildroot.go:174] setting up certificates
	I0916 10:40:35.211018   22121 provision.go:84] configureAuth start
	I0916 10:40:35.211028   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetMachineName
	I0916 10:40:35.211274   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetIP
	I0916 10:40:35.213869   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:35.214151   22121 main.go:141] libmachine: (ha-244475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:15:60", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:40:24 +0000 UTC Type:0 Mac:52:54:00:e0:15:60 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-244475-m03 Clientid:01:52:54:00:e0:15:60}
	I0916 10:40:35.214179   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:35.214333   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHHostname
	I0916 10:40:35.216656   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:35.217068   22121 main.go:141] libmachine: (ha-244475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:15:60", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:40:24 +0000 UTC Type:0 Mac:52:54:00:e0:15:60 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-244475-m03 Clientid:01:52:54:00:e0:15:60}
	I0916 10:40:35.217094   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:35.217230   22121 provision.go:143] copyHostCerts
	I0916 10:40:35.217262   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem
	I0916 10:40:35.217292   22121 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem, removing ...
	I0916 10:40:35.217301   22121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem
	I0916 10:40:35.217370   22121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem (1082 bytes)
	I0916 10:40:35.217472   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem
	I0916 10:40:35.217491   22121 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem, removing ...
	I0916 10:40:35.217498   22121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem
	I0916 10:40:35.217524   22121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem (1123 bytes)
	I0916 10:40:35.217564   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem
	I0916 10:40:35.217581   22121 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem, removing ...
	I0916 10:40:35.217587   22121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem
	I0916 10:40:35.217606   22121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem (1679 bytes)
	I0916 10:40:35.217660   22121 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem org=jenkins.ha-244475-m03 san=[127.0.0.1 192.168.39.127 ha-244475-m03 localhost minikube]
	I0916 10:40:35.412945   22121 provision.go:177] copyRemoteCerts
	I0916 10:40:35.412999   22121 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:40:35.413023   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHHostname
	I0916 10:40:35.415370   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:35.415731   22121 main.go:141] libmachine: (ha-244475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:15:60", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:40:24 +0000 UTC Type:0 Mac:52:54:00:e0:15:60 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-244475-m03 Clientid:01:52:54:00:e0:15:60}
	I0916 10:40:35.415761   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:35.415904   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHPort
	I0916 10:40:35.416091   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHKeyPath
	I0916 10:40:35.416250   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHUsername
	I0916 10:40:35.416351   22121 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m03/id_rsa Username:docker}
	I0916 10:40:35.501393   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 10:40:35.501489   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 10:40:35.529014   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 10:40:35.529098   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 10:40:35.555006   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 10:40:35.555088   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 10:40:35.580082   22121 provision.go:87] duration metric: took 369.052998ms to configureAuth
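configureAuth above copies the CA material from the host, issues a server certificate whose SANs cover 127.0.0.1, the node IP, the hostname, localhost and minikube, and scp's the results into /etc/docker on the guest. The sketch below issues a comparable certificate with Go's standard library; it self-signs rather than signing with the minikube CA, purely to stay short, and the 26280h lifetime mirrors the CertExpiration value in the profile config.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-244475-m03"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.127")},
            DNSNames:     []string{"ha-244475-m03", "localhost", "minikube"},
        }
        // Self-signed here (template doubles as parent); minikube signs with its CA instead.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }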
	I0916 10:40:35.580114   22121 buildroot.go:189] setting minikube options for container-runtime
	I0916 10:40:35.580375   22121 config.go:182] Loaded profile config "ha-244475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:40:35.580459   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHHostname
	I0916 10:40:35.582981   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:35.583302   22121 main.go:141] libmachine: (ha-244475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:15:60", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:40:24 +0000 UTC Type:0 Mac:52:54:00:e0:15:60 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-244475-m03 Clientid:01:52:54:00:e0:15:60}
	I0916 10:40:35.583338   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:35.583522   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHPort
	I0916 10:40:35.583678   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHKeyPath
	I0916 10:40:35.583829   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHKeyPath
	I0916 10:40:35.583953   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHUsername
	I0916 10:40:35.584080   22121 main.go:141] libmachine: Using SSH client type: native
	I0916 10:40:35.584280   22121 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0916 10:40:35.584295   22121 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 10:40:35.804379   22121 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
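The container-runtime step above drops a CRIO_MINIKUBE_OPTIONS line into /etc/sysconfig/crio.minikube (marking the service CIDR 10.96.0.0/12 as an insecure registry) and restarts CRI-O. A sketch of running that same one-liner from Go follows; minikube executes it over SSH, and it needs root on the guest.

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        script := `sudo mkdir -p /etc/sysconfig && printf %s "
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`
        out, err := exec.Command("bash", "-c", script).CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            fmt.Println("failed:", err)
        }
    }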
	
	I0916 10:40:35.804403   22121 main.go:141] libmachine: Checking connection to Docker...
	I0916 10:40:35.804410   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetURL
	I0916 10:40:35.805786   22121 main.go:141] libmachine: (ha-244475-m03) DBG | Using libvirt version 6000000
	I0916 10:40:35.807818   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:35.808192   22121 main.go:141] libmachine: (ha-244475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:15:60", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:40:24 +0000 UTC Type:0 Mac:52:54:00:e0:15:60 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-244475-m03 Clientid:01:52:54:00:e0:15:60}
	I0916 10:40:35.808220   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:35.808371   22121 main.go:141] libmachine: Docker is up and running!
	I0916 10:40:35.808384   22121 main.go:141] libmachine: Reticulating splines...
	I0916 10:40:35.808390   22121 client.go:171] duration metric: took 26.005363468s to LocalClient.Create
	I0916 10:40:35.808410   22121 start.go:167] duration metric: took 26.005420857s to libmachine.API.Create "ha-244475"
	I0916 10:40:35.808417   22121 start.go:293] postStartSetup for "ha-244475-m03" (driver="kvm2")
	I0916 10:40:35.808441   22121 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:40:35.808457   22121 main.go:141] libmachine: (ha-244475-m03) Calling .DriverName
	I0916 10:40:35.808682   22121 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:40:35.808703   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHHostname
	I0916 10:40:35.810634   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:35.810894   22121 main.go:141] libmachine: (ha-244475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:15:60", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:40:24 +0000 UTC Type:0 Mac:52:54:00:e0:15:60 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-244475-m03 Clientid:01:52:54:00:e0:15:60}
	I0916 10:40:35.810919   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:35.811023   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHPort
	I0916 10:40:35.811207   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHKeyPath
	I0916 10:40:35.811350   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHUsername
	I0916 10:40:35.811483   22121 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m03/id_rsa Username:docker}
	I0916 10:40:35.891724   22121 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:40:35.896159   22121 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 10:40:35.896180   22121 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3851/.minikube/addons for local assets ...
	I0916 10:40:35.896236   22121 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3851/.minikube/files for local assets ...
	I0916 10:40:35.896302   22121 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem -> 112032.pem in /etc/ssl/certs
	I0916 10:40:35.896311   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem -> /etc/ssl/certs/112032.pem
	I0916 10:40:35.896394   22121 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 10:40:35.906252   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem --> /etc/ssl/certs/112032.pem (1708 bytes)
	I0916 10:40:35.931184   22121 start.go:296] duration metric: took 122.750991ms for postStartSetup
	I0916 10:40:35.931237   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetConfigRaw
	I0916 10:40:35.931826   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetIP
	I0916 10:40:35.934282   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:35.934635   22121 main.go:141] libmachine: (ha-244475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:15:60", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:40:24 +0000 UTC Type:0 Mac:52:54:00:e0:15:60 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-244475-m03 Clientid:01:52:54:00:e0:15:60}
	I0916 10:40:35.934663   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:35.934920   22121 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/config.json ...
	I0916 10:40:35.935111   22121 start.go:128] duration metric: took 26.150558333s to createHost
	I0916 10:40:35.935133   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHHostname
	I0916 10:40:35.937290   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:35.937626   22121 main.go:141] libmachine: (ha-244475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:15:60", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:40:24 +0000 UTC Type:0 Mac:52:54:00:e0:15:60 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-244475-m03 Clientid:01:52:54:00:e0:15:60}
	I0916 10:40:35.937654   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:35.937784   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHPort
	I0916 10:40:35.937961   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHKeyPath
	I0916 10:40:35.938124   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHKeyPath
	I0916 10:40:35.938226   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHUsername
	I0916 10:40:35.938360   22121 main.go:141] libmachine: Using SSH client type: native
	I0916 10:40:35.938514   22121 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0916 10:40:35.938523   22121 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 10:40:36.038169   22121 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726483236.017253853
	
	I0916 10:40:36.038199   22121 fix.go:216] guest clock: 1726483236.017253853
	I0916 10:40:36.038211   22121 fix.go:229] Guest: 2024-09-16 10:40:36.017253853 +0000 UTC Remote: 2024-09-16 10:40:35.935121788 +0000 UTC m=+143.767887540 (delta=82.132065ms)
	I0916 10:40:36.038234   22121 fix.go:200] guest clock delta is within tolerance: 82.132065ms
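Annotation: the fix.go lines above compare the guest VM's clock (probed with `date +%s.%N` over SSH) against the host's reference time and report the delta, presumably resynchronizing only when it drifts beyond a tolerance. Below is a minimal, stand-alone sketch of that comparison; the helper name and the one-second tolerance are assumptions for illustration, not minikube's actual fix.go code, and the example values are the ones from the log line above.

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // guestClockDelta parses the output of `date +%s.%N` (assumed to carry a
    // 9-digit nanosecond field) and returns the absolute offset from local.
    func guestClockDelta(dateOutput string, local time.Time) (time.Duration, error) {
        parts := strings.SplitN(strings.TrimSpace(dateOutput), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return 0, err
        }
        var nsec int64
        if len(parts) == 2 {
            nsec, err = strconv.ParseInt(parts[1], 10, 64)
            if err != nil {
                return 0, err
            }
        }
        delta := time.Unix(sec, nsec).Sub(local)
        if delta < 0 {
            delta = -delta
        }
        return delta, nil
    }

    func main() {
        // Values taken from the log above: guest 1726483236.017253853 vs. the
        // local timestamp recorded just before the SSH round trip.
        local := time.Date(2024, 9, 16, 10, 40, 35, 935121788, time.UTC)
        delta, err := guestClockDelta("1726483236.017253853", local)
        if err != nil {
            panic(err)
        }
        const tolerance = time.Second // assumed tolerance, for the sketch only
        fmt.Printf("delta=%v within tolerance=%v: %v\n", delta, tolerance, delta <= tolerance)
    }

Running this prints a delta of 82.132065ms, matching the log line above.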
	I0916 10:40:36.038242   22121 start.go:83] releasing machines lock for "ha-244475-m03", held for 26.253815031s
	I0916 10:40:36.038269   22121 main.go:141] libmachine: (ha-244475-m03) Calling .DriverName
	I0916 10:40:36.038526   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetIP
	I0916 10:40:36.041199   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:36.041528   22121 main.go:141] libmachine: (ha-244475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:15:60", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:40:24 +0000 UTC Type:0 Mac:52:54:00:e0:15:60 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-244475-m03 Clientid:01:52:54:00:e0:15:60}
	I0916 10:40:36.041557   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:36.043873   22121 out.go:177] * Found network options:
	I0916 10:40:36.045262   22121 out.go:177]   - NO_PROXY=192.168.39.19,192.168.39.222
	W0916 10:40:36.046405   22121 proxy.go:119] fail to check proxy env: Error ip not in block
	W0916 10:40:36.046427   22121 proxy.go:119] fail to check proxy env: Error ip not in block
	I0916 10:40:36.046443   22121 main.go:141] libmachine: (ha-244475-m03) Calling .DriverName
	I0916 10:40:36.046990   22121 main.go:141] libmachine: (ha-244475-m03) Calling .DriverName
	I0916 10:40:36.047176   22121 main.go:141] libmachine: (ha-244475-m03) Calling .DriverName
	I0916 10:40:36.047272   22121 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:40:36.047304   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHHostname
	W0916 10:40:36.047328   22121 proxy.go:119] fail to check proxy env: Error ip not in block
	W0916 10:40:36.047347   22121 proxy.go:119] fail to check proxy env: Error ip not in block
	I0916 10:40:36.047416   22121 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 10:40:36.047437   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHHostname
	I0916 10:40:36.049999   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:36.050208   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:36.050428   22121 main.go:141] libmachine: (ha-244475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:15:60", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:40:24 +0000 UTC Type:0 Mac:52:54:00:e0:15:60 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-244475-m03 Clientid:01:52:54:00:e0:15:60}
	I0916 10:40:36.050455   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:36.050554   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHPort
	I0916 10:40:36.050601   22121 main.go:141] libmachine: (ha-244475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:15:60", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:40:24 +0000 UTC Type:0 Mac:52:54:00:e0:15:60 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-244475-m03 Clientid:01:52:54:00:e0:15:60}
	I0916 10:40:36.050626   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:36.050708   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHKeyPath
	I0916 10:40:36.050785   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHPort
	I0916 10:40:36.050860   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHUsername
	I0916 10:40:36.050941   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHKeyPath
	I0916 10:40:36.051014   22121 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m03/id_rsa Username:docker}
	I0916 10:40:36.051036   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHUsername
	I0916 10:40:36.051131   22121 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m03/id_rsa Username:docker}
	I0916 10:40:36.283731   22121 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0916 10:40:36.291646   22121 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 10:40:36.291714   22121 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:40:36.309353   22121 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0916 10:40:36.309377   22121 start.go:495] detecting cgroup driver to use...
	I0916 10:40:36.309434   22121 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 10:40:36.327071   22121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 10:40:36.341542   22121 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:40:36.341601   22121 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:40:36.355583   22121 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:40:36.369888   22121 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:40:36.493273   22121 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:40:36.643904   22121 docker.go:233] disabling docker service ...
	I0916 10:40:36.643965   22121 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:40:36.658738   22121 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:40:36.672641   22121 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:40:36.816431   22121 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:40:36.933082   22121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:40:36.949104   22121 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:40:36.970988   22121 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 10:40:36.971047   22121 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:40:36.982120   22121 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 10:40:36.982182   22121 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:40:36.993929   22121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:40:37.005695   22121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:40:37.018804   22121 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:40:37.031297   22121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:40:37.042548   22121 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:40:37.060622   22121 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
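Annotation: the run of sed commands above points CRI-O at the registry.k8s.io/pause:3.10 pause image, switches it to the cgroupfs cgroup manager, sets conmon_cgroup to "pod", and adds net.ipv4.ip_unprivileged_port_start=0 to default_sysctls so containers can bind low ports. Minikube performs these edits remotely with sed over SSH; below is a local Go approximation of the same in-place rewrite, with a made-up helper name and run as an ordinary process rather than via sudo.

    package main

    import (
        "os"
        "regexp"
    )

    // setConfigLine replaces every line matching pattern with repl, mirroring
    // the log's `sudo sed -i 's|^.*cgroup_manager = .*$|...|'` edits.
    func setConfigLine(path, pattern, repl string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        re := regexp.MustCompile(pattern)
        return os.WriteFile(path, re.ReplaceAll(data, []byte(repl)), 0o644)
    }

    func main() {
        const conf = "/etc/crio/crio.conf.d/02-crio.conf"
        if err := setConfigLine(conf, `(?m)^.*cgroup_manager = .*$`, `cgroup_manager = "cgroupfs"`); err != nil {
            panic(err)
        }
        if err := setConfigLine(conf, `(?m)^.*pause_image = .*$`, `pause_image = "registry.k8s.io/pause:3.10"`); err != nil {
            panic(err)
        }
    }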
	I0916 10:40:37.071900   22121 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:40:37.082293   22121 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0916 10:40:37.082349   22121 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0916 10:40:37.096317   22121 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:40:37.107422   22121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:40:37.228410   22121 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 10:40:37.320979   22121 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 10:40:37.321071   22121 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 10:40:37.326439   22121 start.go:563] Will wait 60s for crictl version
	I0916 10:40:37.326501   22121 ssh_runner.go:195] Run: which crictl
	I0916 10:40:37.330626   22121 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:40:37.369842   22121 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0916 10:40:37.369916   22121 ssh_runner.go:195] Run: crio --version
	I0916 10:40:37.402403   22121 ssh_runner.go:195] Run: crio --version
	I0916 10:40:37.437976   22121 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0916 10:40:37.439411   22121 out.go:177]   - env NO_PROXY=192.168.39.19
	I0916 10:40:37.440926   22121 out.go:177]   - env NO_PROXY=192.168.39.19,192.168.39.222
	I0916 10:40:37.442203   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetIP
	I0916 10:40:37.444743   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:37.445187   22121 main.go:141] libmachine: (ha-244475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:15:60", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:40:24 +0000 UTC Type:0 Mac:52:54:00:e0:15:60 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-244475-m03 Clientid:01:52:54:00:e0:15:60}
	I0916 10:40:37.445214   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:37.445428   22121 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0916 10:40:37.449788   22121 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:40:37.464525   22121 mustload.go:65] Loading cluster: ha-244475
	I0916 10:40:37.464778   22121 config.go:182] Loaded profile config "ha-244475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:40:37.465171   22121 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:40:37.465220   22121 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:40:37.480904   22121 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39615
	I0916 10:40:37.481370   22121 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:40:37.481925   22121 main.go:141] libmachine: Using API Version  1
	I0916 10:40:37.481949   22121 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:40:37.482292   22121 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:40:37.482464   22121 main.go:141] libmachine: (ha-244475) Calling .GetState
	I0916 10:40:37.484020   22121 host.go:66] Checking if "ha-244475" exists ...
	I0916 10:40:37.484287   22121 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:40:37.484324   22121 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:40:37.498953   22121 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44919
	I0916 10:40:37.499388   22121 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:40:37.499929   22121 main.go:141] libmachine: Using API Version  1
	I0916 10:40:37.499955   22121 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:40:37.500321   22121 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:40:37.500505   22121 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:40:37.500708   22121 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475 for IP: 192.168.39.127
	I0916 10:40:37.500720   22121 certs.go:194] generating shared ca certs ...
	I0916 10:40:37.500740   22121 certs.go:226] acquiring lock for ca certs: {Name:mkc5eb18b90c7d501da61b378999fd65b785fd5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:40:37.500875   22121 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key
	I0916 10:40:37.500929   22121 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key
	I0916 10:40:37.500943   22121 certs.go:256] generating profile certs ...
	I0916 10:40:37.501030   22121 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/client.key
	I0916 10:40:37.501062   22121 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key.ff67242b
	I0916 10:40:37.501082   22121 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt.ff67242b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.19 192.168.39.222 192.168.39.127 192.168.39.254]
	I0916 10:40:37.647069   22121 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt.ff67242b ...
	I0916 10:40:37.647103   22121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt.ff67242b: {Name:mkbb6bf2be5e587ad1e2fe147b3983eed0461a42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:40:37.647322   22121 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key.ff67242b ...
	I0916 10:40:37.647347   22121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key.ff67242b: {Name:mk98dd7442f0dc4e7003471cb55a0345916f7a4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:40:37.647450   22121 certs.go:381] copying /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt.ff67242b -> /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt
	I0916 10:40:37.647652   22121 certs.go:385] copying /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key.ff67242b -> /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key
	I0916 10:40:37.647850   22121 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.key
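Annotation: the certs.go lines above mint a fresh apiserver serving certificate whose IP SANs cover the in-cluster service IPs (10.96.0.1, 10.0.0.1), localhost, every control-plane node (192.168.39.19, .222, .127), and the kube-vip address 192.168.39.254, so the same cert validates regardless of which endpoint a client dials. The sketch below builds a certificate with that SAN list using crypto/x509; it is self-signed for brevity, whereas minikube signs with the minikubeCA key, and the key size and exact validity are assumptions.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            // 26280h (~3 years), the CertExpiration shown in the profile config below.
            NotAfter:    time.Now().Add(26280 * time.Hour),
            KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // IP SANs copied from the crypto.go log line above.
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
                net.ParseIP("192.168.39.19"), net.ParseIP("192.168.39.222"),
                net.ParseIP("192.168.39.127"), net.ParseIP("192.168.39.254"),
            },
        }
        // Self-signed here; minikube instead signs with the shared cluster CA.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }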
	I0916 10:40:37.647872   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 10:40:37.647891   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 10:40:37.647911   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:40:37.647929   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:40:37.647946   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 10:40:37.647963   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 10:40:37.647981   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 10:40:37.647998   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 10:40:37.648062   22121 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem (1338 bytes)
	W0916 10:40:37.648100   22121 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203_empty.pem, impossibly tiny 0 bytes
	I0916 10:40:37.648112   22121 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:40:37.648144   22121 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem (1082 bytes)
	I0916 10:40:37.648175   22121 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:40:37.648204   22121 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem (1679 bytes)
	I0916 10:40:37.648262   22121 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem (1708 bytes)
	I0916 10:40:37.648302   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem -> /usr/share/ca-certificates/112032.pem
	I0916 10:40:37.648320   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:40:37.648380   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem -> /usr/share/ca-certificates/11203.pem
	I0916 10:40:37.648422   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:40:37.651389   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:40:37.651840   22121 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:40:37.651860   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:40:37.652040   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:40:37.652216   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:40:37.652315   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:40:37.652394   22121 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/id_rsa Username:docker}
	I0916 10:40:37.729506   22121 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0916 10:40:37.734982   22121 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0916 10:40:37.746820   22121 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0916 10:40:37.751379   22121 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0916 10:40:37.763059   22121 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0916 10:40:37.767743   22121 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0916 10:40:37.780679   22121 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0916 10:40:37.785070   22121 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0916 10:40:37.796662   22121 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0916 10:40:37.801157   22121 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0916 10:40:37.812496   22121 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0916 10:40:37.817564   22121 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0916 10:40:37.829016   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:40:37.857371   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:40:37.883089   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:40:37.908995   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:40:37.935029   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0916 10:40:37.960446   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 10:40:37.986136   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:40:38.012431   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 10:40:38.047057   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem --> /usr/share/ca-certificates/112032.pem (1708 bytes)
	I0916 10:40:38.075002   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:40:38.101902   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem --> /usr/share/ca-certificates/11203.pem (1338 bytes)
	I0916 10:40:38.129296   22121 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0916 10:40:38.148327   22121 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0916 10:40:38.165421   22121 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0916 10:40:38.182509   22121 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0916 10:40:38.200200   22121 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0916 10:40:38.216843   22121 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0916 10:40:38.233538   22121 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0916 10:40:38.250144   22121 ssh_runner.go:195] Run: openssl version
	I0916 10:40:38.256117   22121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112032.pem && ln -fs /usr/share/ca-certificates/112032.pem /etc/ssl/certs/112032.pem"
	I0916 10:40:38.267112   22121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112032.pem
	I0916 10:40:38.271742   22121 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112032.pem
	I0916 10:40:38.271789   22121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112032.pem
	I0916 10:40:38.277670   22121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112032.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 10:40:38.288768   22121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:40:38.299987   22121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:40:38.304531   22121 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:40:38.304588   22121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:40:38.310343   22121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 10:40:38.321868   22121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11203.pem && ln -fs /usr/share/ca-certificates/11203.pem /etc/ssl/certs/11203.pem"
	I0916 10:40:38.333013   22121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11203.pem
	I0916 10:40:38.337929   22121 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11203.pem
	I0916 10:40:38.337983   22121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11203.pem
	I0916 10:40:38.343812   22121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11203.pem /etc/ssl/certs/51391683.0"
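Annotation: the `openssl x509 -hash` / `ln -fs` pairs above install each CA bundle under /etc/ssl/certs and add the subject-hash symlink (3ec20f2e.0 for 112032.pem, b5213941.0 for minikubeCA.pem, 51391683.0 for 11203.pem) that OpenSSL-based clients use for CA lookup. Below is a local sketch of the same linking step; minikube runs it remotely via sudo over SSH, and the helper name and call site are assumptions for illustration.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkBySubjectHash asks openssl for the certificate's subject hash and
    // creates <linkDir>/<hash>.0 pointing at the cert, like the log's `ln -fs`.
    func linkBySubjectHash(certPath, linkDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join(linkDir, hash+".0")
        _ = os.Remove(link) // -f behaviour: replace an existing link
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := linkBySubjectHash("/etc/ssl/certs/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }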
	I0916 10:40:38.354695   22121 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:40:38.358776   22121 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 10:40:38.358821   22121 kubeadm.go:934] updating node {m03 192.168.39.127 8443 v1.31.1 crio true true} ...
	I0916 10:40:38.358893   22121 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-244475-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.127
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-244475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 10:40:38.358916   22121 kube-vip.go:115] generating kube-vip config ...
	I0916 10:40:38.358947   22121 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0916 10:40:38.376976   22121 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0916 10:40:38.377036   22121 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0916 10:40:38.377091   22121 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:40:38.386658   22121 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0916 10:40:38.386709   22121 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0916 10:40:38.397169   22121 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0916 10:40:38.397180   22121 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0916 10:40:38.397205   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0916 10:40:38.397221   22121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:40:38.397225   22121 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0916 10:40:38.397245   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0916 10:40:38.397272   22121 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0916 10:40:38.397322   22121 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0916 10:40:38.414712   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0916 10:40:38.414816   22121 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0916 10:40:38.414828   22121 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0916 10:40:38.414843   22121 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0916 10:40:38.414851   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0916 10:40:38.414867   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0916 10:40:38.425835   22121 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0916 10:40:38.425882   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
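Annotation: the binary.go lines above fetch kubelet, kubectl, and kubeadm from https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/ with a ?checksum=file:...sha256 query, i.e. each download is verified against the published .sha256 digest before the binaries are copied into /var/lib/minikube/binaries/v1.31.1 on the new node. The sketch below shows a stand-alone download-and-verify step; the helper name and the /tmp output path are assumptions, and error handling is minimal.

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "net/http"
        "os"
        "strings"
    )

    // fetchChecked downloads url into dest and verifies it against the hex
    // digest published at url+".sha256".
    func fetchChecked(url, dest string) error {
        sumResp, err := http.Get(url + ".sha256")
        if err != nil {
            return err
        }
        defer sumResp.Body.Close()
        sumBytes, err := io.ReadAll(sumResp.Body)
        if err != nil {
            return err
        }
        fields := strings.Fields(string(sumBytes))
        if len(fields) == 0 {
            return fmt.Errorf("empty checksum file for %s", url)
        }
        want := fields[0] // digest is the first whitespace-separated field

        resp, err := http.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()

        out, err := os.Create(dest)
        if err != nil {
            return err
        }
        defer out.Close()

        h := sha256.New()
        if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
            return err
        }
        if got := hex.EncodeToString(h.Sum(nil)); got != want {
            return fmt.Errorf("checksum mismatch for %s: got %s want %s", url, got, want)
        }
        return nil
    }

    func main() {
        fmt.Println(fetchChecked("https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl", "/tmp/kubectl"))
    }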
	I0916 10:40:39.292544   22121 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0916 10:40:39.302520   22121 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0916 10:40:39.321739   22121 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:40:39.339714   22121 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0916 10:40:39.356647   22121 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0916 10:40:39.360860   22121 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:40:39.373051   22121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:40:39.503177   22121 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:40:39.521517   22121 host.go:66] Checking if "ha-244475" exists ...
	I0916 10:40:39.521933   22121 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:40:39.521999   22121 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:40:39.539241   22121 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43423
	I0916 10:40:39.539779   22121 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:40:39.540277   22121 main.go:141] libmachine: Using API Version  1
	I0916 10:40:39.540296   22121 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:40:39.540592   22121 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:40:39.540793   22121 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:40:39.540980   22121 start.go:317] joinCluster: &{Name:ha-244475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-244475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:40:39.541103   22121 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0916 10:40:39.541140   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:40:39.544084   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:40:39.544467   22121 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:40:39.544489   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:40:39.544609   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:40:39.544797   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:40:39.544947   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:40:39.545069   22121 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/id_rsa Username:docker}
	I0916 10:40:39.712936   22121 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 10:40:39.712986   22121 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 4c794a.yzkn6fbxc862odl2 --discovery-token-ca-cert-hash sha256:18e2e9ba1c06caae6264fad234e64aee68efc10986a3ab0ed9e448768962daf7 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-244475-m03 --control-plane --apiserver-advertise-address=192.168.39.127 --apiserver-bind-port=8443"
	I0916 10:41:02.405074   22121 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 4c794a.yzkn6fbxc862odl2 --discovery-token-ca-cert-hash sha256:18e2e9ba1c06caae6264fad234e64aee68efc10986a3ab0ed9e448768962daf7 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-244475-m03 --control-plane --apiserver-advertise-address=192.168.39.127 --apiserver-bind-port=8443": (22.692059229s)
	I0916 10:41:02.405117   22121 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0916 10:41:02.989273   22121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-244475-m03 minikube.k8s.io/updated_at=2024_09_16T10_41_02_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=ha-244475 minikube.k8s.io/primary=false
	I0916 10:41:03.155780   22121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-244475-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0916 10:41:03.294611   22121 start.go:319] duration metric: took 23.75362709s to joinCluster
	I0916 10:41:03.294689   22121 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 10:41:03.295014   22121 config.go:182] Loaded profile config "ha-244475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:41:03.296058   22121 out.go:177] * Verifying Kubernetes components...
	I0916 10:41:03.297444   22121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:41:03.509480   22121 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:41:03.527697   22121 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3851/kubeconfig
	I0916 10:41:03.527973   22121 kapi.go:59] client config for ha-244475: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0916 10:41:03.528069   22121 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.19:8443
	I0916 10:41:03.528297   22121 node_ready.go:35] waiting up to 6m0s for node "ha-244475-m03" to be "Ready" ...
	I0916 10:41:03.528381   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:03.528392   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:03.528403   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:03.528409   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:03.535009   22121 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0916 10:41:04.028547   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:04.028568   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:04.028577   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:04.028590   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:04.032000   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:04.528593   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:04.528621   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:04.528632   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:04.528639   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:04.531853   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:05.028474   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:05.028495   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:05.028507   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:05.028510   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:05.031970   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:05.529004   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:05.529030   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:05.529040   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:05.529046   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:05.534346   22121 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0916 10:41:05.535149   22121 node_ready.go:53] node "ha-244475-m03" has status "Ready":"False"
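Annotation: from here the log is dominated by round_trippers output. After overriding the stale VIP host https://192.168.39.254:8443 with the direct API server address https://192.168.39.19:8443 (see the kubeadm.go warning above), minikube polls GET /api/v1/nodes/ha-244475-m03 roughly every 500ms for up to 6m0s, waiting for the node's Ready condition to turn True. The sketch below is an equivalent wait loop written with client-go; it assumes the kubeconfig path shown in the loader.go line above and is not minikube's own node_ready.go code.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19651-3851/kubeconfig")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // Poll every 500ms for up to 6 minutes, matching the cadence in the log.
        err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                node, err := client.CoreV1().Nodes().Get(ctx, "ha-244475-m03", metav1.GetOptions{})
                if err != nil {
                    return false, nil // treat errors as transient and keep polling
                }
                for _, cond := range node.Status.Conditions {
                    if cond.Type == corev1.NodeReady {
                        return cond.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
        fmt.Println("node ready:", err == nil)
    }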
	I0916 10:41:06.028524   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:06.028552   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:06.028563   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:06.028568   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:06.031926   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:06.529358   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:06.529383   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:06.529396   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:06.529402   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:06.535725   22121 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0916 10:41:07.028522   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:07.028543   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:07.028551   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:07.028557   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:07.032906   22121 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:41:07.529385   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:07.529413   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:07.529425   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:07.529431   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:07.535794   22121 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0916 10:41:07.536408   22121 node_ready.go:53] node "ha-244475-m03" has status "Ready":"False"
	I0916 10:41:08.029514   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:08.029549   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:08.029561   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:08.029567   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:08.032852   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:08.528497   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:08.528520   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:08.528529   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:08.528535   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:08.532921   22121 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:41:09.028942   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:09.028962   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:09.028969   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:09.028972   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:09.032474   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:09.528551   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:09.528576   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:09.528586   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:09.528591   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:09.532995   22121 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:41:10.028544   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:10.028577   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:10.028584   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:10.028588   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:10.032079   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:10.032575   22121 node_ready.go:53] node "ha-244475-m03" has status "Ready":"False"
	I0916 10:41:10.528902   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:10.528926   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:10.528934   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:10.528938   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:10.535638   22121 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0916 10:41:11.028651   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:11.028672   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:11.028679   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:11.028682   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:11.032105   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:11.529486   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:11.529515   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:11.529526   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:11.529531   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:11.535563   22121 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0916 10:41:12.029412   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:12.029432   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:12.029440   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:12.029444   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:12.033149   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:12.033738   22121 node_ready.go:53] node "ha-244475-m03" has status "Ready":"False"
	I0916 10:41:12.528711   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:12.528733   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:12.528742   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:12.528746   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:12.534586   22121 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0916 10:41:13.029512   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:13.029536   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:13.029547   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:13.029553   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:13.033681   22121 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:41:13.529522   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:13.529548   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:13.529559   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:13.529566   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:13.533930   22121 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:41:14.029172   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:14.029194   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:14.029202   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:14.029206   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:14.032272   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:14.529072   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:14.529094   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:14.529102   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:14.529107   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:14.535318   22121 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0916 10:41:14.535890   22121 node_ready.go:53] node "ha-244475-m03" has status "Ready":"False"
	I0916 10:41:15.029077   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:15.029101   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:15.029113   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:15.029122   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:15.032652   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:15.528843   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:15.528869   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:15.528876   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:15.528883   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:15.533117   22121 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:41:16.028968   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:16.028990   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:16.028998   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:16.029002   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:16.032289   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:16.528776   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:16.528800   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:16.528812   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:16.528820   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:16.532317   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:17.029247   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:17.029273   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:17.029283   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:17.029289   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:17.032437   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:17.032978   22121 node_ready.go:53] node "ha-244475-m03" has status "Ready":"False"
	I0916 10:41:17.528914   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:17.528940   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:17.528951   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:17.528957   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:17.535109   22121 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0916 10:41:18.028865   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:18.028886   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:18.028894   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:18.028897   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:18.032181   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:18.529133   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:18.529160   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:18.529172   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:18.529177   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:18.532540   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:19.028551   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:19.028571   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:19.028579   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:19.028584   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:19.031968   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:19.529456   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:19.529479   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:19.529487   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:19.529492   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:19.535044   22121 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0916 10:41:19.535889   22121 node_ready.go:53] node "ha-244475-m03" has status "Ready":"False"
	I0916 10:41:20.029083   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:20.029103   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:20.029111   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:20.029114   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:20.032351   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:20.529324   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:20.529353   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:20.529370   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:20.529376   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:20.532351   22121 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:41:20.532942   22121 node_ready.go:49] node "ha-244475-m03" has status "Ready":"True"
	I0916 10:41:20.532967   22121 node_ready.go:38] duration metric: took 17.004653976s for node "ha-244475-m03" to be "Ready" ...
	I0916 10:41:20.532978   22121 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:41:20.533057   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I0916 10:41:20.533074   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:20.533084   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:20.533092   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:20.541611   22121 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0916 10:41:20.549215   22121 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-lzrg2" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:20.549300   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-lzrg2
	I0916 10:41:20.549309   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:20.549316   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:20.549321   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:20.551990   22121 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:41:20.552792   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475
	I0916 10:41:20.552807   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:20.552814   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:20.552819   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:20.555246   22121 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:41:20.556034   22121 pod_ready.go:93] pod "coredns-7c65d6cfc9-lzrg2" in "kube-system" namespace has status "Ready":"True"
	I0916 10:41:20.556051   22121 pod_ready.go:82] duration metric: took 6.810223ms for pod "coredns-7c65d6cfc9-lzrg2" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:20.556059   22121 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-m8fd7" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:20.556109   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-m8fd7
	I0916 10:41:20.556118   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:20.556124   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:20.556129   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:20.558530   22121 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:41:20.559188   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475
	I0916 10:41:20.559202   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:20.559209   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:20.559212   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:20.561354   22121 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:41:20.561890   22121 pod_ready.go:93] pod "coredns-7c65d6cfc9-m8fd7" in "kube-system" namespace has status "Ready":"True"
	I0916 10:41:20.561910   22121 pod_ready.go:82] duration metric: took 5.84501ms for pod "coredns-7c65d6cfc9-m8fd7" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:20.561921   22121 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-244475" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:20.561982   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/etcd-ha-244475
	I0916 10:41:20.561993   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:20.561999   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:20.562003   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:20.564349   22121 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:41:20.565030   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475
	I0916 10:41:20.565042   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:20.565047   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:20.565051   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:20.567656   22121 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:41:20.568101   22121 pod_ready.go:93] pod "etcd-ha-244475" in "kube-system" namespace has status "Ready":"True"
	I0916 10:41:20.568115   22121 pod_ready.go:82] duration metric: took 6.18818ms for pod "etcd-ha-244475" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:20.568126   22121 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-244475-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:20.568174   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/etcd-ha-244475-m02
	I0916 10:41:20.568183   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:20.568191   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:20.568196   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:20.571051   22121 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:41:20.572108   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:41:20.572122   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:20.572131   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:20.572136   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:20.574514   22121 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:41:20.574938   22121 pod_ready.go:93] pod "etcd-ha-244475-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:41:20.574958   22121 pod_ready.go:82] duration metric: took 6.825238ms for pod "etcd-ha-244475-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:20.574968   22121 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-244475-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:20.730339   22121 request.go:632] Waited for 155.28324ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/etcd-ha-244475-m03
	I0916 10:41:20.730409   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/etcd-ha-244475-m03
	I0916 10:41:20.730416   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:20.730426   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:20.730434   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:20.733792   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:20.929868   22121 request.go:632] Waited for 195.353662ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:20.929934   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:20.929941   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:20.929951   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:20.929956   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:20.933157   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:20.933861   22121 pod_ready.go:93] pod "etcd-ha-244475-m03" in "kube-system" namespace has status "Ready":"True"
	I0916 10:41:20.933879   22121 pod_ready.go:82] duration metric: took 358.903224ms for pod "etcd-ha-244475-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:20.933899   22121 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-244475" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:21.130218   22121 request.go:632] Waited for 196.250965ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-244475
	I0916 10:41:21.130279   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-244475
	I0916 10:41:21.130287   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:21.130297   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:21.130307   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:21.133197   22121 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:41:21.330203   22121 request.go:632] Waited for 196.304187ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-244475
	I0916 10:41:21.330250   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475
	I0916 10:41:21.330254   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:21.330262   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:21.330265   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:21.333309   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:21.333928   22121 pod_ready.go:93] pod "kube-apiserver-ha-244475" in "kube-system" namespace has status "Ready":"True"
	I0916 10:41:21.333946   22121 pod_ready.go:82] duration metric: took 400.041237ms for pod "kube-apiserver-ha-244475" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:21.333957   22121 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-244475-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:21.530002   22121 request.go:632] Waited for 195.934393ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-244475-m02
	I0916 10:41:21.530071   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-244475-m02
	I0916 10:41:21.530079   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:21.530089   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:21.530097   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:21.540600   22121 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0916 10:41:21.729634   22121 request.go:632] Waited for 188.35156ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:41:21.729700   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:41:21.729712   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:21.729727   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:21.729736   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:21.733214   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:21.733789   22121 pod_ready.go:93] pod "kube-apiserver-ha-244475-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:41:21.733804   22121 pod_ready.go:82] duration metric: took 399.837781ms for pod "kube-apiserver-ha-244475-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:21.733813   22121 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-244475-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:21.930001   22121 request.go:632] Waited for 196.125954ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-244475-m03
	I0916 10:41:21.930071   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-244475-m03
	I0916 10:41:21.930080   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:21.930088   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:21.930093   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:21.933477   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:22.129642   22121 request.go:632] Waited for 195.348961ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:22.129729   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:22.129740   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:22.129750   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:22.129758   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:22.137037   22121 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0916 10:41:22.137643   22121 pod_ready.go:93] pod "kube-apiserver-ha-244475-m03" in "kube-system" namespace has status "Ready":"True"
	I0916 10:41:22.137664   22121 pod_ready.go:82] duration metric: took 403.843897ms for pod "kube-apiserver-ha-244475-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:22.137678   22121 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-244475" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:22.329532   22121 request.go:632] Waited for 191.776666ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-244475
	I0916 10:41:22.329621   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-244475
	I0916 10:41:22.329633   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:22.329640   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:22.329645   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:22.333345   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:22.530006   22121 request.go:632] Waited for 195.956457ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-244475
	I0916 10:41:22.530079   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475
	I0916 10:41:22.530085   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:22.530093   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:22.530101   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:22.533113   22121 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:41:22.533700   22121 pod_ready.go:93] pod "kube-controller-manager-ha-244475" in "kube-system" namespace has status "Ready":"True"
	I0916 10:41:22.533718   22121 pod_ready.go:82] duration metric: took 396.032752ms for pod "kube-controller-manager-ha-244475" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:22.533728   22121 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-244475-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:22.729791   22121 request.go:632] Waited for 195.998005ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-244475-m02
	I0916 10:41:22.729857   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-244475-m02
	I0916 10:41:22.729864   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:22.729874   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:22.729910   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:22.734399   22121 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:41:22.929502   22121 request.go:632] Waited for 194.264694ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:41:22.929574   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:41:22.929582   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:22.929591   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:22.929595   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:22.932871   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:22.934055   22121 pod_ready.go:93] pod "kube-controller-manager-ha-244475-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:41:22.934073   22121 pod_ready.go:82] duration metric: took 400.337784ms for pod "kube-controller-manager-ha-244475-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:22.934082   22121 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-244475-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:23.130261   22121 request.go:632] Waited for 196.120217ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-244475-m03
	I0916 10:41:23.130357   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-244475-m03
	I0916 10:41:23.130367   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:23.130375   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:23.130380   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:23.134472   22121 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:41:23.329661   22121 request.go:632] Waited for 194.357343ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:23.329723   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:23.329733   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:23.329747   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:23.329754   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:23.333236   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:23.333984   22121 pod_ready.go:93] pod "kube-controller-manager-ha-244475-m03" in "kube-system" namespace has status "Ready":"True"
	I0916 10:41:23.334009   22121 pod_ready.go:82] duration metric: took 399.919835ms for pod "kube-controller-manager-ha-244475-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:23.334026   22121 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-crttt" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:23.530101   22121 request.go:632] Waited for 195.996765ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-crttt
	I0916 10:41:23.530191   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-crttt
	I0916 10:41:23.530198   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:23.530208   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:23.530219   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:23.535501   22121 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0916 10:41:23.729541   22121 request.go:632] Waited for 193.385559ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-244475
	I0916 10:41:23.729601   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475
	I0916 10:41:23.729606   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:23.729613   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:23.729627   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:23.733179   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:23.733969   22121 pod_ready.go:93] pod "kube-proxy-crttt" in "kube-system" namespace has status "Ready":"True"
	I0916 10:41:23.733986   22121 pod_ready.go:82] duration metric: took 399.951283ms for pod "kube-proxy-crttt" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:23.733995   22121 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-g5v5l" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:23.929754   22121 request.go:632] Waited for 195.67228ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g5v5l
	I0916 10:41:23.929814   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g5v5l
	I0916 10:41:23.929819   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:23.929826   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:23.929831   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:23.933527   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:24.129706   22121 request.go:632] Waited for 195.381059ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:24.129770   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:24.129776   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:24.129786   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:24.129794   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:24.133530   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:24.134153   22121 pod_ready.go:93] pod "kube-proxy-g5v5l" in "kube-system" namespace has status "Ready":"True"
	I0916 10:41:24.134171   22121 pod_ready.go:82] duration metric: took 400.17004ms for pod "kube-proxy-g5v5l" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:24.134180   22121 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-t454b" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:24.330300   22121 request.go:632] Waited for 196.037638ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-t454b
	I0916 10:41:24.330367   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-t454b
	I0916 10:41:24.330373   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:24.330384   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:24.330391   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:24.334038   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:24.530069   22121 request.go:632] Waited for 195.337849ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:41:24.530145   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:41:24.530153   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:24.530160   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:24.530165   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:24.536414   22121 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0916 10:41:24.536846   22121 pod_ready.go:93] pod "kube-proxy-t454b" in "kube-system" namespace has status "Ready":"True"
	I0916 10:41:24.536864   22121 pod_ready.go:82] duration metric: took 402.676992ms for pod "kube-proxy-t454b" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:24.536876   22121 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-244475" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:24.730273   22121 request.go:632] Waited for 193.335182ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-244475
	I0916 10:41:24.730344   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-244475
	I0916 10:41:24.730349   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:24.730357   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:24.730365   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:24.733832   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:24.930161   22121 request.go:632] Waited for 195.330427ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-244475
	I0916 10:41:24.930225   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475
	I0916 10:41:24.930241   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:24.930250   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:24.930259   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:24.933553   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:24.934318   22121 pod_ready.go:93] pod "kube-scheduler-ha-244475" in "kube-system" namespace has status "Ready":"True"
	I0916 10:41:24.934335   22121 pod_ready.go:82] duration metric: took 397.451613ms for pod "kube-scheduler-ha-244475" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:24.934344   22121 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-244475-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:25.129510   22121 request.go:632] Waited for 195.10302ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-244475-m02
	I0916 10:41:25.129579   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-244475-m02
	I0916 10:41:25.129587   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:25.129595   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:25.129600   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:25.133734   22121 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:41:25.329835   22121 request.go:632] Waited for 195.396951ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:41:25.329904   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:41:25.329912   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:25.329922   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:25.329928   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:25.333482   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:25.334323   22121 pod_ready.go:93] pod "kube-scheduler-ha-244475-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:41:25.334342   22121 pod_ready.go:82] duration metric: took 399.990647ms for pod "kube-scheduler-ha-244475-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:25.334355   22121 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-244475-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:25.529377   22121 request.go:632] Waited for 194.946933ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-244475-m03
	I0916 10:41:25.529470   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-244475-m03
	I0916 10:41:25.529482   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:25.529493   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:25.529501   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:25.534845   22121 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0916 10:41:25.729925   22121 request.go:632] Waited for 194.359506ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:25.729987   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:25.729993   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:25.730000   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:25.730005   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:25.733288   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:25.734036   22121 pod_ready.go:93] pod "kube-scheduler-ha-244475-m03" in "kube-system" namespace has status "Ready":"True"
	I0916 10:41:25.734056   22121 pod_ready.go:82] duration metric: took 399.693479ms for pod "kube-scheduler-ha-244475-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:25.734069   22121 pod_ready.go:39] duration metric: took 5.201079342s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:41:25.734086   22121 api_server.go:52] waiting for apiserver process to appear ...
	I0916 10:41:25.734140   22121 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:41:25.749396   22121 api_server.go:72] duration metric: took 22.454672004s to wait for apiserver process to appear ...
	I0916 10:41:25.749425   22121 api_server.go:88] waiting for apiserver healthz status ...
	I0916 10:41:25.749447   22121 api_server.go:253] Checking apiserver healthz at https://192.168.39.19:8443/healthz ...
	I0916 10:41:25.753676   22121 api_server.go:279] https://192.168.39.19:8443/healthz returned 200:
	ok
	I0916 10:41:25.753738   22121 round_trippers.go:463] GET https://192.168.39.19:8443/version
	I0916 10:41:25.753749   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:25.753760   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:25.753769   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:25.755474   22121 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:41:25.755537   22121 api_server.go:141] control plane version: v1.31.1
	I0916 10:41:25.755552   22121 api_server.go:131] duration metric: took 6.119804ms to wait for apiserver health ...
	I0916 10:41:25.755561   22121 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 10:41:25.929957   22121 request.go:632] Waited for 174.326859ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I0916 10:41:25.930008   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I0916 10:41:25.930013   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:25.930020   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:25.930029   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:25.936785   22121 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0916 10:41:25.943643   22121 system_pods.go:59] 24 kube-system pods found
	I0916 10:41:25.943669   22121 system_pods.go:61] "coredns-7c65d6cfc9-lzrg2" [51962d07-f38a-4db3-86ee-af3d954dbec6] Running
	I0916 10:41:25.943674   22121 system_pods.go:61] "coredns-7c65d6cfc9-m8fd7" [fc549709-ddc0-4684-b377-46d33ef8f03d] Running
	I0916 10:41:25.943678   22121 system_pods.go:61] "etcd-ha-244475" [08595572-facf-419a-93e3-9b0ea1938f08] Running
	I0916 10:41:25.943682   22121 system_pods.go:61] "etcd-ha-244475-m02" [d58c0d1e-ef12-4e50-b4d8-86f60754b93d] Running
	I0916 10:41:25.943685   22121 system_pods.go:61] "etcd-ha-244475-m03" [e741d8c7-f12c-4fa1-b3cc-582043ca312d] Running
	I0916 10:41:25.943688   22121 system_pods.go:61] "kindnet-7v2cl" [764ade4d-cbcd-42b8-9d68-b4ed502de9eb] Running
	I0916 10:41:25.943691   22121 system_pods.go:61] "kindnet-rzwwj" [ffe109a7-d477-4b8a-ab62-4e4ceec1b4ed] Running
	I0916 10:41:25.943695   22121 system_pods.go:61] "kindnet-xvp82" [3140a3e7-ac3b-4882-b150-20a313e2f20c] Running
	I0916 10:41:25.943698   22121 system_pods.go:61] "kube-apiserver-ha-244475" [b0ea2226-42de-4488-b8fb-73a6828320fc] Running
	I0916 10:41:25.943701   22121 system_pods.go:61] "kube-apiserver-ha-244475-m02" [1e384f04-33c2-49f1-afc0-48807202a04c] Running
	I0916 10:41:25.943704   22121 system_pods.go:61] "kube-apiserver-ha-244475-m03" [469c5743-509f-4c1c-b46e-fa3e6e79a673] Running
	I0916 10:41:25.943707   22121 system_pods.go:61] "kube-controller-manager-ha-244475" [98883403-0a22-486c-aa3a-a3720a5cbfb7] Running
	I0916 10:41:25.943710   22121 system_pods.go:61] "kube-controller-manager-ha-244475-m02" [9e148533-4562-426b-9e8b-3aead772739b] Running
	I0916 10:41:25.943713   22121 system_pods.go:61] "kube-controller-manager-ha-244475-m03" [1054e7df-9598-41de-a412-f18d3ffff1cb] Running
	I0916 10:41:25.943716   22121 system_pods.go:61] "kube-proxy-crttt" [0c8cad04-2c64-42f9-85e2-5e4fbfe7961d] Running
	I0916 10:41:25.943719   22121 system_pods.go:61] "kube-proxy-g5v5l" [102f8d6f-4cb4-4c59-ae99-acccabb9fb8e] Running
	I0916 10:41:25.943723   22121 system_pods.go:61] "kube-proxy-t454b" [49b7dda6-9a09-4b7d-8adc-568f2fa10ad6] Running
	I0916 10:41:25.943726   22121 system_pods.go:61] "kube-scheduler-ha-244475" [c9527c08-f10b-4d85-9f72-0d0893297b14] Running
	I0916 10:41:25.943729   22121 system_pods.go:61] "kube-scheduler-ha-244475-m02" [bf332de1-6793-4485-9d93-38368d86c6a5] Running
	I0916 10:41:25.943731   22121 system_pods.go:61] "kube-scheduler-ha-244475-m03" [90b5bffb-165c-4620-b90a-e9f1d3f4c323] Running
	I0916 10:41:25.943734   22121 system_pods.go:61] "kube-vip-ha-244475" [94b4d383-a0e8-4686-b108-923c0235f371] Running
	I0916 10:41:25.943737   22121 system_pods.go:61] "kube-vip-ha-244475-m02" [6f0a6023-be76-458b-9344-ff51083a217e] Running
	I0916 10:41:25.943740   22121 system_pods.go:61] "kube-vip-ha-244475-m03" [b507cf83-f056-4ab3-b276-4f477ee77747] Running
	I0916 10:41:25.943743   22121 system_pods.go:61] "storage-provisioner" [2e1264f7-2197-4821-8238-82fac849b145] Running
	I0916 10:41:25.943748   22121 system_pods.go:74] duration metric: took 188.180661ms to wait for pod list to return data ...
	I0916 10:41:25.943758   22121 default_sa.go:34] waiting for default service account to be created ...
	I0916 10:41:26.130184   22121 request.go:632] Waited for 186.361022ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/default/serviceaccounts
	I0916 10:41:26.130240   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/default/serviceaccounts
	I0916 10:41:26.130247   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:26.130256   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:26.130263   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:26.136218   22121 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0916 10:41:26.136355   22121 default_sa.go:45] found service account: "default"
	I0916 10:41:26.136373   22121 default_sa.go:55] duration metric: took 192.608031ms for default service account to be created ...
	I0916 10:41:26.136384   22121 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 10:41:26.329960   22121 request.go:632] Waited for 193.503475ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I0916 10:41:26.330035   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I0916 10:41:26.330046   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:26.330056   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:26.330062   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:26.336265   22121 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0916 10:41:26.343431   22121 system_pods.go:86] 24 kube-system pods found
	I0916 10:41:26.343459   22121 system_pods.go:89] "coredns-7c65d6cfc9-lzrg2" [51962d07-f38a-4db3-86ee-af3d954dbec6] Running
	I0916 10:41:26.343464   22121 system_pods.go:89] "coredns-7c65d6cfc9-m8fd7" [fc549709-ddc0-4684-b377-46d33ef8f03d] Running
	I0916 10:41:26.343468   22121 system_pods.go:89] "etcd-ha-244475" [08595572-facf-419a-93e3-9b0ea1938f08] Running
	I0916 10:41:26.343471   22121 system_pods.go:89] "etcd-ha-244475-m02" [d58c0d1e-ef12-4e50-b4d8-86f60754b93d] Running
	I0916 10:41:26.343474   22121 system_pods.go:89] "etcd-ha-244475-m03" [e741d8c7-f12c-4fa1-b3cc-582043ca312d] Running
	I0916 10:41:26.343477   22121 system_pods.go:89] "kindnet-7v2cl" [764ade4d-cbcd-42b8-9d68-b4ed502de9eb] Running
	I0916 10:41:26.343481   22121 system_pods.go:89] "kindnet-rzwwj" [ffe109a7-d477-4b8a-ab62-4e4ceec1b4ed] Running
	I0916 10:41:26.343485   22121 system_pods.go:89] "kindnet-xvp82" [3140a3e7-ac3b-4882-b150-20a313e2f20c] Running
	I0916 10:41:26.343490   22121 system_pods.go:89] "kube-apiserver-ha-244475" [b0ea2226-42de-4488-b8fb-73a6828320fc] Running
	I0916 10:41:26.343495   22121 system_pods.go:89] "kube-apiserver-ha-244475-m02" [1e384f04-33c2-49f1-afc0-48807202a04c] Running
	I0916 10:41:26.343501   22121 system_pods.go:89] "kube-apiserver-ha-244475-m03" [469c5743-509f-4c1c-b46e-fa3e6e79a673] Running
	I0916 10:41:26.343509   22121 system_pods.go:89] "kube-controller-manager-ha-244475" [98883403-0a22-486c-aa3a-a3720a5cbfb7] Running
	I0916 10:41:26.343515   22121 system_pods.go:89] "kube-controller-manager-ha-244475-m02" [9e148533-4562-426b-9e8b-3aead772739b] Running
	I0916 10:41:26.343524   22121 system_pods.go:89] "kube-controller-manager-ha-244475-m03" [1054e7df-9598-41de-a412-f18d3ffff1cb] Running
	I0916 10:41:26.343530   22121 system_pods.go:89] "kube-proxy-crttt" [0c8cad04-2c64-42f9-85e2-5e4fbfe7961d] Running
	I0916 10:41:26.343536   22121 system_pods.go:89] "kube-proxy-g5v5l" [102f8d6f-4cb4-4c59-ae99-acccabb9fb8e] Running
	I0916 10:41:26.343548   22121 system_pods.go:89] "kube-proxy-t454b" [49b7dda6-9a09-4b7d-8adc-568f2fa10ad6] Running
	I0916 10:41:26.343554   22121 system_pods.go:89] "kube-scheduler-ha-244475" [c9527c08-f10b-4d85-9f72-0d0893297b14] Running
	I0916 10:41:26.343558   22121 system_pods.go:89] "kube-scheduler-ha-244475-m02" [bf332de1-6793-4485-9d93-38368d86c6a5] Running
	I0916 10:41:26.343563   22121 system_pods.go:89] "kube-scheduler-ha-244475-m03" [90b5bffb-165c-4620-b90a-e9f1d3f4c323] Running
	I0916 10:41:26.343567   22121 system_pods.go:89] "kube-vip-ha-244475" [94b4d383-a0e8-4686-b108-923c0235f371] Running
	I0916 10:41:26.343570   22121 system_pods.go:89] "kube-vip-ha-244475-m02" [6f0a6023-be76-458b-9344-ff51083a217e] Running
	I0916 10:41:26.343573   22121 system_pods.go:89] "kube-vip-ha-244475-m03" [b507cf83-f056-4ab3-b276-4f477ee77747] Running
	I0916 10:41:26.343578   22121 system_pods.go:89] "storage-provisioner" [2e1264f7-2197-4821-8238-82fac849b145] Running
	I0916 10:41:26.343589   22121 system_pods.go:126] duration metric: took 207.195971ms to wait for k8s-apps to be running ...
	I0916 10:41:26.343599   22121 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 10:41:26.343650   22121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:41:26.359495   22121 system_svc.go:56] duration metric: took 15.88709ms WaitForService to wait for kubelet
	I0916 10:41:26.359526   22121 kubeadm.go:582] duration metric: took 23.064804714s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:41:26.359547   22121 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:41:26.529951   22121 request.go:632] Waited for 170.330403ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes
	I0916 10:41:26.530026   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes
	I0916 10:41:26.530033   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:26.530043   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:26.530050   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:26.536030   22121 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0916 10:41:26.537495   22121 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0916 10:41:26.537520   22121 node_conditions.go:123] node cpu capacity is 2
	I0916 10:41:26.537534   22121 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0916 10:41:26.537539   22121 node_conditions.go:123] node cpu capacity is 2
	I0916 10:41:26.537545   22121 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0916 10:41:26.537549   22121 node_conditions.go:123] node cpu capacity is 2
	I0916 10:41:26.537554   22121 node_conditions.go:105] duration metric: took 178.001679ms to run NodePressure ...
	I0916 10:41:26.537572   22121 start.go:241] waiting for startup goroutines ...
	I0916 10:41:26.537599   22121 start.go:255] writing updated cluster config ...
	I0916 10:41:26.538305   22121 ssh_runner.go:195] Run: rm -f paused
	I0916 10:41:26.548959   22121 out.go:177] * Done! kubectl is now configured to use "ha-244475" cluster and "default" namespace by default
	E0916 10:41:26.550066   22121 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
	
	
	==> CRI-O <==
	Sep 16 10:42:26 ha-244475 crio[667]: time="2024-09-16 10:42:26.724044821Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ff58739c-d682-45be-9dd6-bceb7f2c0510 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:42:26 ha-244475 crio[667]: time="2024-09-16 10:42:26.724381937Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5c701fcd74abafe9b3ac301067c34d7098459eff6a0dfbe35f14fb698beab51b,PodSandboxId:ed1838f7506b4591d4ce8db6e7b3e3f56562a08f00651212eea0bb2549e4392c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726483289055277109,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-d4m5s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c479ead-4e77-41ca-9e2e-5cd7dc781761,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:034030626ec0248410f35cc75f1a050ff672de87669017788a692eecfb974eb3,PodSandboxId:159730a21bea66d0374bcb65726757074bf5cb44a554b65743cbe055ce68f57d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726483151504105266,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m8fd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc549709-ddc0-4684-b377-46d33ef8f03d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f78c5e4a3a25c53eb3051665aed0aed847ccf5bc052d51eb080f44bbb956465,PodSandboxId:4d8c4f0a29bb7d9b84f11c3329413ddb9100b0afcf2bfdafe7d492249e8d29aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726483151498442305,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lzrg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
51962d07-f38a-4db3-86ee-af3d954dbec6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b16f64da09faea0d2ff3154541845e96ee1e6da2b018e77e618dcf0a3d246a99,PodSandboxId:66086953ec65ff443b277a25da98697cdab5664f13ce0f035b2961dd540a8f99,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726483149914383595,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e1264f7-2197-4821-8238-82fac849b145,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac63170bf5bb3e138ee2902ee9d3245bf86aad0f7cd971de1989c1c8d4b08913,PodSandboxId:9c8ab7a98f7497eec011967ba3c30c63e7f332a19b817633e32195e3f1c7958b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17264831
38080656744,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7v2cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 764ade4d-cbcd-42b8-9d68-b4ed502de9eb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e6d69b26d5c1088c047248a212bd53f5e13828d4e0a7c2156e0ddc88ac76ccf,PodSandboxId:3fbb7c8e9af7172c733fa210b0085f7297ed8632dadffa7ad36fcf99012b9a72,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726483137842379282,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-crttt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c8cad04-2c64-42f9-85e2-5e4fbfe7961d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62c031e0ed0a9dd545dae65ca505f6ee4aa741ac7ab305ffd396ddb0a1faa045,PodSandboxId:f76913fe7302a4fa8d7619af601b5246c7ab7fd3482731bf5f2128c885274602,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726483128784978351,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dcb42d1621bd2afde7f39a79dd541d5,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0223669288e248ff9be2473e59f09af3156f136f2627463151e0bc52191ddfb,PodSandboxId:42a76bc40dc3e2b105fd3eb70534ffb1703abed4e811e57f667c791cb8af94ce,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726483126505887348,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caad457f3675fcf5fa9c2e121ebd3a2a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13162d4bf94f734f5e68305ac96200c2f81bebc9b5ffbbfb3b0862980bc16fa1,PodSandboxId:ec0d4cf0dd9b785181c7ac24b3174a788202f97398df008bd80c06f6e612c16f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726483126417390372,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubern
etes.pod.name: kube-apiserver-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcc439ebdfb1c8eb0ac4d211479d24ca,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:308650af833f609d658eacf1b2b1c8baa29f17b904e878b9f27e62fe4fb823d3,PodSandboxId:693cfec22177da2ee98748363dd5cc765290555de3c0bcc51d452517bd92abaf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726483126350971239,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-244475,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 520edd0e46592c17928a302783a221a2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f16e87fb57b2b25fed520499a45d5fd8f1e01c96a747662b637d0b1cc2e56113,PodSandboxId:fad8ac85cdf54bd87da40cadbda9fd41ab84e1550361b91b5242a7ba9f4ba28b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726483126307755222,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-244475,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0485b752bb66b84c639fb8d5b648be4a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ff58739c-d682-45be-9dd6-bceb7f2c0510 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:42:26 ha-244475 crio[667]: time="2024-09-16 10:42:26.765139699Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4153cf80-49d3-416c-9f9c-c441a8c67167 name=/runtime.v1.RuntimeService/Version
	Sep 16 10:42:26 ha-244475 crio[667]: time="2024-09-16 10:42:26.765212873Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4153cf80-49d3-416c-9f9c-c441a8c67167 name=/runtime.v1.RuntimeService/Version
	Sep 16 10:42:26 ha-244475 crio[667]: time="2024-09-16 10:42:26.766676402Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a02933f6-b8c0-40a2-ac42-380501efe483 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:42:26 ha-244475 crio[667]: time="2024-09-16 10:42:26.767131385Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483346767107635,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a02933f6-b8c0-40a2-ac42-380501efe483 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:42:26 ha-244475 crio[667]: time="2024-09-16 10:42:26.767749933Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d7eec6a6-3b06-4ac1-b70d-89a29fe84950 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:42:26 ha-244475 crio[667]: time="2024-09-16 10:42:26.767815743Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d7eec6a6-3b06-4ac1-b70d-89a29fe84950 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:42:26 ha-244475 crio[667]: time="2024-09-16 10:42:26.768033130Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5c701fcd74abafe9b3ac301067c34d7098459eff6a0dfbe35f14fb698beab51b,PodSandboxId:ed1838f7506b4591d4ce8db6e7b3e3f56562a08f00651212eea0bb2549e4392c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726483289055277109,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-d4m5s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c479ead-4e77-41ca-9e2e-5cd7dc781761,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:034030626ec0248410f35cc75f1a050ff672de87669017788a692eecfb974eb3,PodSandboxId:159730a21bea66d0374bcb65726757074bf5cb44a554b65743cbe055ce68f57d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726483151504105266,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m8fd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc549709-ddc0-4684-b377-46d33ef8f03d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f78c5e4a3a25c53eb3051665aed0aed847ccf5bc052d51eb080f44bbb956465,PodSandboxId:4d8c4f0a29bb7d9b84f11c3329413ddb9100b0afcf2bfdafe7d492249e8d29aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726483151498442305,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lzrg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
51962d07-f38a-4db3-86ee-af3d954dbec6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b16f64da09faea0d2ff3154541845e96ee1e6da2b018e77e618dcf0a3d246a99,PodSandboxId:66086953ec65ff443b277a25da98697cdab5664f13ce0f035b2961dd540a8f99,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726483149914383595,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e1264f7-2197-4821-8238-82fac849b145,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac63170bf5bb3e138ee2902ee9d3245bf86aad0f7cd971de1989c1c8d4b08913,PodSandboxId:9c8ab7a98f7497eec011967ba3c30c63e7f332a19b817633e32195e3f1c7958b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17264831
38080656744,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7v2cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 764ade4d-cbcd-42b8-9d68-b4ed502de9eb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e6d69b26d5c1088c047248a212bd53f5e13828d4e0a7c2156e0ddc88ac76ccf,PodSandboxId:3fbb7c8e9af7172c733fa210b0085f7297ed8632dadffa7ad36fcf99012b9a72,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726483137842379282,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-crttt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c8cad04-2c64-42f9-85e2-5e4fbfe7961d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62c031e0ed0a9dd545dae65ca505f6ee4aa741ac7ab305ffd396ddb0a1faa045,PodSandboxId:f76913fe7302a4fa8d7619af601b5246c7ab7fd3482731bf5f2128c885274602,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726483128784978351,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dcb42d1621bd2afde7f39a79dd541d5,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0223669288e248ff9be2473e59f09af3156f136f2627463151e0bc52191ddfb,PodSandboxId:42a76bc40dc3e2b105fd3eb70534ffb1703abed4e811e57f667c791cb8af94ce,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726483126505887348,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caad457f3675fcf5fa9c2e121ebd3a2a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13162d4bf94f734f5e68305ac96200c2f81bebc9b5ffbbfb3b0862980bc16fa1,PodSandboxId:ec0d4cf0dd9b785181c7ac24b3174a788202f97398df008bd80c06f6e612c16f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726483126417390372,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubern
etes.pod.name: kube-apiserver-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcc439ebdfb1c8eb0ac4d211479d24ca,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:308650af833f609d658eacf1b2b1c8baa29f17b904e878b9f27e62fe4fb823d3,PodSandboxId:693cfec22177da2ee98748363dd5cc765290555de3c0bcc51d452517bd92abaf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726483126350971239,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-244475,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 520edd0e46592c17928a302783a221a2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f16e87fb57b2b25fed520499a45d5fd8f1e01c96a747662b637d0b1cc2e56113,PodSandboxId:fad8ac85cdf54bd87da40cadbda9fd41ab84e1550361b91b5242a7ba9f4ba28b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726483126307755222,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-244475,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0485b752bb66b84c639fb8d5b648be4a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d7eec6a6-3b06-4ac1-b70d-89a29fe84950 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:42:26 ha-244475 crio[667]: time="2024-09-16 10:42:26.812080235Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d7476bd2-60b8-4fc7-a740-964d400f173c name=/runtime.v1.RuntimeService/Version
	Sep 16 10:42:26 ha-244475 crio[667]: time="2024-09-16 10:42:26.812194166Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d7476bd2-60b8-4fc7-a740-964d400f173c name=/runtime.v1.RuntimeService/Version
	Sep 16 10:42:26 ha-244475 crio[667]: time="2024-09-16 10:42:26.813747360Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6ddd3c41-28bb-414c-94f2-bc8adb9e6bf8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:42:26 ha-244475 crio[667]: time="2024-09-16 10:42:26.814892289Z" level=debug msg="Request: &VersionRequest{Version:0.1.0,}" file="otel-collector/interceptors.go:62" id=5fe0961e-88c8-453a-a991-5a4176b4f222 name=/runtime.v1.RuntimeService/Version
	Sep 16 10:42:26 ha-244475 crio[667]: time="2024-09-16 10:42:26.814974200Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5fe0961e-88c8-453a-a991-5a4176b4f222 name=/runtime.v1.RuntimeService/Version
	Sep 16 10:42:26 ha-244475 crio[667]: time="2024-09-16 10:42:26.815609166Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483346815586920,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6ddd3c41-28bb-414c-94f2-bc8adb9e6bf8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:42:26 ha-244475 crio[667]: time="2024-09-16 10:42:26.816333823Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a10941c1-0a64-48aa-a9ca-2e005e690758 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:42:26 ha-244475 crio[667]: time="2024-09-16 10:42:26.816405783Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a10941c1-0a64-48aa-a9ca-2e005e690758 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:42:26 ha-244475 crio[667]: time="2024-09-16 10:42:26.816779865Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5c701fcd74abafe9b3ac301067c34d7098459eff6a0dfbe35f14fb698beab51b,PodSandboxId:ed1838f7506b4591d4ce8db6e7b3e3f56562a08f00651212eea0bb2549e4392c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726483289055277109,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-d4m5s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c479ead-4e77-41ca-9e2e-5cd7dc781761,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:034030626ec0248410f35cc75f1a050ff672de87669017788a692eecfb974eb3,PodSandboxId:159730a21bea66d0374bcb65726757074bf5cb44a554b65743cbe055ce68f57d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726483151504105266,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m8fd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc549709-ddc0-4684-b377-46d33ef8f03d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f78c5e4a3a25c53eb3051665aed0aed847ccf5bc052d51eb080f44bbb956465,PodSandboxId:4d8c4f0a29bb7d9b84f11c3329413ddb9100b0afcf2bfdafe7d492249e8d29aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726483151498442305,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lzrg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
51962d07-f38a-4db3-86ee-af3d954dbec6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b16f64da09faea0d2ff3154541845e96ee1e6da2b018e77e618dcf0a3d246a99,PodSandboxId:66086953ec65ff443b277a25da98697cdab5664f13ce0f035b2961dd540a8f99,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726483149914383595,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e1264f7-2197-4821-8238-82fac849b145,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac63170bf5bb3e138ee2902ee9d3245bf86aad0f7cd971de1989c1c8d4b08913,PodSandboxId:9c8ab7a98f7497eec011967ba3c30c63e7f332a19b817633e32195e3f1c7958b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17264831
38080656744,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7v2cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 764ade4d-cbcd-42b8-9d68-b4ed502de9eb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e6d69b26d5c1088c047248a212bd53f5e13828d4e0a7c2156e0ddc88ac76ccf,PodSandboxId:3fbb7c8e9af7172c733fa210b0085f7297ed8632dadffa7ad36fcf99012b9a72,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726483137842379282,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-crttt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c8cad04-2c64-42f9-85e2-5e4fbfe7961d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62c031e0ed0a9dd545dae65ca505f6ee4aa741ac7ab305ffd396ddb0a1faa045,PodSandboxId:f76913fe7302a4fa8d7619af601b5246c7ab7fd3482731bf5f2128c885274602,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726483128784978351,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dcb42d1621bd2afde7f39a79dd541d5,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0223669288e248ff9be2473e59f09af3156f136f2627463151e0bc52191ddfb,PodSandboxId:42a76bc40dc3e2b105fd3eb70534ffb1703abed4e811e57f667c791cb8af94ce,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726483126505887348,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caad457f3675fcf5fa9c2e121ebd3a2a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13162d4bf94f734f5e68305ac96200c2f81bebc9b5ffbbfb3b0862980bc16fa1,PodSandboxId:ec0d4cf0dd9b785181c7ac24b3174a788202f97398df008bd80c06f6e612c16f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726483126417390372,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubern
etes.pod.name: kube-apiserver-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcc439ebdfb1c8eb0ac4d211479d24ca,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:308650af833f609d658eacf1b2b1c8baa29f17b904e878b9f27e62fe4fb823d3,PodSandboxId:693cfec22177da2ee98748363dd5cc765290555de3c0bcc51d452517bd92abaf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726483126350971239,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-244475,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 520edd0e46592c17928a302783a221a2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f16e87fb57b2b25fed520499a45d5fd8f1e01c96a747662b637d0b1cc2e56113,PodSandboxId:fad8ac85cdf54bd87da40cadbda9fd41ab84e1550361b91b5242a7ba9f4ba28b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726483126307755222,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-244475,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0485b752bb66b84c639fb8d5b648be4a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a10941c1-0a64-48aa-a9ca-2e005e690758 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:42:26 ha-244475 crio[667]: time="2024-09-16 10:42:26.858895361Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4f127520-aa53-4bf8-b5a8-bc9fe78b05ac name=/runtime.v1.RuntimeService/Version
	Sep 16 10:42:26 ha-244475 crio[667]: time="2024-09-16 10:42:26.858970138Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4f127520-aa53-4bf8-b5a8-bc9fe78b05ac name=/runtime.v1.RuntimeService/Version
	Sep 16 10:42:26 ha-244475 crio[667]: time="2024-09-16 10:42:26.860409978Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=29116505-2d4b-44d7-8231-93754727d5d0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:42:26 ha-244475 crio[667]: time="2024-09-16 10:42:26.861005618Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483346860977780,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=29116505-2d4b-44d7-8231-93754727d5d0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:42:26 ha-244475 crio[667]: time="2024-09-16 10:42:26.861700646Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bcde7414-66bd-4496-9dff-b279d6f5756f name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:42:26 ha-244475 crio[667]: time="2024-09-16 10:42:26.861770561Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bcde7414-66bd-4496-9dff-b279d6f5756f name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:42:26 ha-244475 crio[667]: time="2024-09-16 10:42:26.862048794Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5c701fcd74abafe9b3ac301067c34d7098459eff6a0dfbe35f14fb698beab51b,PodSandboxId:ed1838f7506b4591d4ce8db6e7b3e3f56562a08f00651212eea0bb2549e4392c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726483289055277109,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-d4m5s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c479ead-4e77-41ca-9e2e-5cd7dc781761,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:034030626ec0248410f35cc75f1a050ff672de87669017788a692eecfb974eb3,PodSandboxId:159730a21bea66d0374bcb65726757074bf5cb44a554b65743cbe055ce68f57d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726483151504105266,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m8fd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc549709-ddc0-4684-b377-46d33ef8f03d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f78c5e4a3a25c53eb3051665aed0aed847ccf5bc052d51eb080f44bbb956465,PodSandboxId:4d8c4f0a29bb7d9b84f11c3329413ddb9100b0afcf2bfdafe7d492249e8d29aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726483151498442305,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lzrg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
51962d07-f38a-4db3-86ee-af3d954dbec6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b16f64da09faea0d2ff3154541845e96ee1e6da2b018e77e618dcf0a3d246a99,PodSandboxId:66086953ec65ff443b277a25da98697cdab5664f13ce0f035b2961dd540a8f99,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726483149914383595,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e1264f7-2197-4821-8238-82fac849b145,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac63170bf5bb3e138ee2902ee9d3245bf86aad0f7cd971de1989c1c8d4b08913,PodSandboxId:9c8ab7a98f7497eec011967ba3c30c63e7f332a19b817633e32195e3f1c7958b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17264831
38080656744,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7v2cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 764ade4d-cbcd-42b8-9d68-b4ed502de9eb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e6d69b26d5c1088c047248a212bd53f5e13828d4e0a7c2156e0ddc88ac76ccf,PodSandboxId:3fbb7c8e9af7172c733fa210b0085f7297ed8632dadffa7ad36fcf99012b9a72,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726483137842379282,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-crttt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c8cad04-2c64-42f9-85e2-5e4fbfe7961d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62c031e0ed0a9dd545dae65ca505f6ee4aa741ac7ab305ffd396ddb0a1faa045,PodSandboxId:f76913fe7302a4fa8d7619af601b5246c7ab7fd3482731bf5f2128c885274602,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726483128784978351,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dcb42d1621bd2afde7f39a79dd541d5,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0223669288e248ff9be2473e59f09af3156f136f2627463151e0bc52191ddfb,PodSandboxId:42a76bc40dc3e2b105fd3eb70534ffb1703abed4e811e57f667c791cb8af94ce,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726483126505887348,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caad457f3675fcf5fa9c2e121ebd3a2a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13162d4bf94f734f5e68305ac96200c2f81bebc9b5ffbbfb3b0862980bc16fa1,PodSandboxId:ec0d4cf0dd9b785181c7ac24b3174a788202f97398df008bd80c06f6e612c16f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726483126417390372,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubern
etes.pod.name: kube-apiserver-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcc439ebdfb1c8eb0ac4d211479d24ca,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:308650af833f609d658eacf1b2b1c8baa29f17b904e878b9f27e62fe4fb823d3,PodSandboxId:693cfec22177da2ee98748363dd5cc765290555de3c0bcc51d452517bd92abaf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726483126350971239,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-244475,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 520edd0e46592c17928a302783a221a2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f16e87fb57b2b25fed520499a45d5fd8f1e01c96a747662b637d0b1cc2e56113,PodSandboxId:fad8ac85cdf54bd87da40cadbda9fd41ab84e1550361b91b5242a7ba9f4ba28b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726483126307755222,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-244475,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0485b752bb66b84c639fb8d5b648be4a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bcde7414-66bd-4496-9dff-b279d6f5756f name=/runtime.v1.RuntimeService/ListContainers
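Editor's note: the Version, ImageFsInfo, and ListContainers request/response pairs in the CRI-O debug log above are routine CRI polling traffic. As a hedged sketch, assuming crictl is available on the guest node (it normally is in minikube images), the same three RPCs can be exercised manually for comparison:

    sudo crictl version       # RuntimeService/Version
    sudo crictl imagefsinfo   # ImageService/ImageFsInfo
    sudo crictl ps -a         # RuntimeService/ListContainers with an empty filter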
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5c701fcd74aba       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   57 seconds ago      Running             busybox                   0                   ed1838f7506b4       busybox-7dff88458-d4m5s
	034030626ec02       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      3 minutes ago       Running             coredns                   0                   159730a21bea6       coredns-7c65d6cfc9-m8fd7
	7f78c5e4a3a25       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      3 minutes ago       Running             coredns                   0                   4d8c4f0a29bb7       coredns-7c65d6cfc9-lzrg2
	b16f64da09fae       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       0                   66086953ec65f       storage-provisioner
	ac63170bf5bb3       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      3 minutes ago       Running             kindnet-cni               0                   9c8ab7a98f749       kindnet-7v2cl
	6e6d69b26d5c1       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      3 minutes ago       Running             kube-proxy                0                   3fbb7c8e9af71       kube-proxy-crttt
	62c031e0ed0a9       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     3 minutes ago       Running             kube-vip                  0                   f76913fe7302a       kube-vip-ha-244475
	a0223669288e2       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      3 minutes ago       Running             kube-scheduler            0                   42a76bc40dc3e       kube-scheduler-ha-244475
	13162d4bf94f7       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      3 minutes ago       Running             kube-apiserver            0                   ec0d4cf0dd9b7       kube-apiserver-ha-244475
	308650af833f6       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      3 minutes ago       Running             etcd                      0                   693cfec22177d       etcd-ha-244475
	f16e87fb57b2b       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      3 minutes ago       Running             kube-controller-manager   0                   fad8ac85cdf54       kube-controller-manager-ha-244475
	
	
	==> coredns [034030626ec0248410f35cc75f1a050ff672de87669017788a692eecfb974eb3] <==
	[INFO] 10.244.2.2:43047 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.055244509s
	[INFO] 10.244.2.2:43779 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000285925s
	[INFO] 10.244.2.2:49571 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000283044s
	[INFO] 10.244.2.2:57761 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004222785s
	[INFO] 10.244.2.2:42931 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000200783s
	[INFO] 10.244.0.4:33694 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014309s
	[INFO] 10.244.0.4:35532 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000107639s
	[INFO] 10.244.0.4:53168 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00009525s
	[INFO] 10.244.0.4:50253 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001250965s
	[INFO] 10.244.0.4:40357 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000089492s
	[INFO] 10.244.1.2:49152 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001985919s
	[INFO] 10.244.1.2:50396 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000132748s
	[INFO] 10.244.2.2:38313 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000951s
	[INFO] 10.244.0.4:43336 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000168268s
	[INFO] 10.244.0.4:44949 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000123895s
	[INFO] 10.244.0.4:52348 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000107748s
	[INFO] 10.244.1.2:36649 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000286063s
	[INFO] 10.244.1.2:42747 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000082265s
	[INFO] 10.244.2.2:45891 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00018425s
	[INFO] 10.244.2.2:53625 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000126302s
	[INFO] 10.244.2.2:44397 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000109098s
	[INFO] 10.244.0.4:39956 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013935s
	[INFO] 10.244.0.4:39139 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00008694s
	[INFO] 10.244.0.4:38933 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000060589s
	[INFO] 10.244.1.2:36849 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000146451s
	
	
	==> coredns [7f78c5e4a3a25c53eb3051665aed0aed847ccf5bc052d51eb080f44bbb956465] <==
	[INFO] 10.244.0.4:51676 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000096142s
	[INFO] 10.244.1.2:33245 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001877876s
	[INFO] 10.244.2.2:52615 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000191836s
	[INFO] 10.244.2.2:49834 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000166519s
	[INFO] 10.244.2.2:39495 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000127494s
	[INFO] 10.244.0.4:37394 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001694487s
	[INFO] 10.244.0.4:36178 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000091958s
	[INFO] 10.244.0.4:33247 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000160731s
	[INFO] 10.244.1.2:52512 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150889s
	[INFO] 10.244.1.2:43450 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000182534s
	[INFO] 10.244.1.2:56403 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000150359s
	[INFO] 10.244.1.2:51246 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001230547s
	[INFO] 10.244.1.2:39220 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090721s
	[INFO] 10.244.1.2:41766 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000155057s
	[INFO] 10.244.2.2:38017 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000153103s
	[INFO] 10.244.2.2:44469 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000099361s
	[INFO] 10.244.2.2:52465 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000086382s
	[INFO] 10.244.0.4:36474 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000117775s
	[INFO] 10.244.1.2:32790 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142151s
	[INFO] 10.244.1.2:39272 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000113629s
	[INFO] 10.244.2.2:43223 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000141566s
	[INFO] 10.244.0.4:36502 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000282073s
	[INFO] 10.244.1.2:60302 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000207499s
	[INFO] 10.244.1.2:49950 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000184993s
	[INFO] 10.244.1.2:54052 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000094371s
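Editor's note: the per-query lines above come from CoreDNS's log plugin; each records the client address, query type and name, response code, flags, and latency. If this capture is too short, the same logs can be pulled straight from the node's container runtime. A minimal sketch, assuming the profile name ha-244475 and taking the container ID from the section header above (minikube ssh and crictl usage shown for illustration):

    # Open a shell on the primary control-plane node of this HA profile
    minikube -p ha-244475 ssh
    # On the node: list the CoreDNS containers, then dump one by the ID shown in the header
    sudo crictl ps --name coredns
    sudo crictl logs 7f78c5e4a3a25c53eb3051665aed0aed847ccf5bc052d51eb080f44bbb956465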
	
	
	==> describe nodes <==
	Name:               ha-244475
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-244475
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=ha-244475
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_38_53_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:38:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-244475
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:42:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:41:56 +0000   Mon, 16 Sep 2024 10:38:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:41:56 +0000   Mon, 16 Sep 2024 10:38:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:41:56 +0000   Mon, 16 Sep 2024 10:38:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:41:56 +0000   Mon, 16 Sep 2024 10:39:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.19
	  Hostname:    ha-244475
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8707c2bcd2ba47818dfac2382d400cf1
	  System UUID:                8707c2bc-d2ba-4781-8dfa-c2382d400cf1
	  Boot ID:                    174ade31-14cd-4b32-9050-92f81ba6b3e1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-d4m5s              0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 coredns-7c65d6cfc9-lzrg2             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     3m30s
	  kube-system                 coredns-7c65d6cfc9-m8fd7             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     3m30s
	  kube-system                 etcd-ha-244475                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m35s
	  kube-system                 kindnet-7v2cl                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m30s
	  kube-system                 kube-apiserver-ha-244475             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m37s
	  kube-system                 kube-controller-manager-ha-244475    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                 kube-proxy-crttt                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m30s
	  kube-system                 kube-scheduler-ha-244475             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                 kube-vip-ha-244475                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m37s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m28s                  kube-proxy       
	  Normal  NodeHasSufficientPID     3m42s (x7 over 3m42s)  kubelet          Node ha-244475 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m42s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 3m42s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m42s (x8 over 3m42s)  kubelet          Node ha-244475 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m42s (x8 over 3m42s)  kubelet          Node ha-244475 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 3m35s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  3m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m35s                  kubelet          Node ha-244475 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m35s                  kubelet          Node ha-244475 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m35s                  kubelet          Node ha-244475 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m31s                  node-controller  Node ha-244475 event: Registered Node ha-244475 in Controller
	  Normal  NodeReady                3m18s                  kubelet          Node ha-244475 status is now: NodeReady
	  Normal  RegisteredNode           2m34s                  node-controller  Node ha-244475 event: Registered Node ha-244475 in Controller
	  Normal  RegisteredNode           80s                    node-controller  Node ha-244475 event: Registered Node ha-244475 in Controller
	
	
	Name:               ha-244475-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-244475-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=ha-244475
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T10_39_47_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:39:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-244475-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:42:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:41:47 +0000   Mon, 16 Sep 2024 10:39:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:41:47 +0000   Mon, 16 Sep 2024 10:39:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:41:47 +0000   Mon, 16 Sep 2024 10:39:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:41:47 +0000   Mon, 16 Sep 2024 10:40:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.222
	  Hostname:    ha-244475-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 bfb45c96351d4aafade2443c380b5343
	  System UUID:                bfb45c96-351d-4aaf-ade2-443c380b5343
	  Boot ID:                    d827e65a-7fd8-4399-b348-231b704c25ea
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-t6fmb                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 etcd-ha-244475-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m41s
	  kube-system                 kindnet-xvp82                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m43s
	  kube-system                 kube-apiserver-ha-244475-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m41s
	  kube-system                 kube-controller-manager-ha-244475-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m41s
	  kube-system                 kube-proxy-t454b                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m43s
	  kube-system                 kube-scheduler-ha-244475-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m35s
	  kube-system                 kube-vip-ha-244475-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m38s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m43s (x8 over 2m43s)  kubelet          Node ha-244475-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m43s (x8 over 2m43s)  kubelet          Node ha-244475-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m43s (x7 over 2m43s)  kubelet          Node ha-244475-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m43s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m41s                  node-controller  Node ha-244475-m02 event: Registered Node ha-244475-m02 in Controller
	  Normal  RegisteredNode           2m34s                  node-controller  Node ha-244475-m02 event: Registered Node ha-244475-m02 in Controller
	  Normal  RegisteredNode           80s                    node-controller  Node ha-244475-m02 event: Registered Node ha-244475-m02 in Controller
	
	
	Name:               ha-244475-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-244475-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=ha-244475
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T10_41_02_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:40:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-244475-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:42:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:41:29 +0000   Mon, 16 Sep 2024 10:40:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:41:29 +0000   Mon, 16 Sep 2024 10:40:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:41:29 +0000   Mon, 16 Sep 2024 10:40:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:41:29 +0000   Mon, 16 Sep 2024 10:41:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.127
	  Hostname:    ha-244475-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d01912e060494092a8b6a2df64a0a30c
	  System UUID:                d01912e0-6049-4092-a8b6-a2df64a0a30c
	  Boot ID:                    1fb9da41-3fb9-4db3-bca0-b0c15d7a9875
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-7bhqg                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 etcd-ha-244475-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         86s
	  kube-system                 kindnet-rzwwj                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      88s
	  kube-system                 kube-apiserver-ha-244475-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-controller-manager-ha-244475-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-proxy-g5v5l                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 kube-scheduler-ha-244475-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 kube-vip-ha-244475-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         83s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 84s                kube-proxy       
	  Normal  NodeHasSufficientMemory  88s (x8 over 88s)  kubelet          Node ha-244475-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    88s (x8 over 88s)  kubelet          Node ha-244475-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     88s (x7 over 88s)  kubelet          Node ha-244475-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  88s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           86s                node-controller  Node ha-244475-m03 event: Registered Node ha-244475-m03 in Controller
	  Normal  RegisteredNode           84s                node-controller  Node ha-244475-m03 event: Registered Node ha-244475-m03 in Controller
	  Normal  RegisteredNode           80s                node-controller  Node ha-244475-m03 event: Registered Node ha-244475-m03 in Controller
	
	
	Name:               ha-244475-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-244475-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=ha-244475
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T10_42_00_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:41:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-244475-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:42:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:42:19 +0000   Mon, 16 Sep 2024 10:41:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:42:19 +0000   Mon, 16 Sep 2024 10:41:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:42:19 +0000   Mon, 16 Sep 2024 10:41:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:42:19 +0000   Mon, 16 Sep 2024 10:42:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.110
	  Hostname:    ha-244475-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 42083a2d4bb24e16b292c8834cbe5824
	  System UUID:                42083a2d-4bb2-4e16-b292-c8834cbe5824
	  Boot ID:                    4513a05d-6164-4c3b-91e3-07f7c103c2f9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-dflt4       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      28s
	  kube-system                 kube-proxy-kp7hv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 22s                kube-proxy       
	  Normal  NodeHasSufficientMemory  28s (x2 over 28s)  kubelet          Node ha-244475-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28s (x2 over 28s)  kubelet          Node ha-244475-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28s (x2 over 28s)  kubelet          Node ha-244475-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  28s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           26s                node-controller  Node ha-244475-m04 event: Registered Node ha-244475-m04 in Controller
	  Normal  RegisteredNode           24s                node-controller  Node ha-244475-m04 event: Registered Node ha-244475-m04 in Controller
	  Normal  RegisteredNode           24s                node-controller  Node ha-244475-m04 event: Registered Node ha-244475-m04 in Controller
	  Normal  NodeReady                8s                 kubelet          Node ha-244475-m04 status is now: NodeReady
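Editor's note: the four node dumps above are the equivalent of running kubectl describe node against each member of the ha-244475 cluster. To re-check node state interactively, something like the following works; it assumes the kubeconfig context is named after the minikube profile, as it is elsewhere in this report:

    # Summarize all nodes, including roles, versions, and internal IPs
    kubectl --context ha-244475 get nodes -o wide
    # Inspect one node's conditions, capacity, and recent events
    kubectl --context ha-244475 describe node ha-244475-m04
    # Just the Ready condition, via JSONPath
    kubectl --context ha-244475 get node ha-244475-m04 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'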
	
	
	==> dmesg <==
	[Sep16 10:38] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050568] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040051] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.803306] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.430603] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.601752] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.139824] systemd-fstab-generator[589]: Ignoring "noauto" option for root device
	[  +0.054792] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058211] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.173707] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +0.144769] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +0.277555] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +3.915448] systemd-fstab-generator[751]: Ignoring "noauto" option for root device
	[  +3.568561] systemd-fstab-generator[882]: Ignoring "noauto" option for root device
	[  +0.067639] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.970048] systemd-fstab-generator[1302]: Ignoring "noauto" option for root device
	[  +0.087420] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.371465] kauditd_printk_skb: 21 callbacks suppressed
	[Sep16 10:39] kauditd_printk_skb: 38 callbacks suppressed
	[ +41.620280] kauditd_printk_skb: 28 callbacks suppressed
	
	
	==> etcd [308650af833f609d658eacf1b2b1c8baa29f17b904e878b9f27e62fe4fb823d3] <==
	{"level":"info","ts":"2024-09-16T10:41:01.197935Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"e16a89b9eb3a3bb1"}
	{"level":"info","ts":"2024-09-16T10:41:01.199389Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"683e1d26ac7e3123","remote-peer-id":"e16a89b9eb3a3bb1"}
	{"level":"info","ts":"2024-09-16T10:41:01.199262Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"683e1d26ac7e3123","remote-peer-id":"e16a89b9eb3a3bb1"}
	{"level":"warn","ts":"2024-09-16T10:41:01.241039Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"e16a89b9eb3a3bb1","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"info","ts":"2024-09-16T10:41:01.313814Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"683e1d26ac7e3123","to":"e16a89b9eb3a3bb1","stream-type":"stream Message"}
	{"level":"info","ts":"2024-09-16T10:41:01.313973Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"683e1d26ac7e3123","remote-peer-id":"e16a89b9eb3a3bb1"}
	{"level":"info","ts":"2024-09-16T10:41:01.324472Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"683e1d26ac7e3123","to":"e16a89b9eb3a3bb1","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-09-16T10:41:01.324626Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"683e1d26ac7e3123","remote-peer-id":"e16a89b9eb3a3bb1"}
	{"level":"info","ts":"2024-09-16T10:41:02.243646Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"683e1d26ac7e3123 switched to configuration voters=(7511473280440480035 16242946437673532337 17357719710197446810)"}
	{"level":"info","ts":"2024-09-16T10:41:02.243935Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"3f32d84448c0bab8","local-member-id":"683e1d26ac7e3123"}
	{"level":"info","ts":"2024-09-16T10:41:02.244117Z","caller":"etcdserver/server.go:1996","msg":"applied a configuration change through raft","local-member-id":"683e1d26ac7e3123","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"e16a89b9eb3a3bb1"}
	{"level":"info","ts":"2024-09-16T10:41:59.674635Z","caller":"traceutil/trace.go:171","msg":"trace[488574962] linearizableReadLoop","detail":"{readStateIndex:1401; appliedIndex:1402; }","duration":"147.736374ms","start":"2024-09-16T10:41:59.526873Z","end":"2024-09-16T10:41:59.674609Z","steps":["trace[488574962] 'read index received'  (duration: 147.733273ms)","trace[488574962] 'applied index is now lower than readState.Index'  (duration: 2.326µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-16T10:41:59.761235Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"234.288679ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T10:41:59.762293Z","caller":"traceutil/trace.go:171","msg":"trace[2129512316] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1223; }","duration":"235.425359ms","start":"2024-09-16T10:41:59.526847Z","end":"2024-09-16T10:41:59.762272Z","steps":["trace[2129512316] 'agreement among raft nodes before linearized reading'  (duration: 148.231334ms)","trace[2129512316] 'range keys from in-memory index tree'  (duration: 86.028498ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-16T10:41:59.761640Z","caller":"traceutil/trace.go:171","msg":"trace[1309913912] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1223; }","duration":"243.752193ms","start":"2024-09-16T10:41:59.517860Z","end":"2024-09-16T10:41:59.761612Z","steps":["trace[1309913912] 'process raft request'  (duration: 157.256037ms)","trace[1309913912] 'compare'  (duration: 85.201041ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-16T10:41:59.761909Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"168.257866ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/plndr-cp-lock\" ","response":"range_response_count:1 size:434"}
	{"level":"warn","ts":"2024-09-16T10:41:59.761977Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"216.01727ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1109"}
	{"level":"info","ts":"2024-09-16T10:41:59.764051Z","caller":"traceutil/trace.go:171","msg":"trace[1829342019] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1223; }","duration":"218.081825ms","start":"2024-09-16T10:41:59.545954Z","end":"2024-09-16T10:41:59.764036Z","steps":["trace[1829342019] 'agreement among raft nodes before linearized reading'  (duration: 215.972049ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:41:59.765404Z","caller":"traceutil/trace.go:171","msg":"trace[1062361429] range","detail":"{range_begin:/registry/leases/kube-system/plndr-cp-lock; range_end:; response_count:1; response_revision:1223; }","duration":"170.681736ms","start":"2024-09-16T10:41:59.593638Z","end":"2024-09-16T10:41:59.764319Z","steps":["trace[1062361429] 'agreement among raft nodes before linearized reading'  (duration: 168.187188ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:42:08.114328Z","caller":"traceutil/trace.go:171","msg":"trace[1732031508] transaction","detail":"{read_only:false; response_revision:1343; number_of_response:1; }","duration":"184.144725ms","start":"2024-09-16T10:42:07.930170Z","end":"2024-09-16T10:42:08.114314Z","steps":["trace[1732031508] 'process raft request'  (duration: 184.092912ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:42:08.114700Z","caller":"traceutil/trace.go:171","msg":"trace[636585026] transaction","detail":"{read_only:false; response_revision:1342; number_of_response:1; }","duration":"191.516055ms","start":"2024-09-16T10:42:07.923173Z","end":"2024-09-16T10:42:08.114689Z","steps":["trace[636585026] 'process raft request'  (duration: 119.776928ms)","trace[636585026] 'compare'  (duration: 70.904878ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-16T10:42:08.387951Z","caller":"traceutil/trace.go:171","msg":"trace[1409460012] linearizableReadLoop","detail":"{readStateIndex:1564; appliedIndex:1564; }","duration":"136.835319ms","start":"2024-09-16T10:42:08.251100Z","end":"2024-09-16T10:42:08.387935Z","steps":["trace[1409460012] 'read index received'  (duration: 136.829689ms)","trace[1409460012] 'applied index is now lower than readState.Index'  (duration: 3.901µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-16T10:42:08.446891Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"195.771454ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-244475-m04\" ","response":"range_response_count:1 size:2859"}
	{"level":"info","ts":"2024-09-16T10:42:08.447012Z","caller":"traceutil/trace.go:171","msg":"trace[1637847335] range","detail":"{range_begin:/registry/minions/ha-244475-m04; range_end:; response_count:1; response_revision:1343; }","duration":"195.903242ms","start":"2024-09-16T10:42:08.251094Z","end":"2024-09-16T10:42:08.446998Z","steps":["trace[1637847335] 'agreement among raft nodes before linearized reading'  (duration: 136.967679ms)","trace[1637847335] 'range keys from in-memory index tree'  (duration: 58.732689ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-16T10:42:08.447390Z","caller":"traceutil/trace.go:171","msg":"trace[997899094] transaction","detail":"{read_only:false; response_revision:1344; number_of_response:1; }","duration":"245.827449ms","start":"2024-09-16T10:42:08.201548Z","end":"2024-09-16T10:42:08.447376Z","steps":["trace[997899094] 'process raft request'  (duration: 184.925638ms)","trace[997899094] 'compare'  (duration: 60.734274ms)"],"step_count":2}
	
	
	==> kernel <==
	 10:42:27 up 4 min,  0 users,  load average: 0.38, 0.26, 0.11
	Linux ha-244475 5.10.207 #1 SMP Sun Sep 15 20:39:46 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [ac63170bf5bb3e138ee2902ee9d3245bf86aad0f7cd971de1989c1c8d4b08913] <==
	I0916 10:41:49.300787       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0916 10:41:49.300818       1 main.go:322] Node ha-244475-m03 has CIDR [10.244.2.0/24] 
	I0916 10:41:59.300896       1 main.go:295] Handling node with IPs: map[192.168.39.19:{}]
	I0916 10:41:59.300947       1 main.go:299] handling current node
	I0916 10:41:59.300974       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0916 10:41:59.300979       1 main.go:322] Node ha-244475-m02 has CIDR [10.244.1.0/24] 
	I0916 10:41:59.301169       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0916 10:41:59.301395       1 main.go:322] Node ha-244475-m03 has CIDR [10.244.2.0/24] 
	I0916 10:42:09.300574       1 main.go:295] Handling node with IPs: map[192.168.39.19:{}]
	I0916 10:42:09.300683       1 main.go:299] handling current node
	I0916 10:42:09.300717       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0916 10:42:09.300735       1 main.go:322] Node ha-244475-m02 has CIDR [10.244.1.0/24] 
	I0916 10:42:09.300903       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0916 10:42:09.300932       1 main.go:322] Node ha-244475-m03 has CIDR [10.244.2.0/24] 
	I0916 10:42:09.301017       1 main.go:295] Handling node with IPs: map[192.168.39.110:{}]
	I0916 10:42:09.301037       1 main.go:322] Node ha-244475-m04 has CIDR [10.244.3.0/24] 
	I0916 10:42:09.301102       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 192.168.39.110 Flags: [] Table: 0} 
	I0916 10:42:19.300748       1 main.go:295] Handling node with IPs: map[192.168.39.19:{}]
	I0916 10:42:19.300869       1 main.go:299] handling current node
	I0916 10:42:19.300940       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0916 10:42:19.300975       1 main.go:322] Node ha-244475-m02 has CIDR [10.244.1.0/24] 
	I0916 10:42:19.301192       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0916 10:42:19.301234       1 main.go:322] Node ha-244475-m03 has CIDR [10.244.2.0/24] 
	I0916 10:42:19.301316       1 main.go:295] Handling node with IPs: map[192.168.39.110:{}]
	I0916 10:42:19.301344       1 main.go:322] Node ha-244475-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [13162d4bf94f734f5e68305ac96200c2f81bebc9b5ffbbfb3b0862980bc16fa1] <==
	W0916 10:38:51.442192       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.19]
	I0916 10:38:51.443345       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 10:38:51.448673       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 10:38:51.657156       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 10:38:52.610073       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 10:38:52.629898       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0916 10:38:52.640941       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 10:38:57.207096       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0916 10:38:57.359795       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	W0916 10:39:51.439268       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.19 192.168.39.222]
	E0916 10:41:30.050430       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60486: use of closed network connection
	E0916 10:41:30.242968       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60496: use of closed network connection
	E0916 10:41:30.422776       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60516: use of closed network connection
	E0916 10:41:30.667331       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60540: use of closed network connection
	E0916 10:41:30.849977       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60570: use of closed network connection
	E0916 10:41:31.026403       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60598: use of closed network connection
	E0916 10:41:31.216159       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60626: use of closed network connection
	E0916 10:41:31.408973       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60648: use of closed network connection
	E0916 10:41:31.595323       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60664: use of closed network connection
	E0916 10:41:31.892210       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33810: use of closed network connection
	E0916 10:41:32.120845       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33824: use of closed network connection
	E0916 10:41:32.318310       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33836: use of closed network connection
	E0916 10:41:32.517544       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33856: use of closed network connection
	E0916 10:41:32.715949       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33878: use of closed network connection
	E0916 10:41:32.888744       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33890: use of closed network connection
	
	
	==> kube-controller-manager [f16e87fb57b2b25fed520499a45d5fd8f1e01c96a747662b637d0b1cc2e56113] <==
	I0916 10:41:29.430862       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="37.994906ms"
	I0916 10:41:29.431111       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="179.435µs"
	I0916 10:41:29.669353       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m03"
	I0916 10:41:47.948125       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m02"
	I0916 10:41:56.176029       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475"
	E0916 10:41:59.507885       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-7hgrx failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-7hgrx\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E0916 10:41:59.771280       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-7hgrx failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-7hgrx\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0916 10:41:59.890078       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-244475-m04\" does not exist"
	I0916 10:41:59.913033       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-244475-m04" podCIDRs=["10.244.3.0/24"]
	I0916 10:41:59.913138       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m04"
	I0916 10:41:59.913216       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m04"
	I0916 10:41:59.930942       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m04"
	I0916 10:42:00.175642       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m04"
	I0916 10:42:00.590484       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m04"
	I0916 10:42:01.490254       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-244475-m04"
	I0916 10:42:01.528827       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m04"
	I0916 10:42:03.011238       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m04"
	I0916 10:42:03.079872       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m04"
	I0916 10:42:03.261410       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m04"
	I0916 10:42:03.376315       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m04"
	I0916 10:42:10.010776       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m04"
	I0916 10:42:19.018320       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m04"
	I0916 10:42:19.018457       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-244475-m04"
	I0916 10:42:19.032789       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m04"
	I0916 10:42:21.506056       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m04"
	
	
	==> kube-proxy [6e6d69b26d5c1088c047248a212bd53f5e13828d4e0a7c2156e0ddc88ac76ccf] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0916 10:38:58.381104       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0916 10:38:58.405774       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.19"]
	E0916 10:38:58.405958       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:38:58.486128       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0916 10:38:58.486191       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0916 10:38:58.486214       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:38:58.488718       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:38:58.489862       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:38:58.489894       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:38:58.500489       1 config.go:199] "Starting service config controller"
	I0916 10:38:58.500804       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:38:58.501030       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:38:58.501051       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:38:58.502033       1 config.go:328] "Starting node config controller"
	I0916 10:38:58.502063       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:38:58.601173       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:38:58.601274       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:38:58.602581       1 shared_informer.go:320] Caches are synced for node config
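Editor's note: the nftables cleanup errors above are expected on this kernel; kube-proxy reports "No iptables support for family IPv6" and settles on the IPv4 iptables Proxier. To confirm which mode a running kube-proxy is using, a sketch like this can be used (the /proxyMode endpoint on the metrics port is an assumption about recent kube-proxy releases):

    # Grep the proxier selection out of the kube-proxy pod logs
    kubectl --context ha-244475 -n kube-system logs -l k8s-app=kube-proxy | grep -i proxier
    # Or ask kube-proxy directly from the node via its metrics port
    minikube -p ha-244475 ssh -- curl -s http://localhost:10249/proxyMode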
	
	
	==> kube-scheduler [a0223669288e248ff9be2473e59f09af3156f136f2627463151e0bc52191ddfb] <==
	E0916 10:38:50.527717       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:38:50.585028       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 10:38:50.585078       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:38:50.611653       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 10:38:50.611726       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0916 10:38:50.650971       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 10:38:50.651023       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:38:50.696031       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 10:38:50.696092       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:38:50.761221       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 10:38:50.761274       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:38:50.985092       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 10:38:50.985144       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:38:50.991955       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 10:38:50.992011       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:38:51.039856       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 10:38:51.039907       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:38:51.293677       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 10:38:51.293783       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0916 10:38:53.269920       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 10:41:27.446213       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="8e6b78c3-ae2c-4cff-b2cf-fd0f08d53fa5" pod="default/busybox-7dff88458-7bhqg" assumedNode="ha-244475-m03" currentNode="ha-244475-m02"
	E0916 10:41:27.456948       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-7bhqg\": pod busybox-7dff88458-7bhqg is already assigned to node \"ha-244475-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-7bhqg" node="ha-244475-m02"
	E0916 10:41:27.457071       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 8e6b78c3-ae2c-4cff-b2cf-fd0f08d53fa5(default/busybox-7dff88458-7bhqg) was assumed on ha-244475-m02 but assigned to ha-244475-m03" pod="default/busybox-7dff88458-7bhqg"
	E0916 10:41:27.457108       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-7bhqg\": pod busybox-7dff88458-7bhqg is already assigned to node \"ha-244475-m03\"" pod="default/busybox-7dff88458-7bhqg"
	I0916 10:41:27.457173       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-7bhqg" node="ha-244475-m03"
	
	
	==> kubelet <==
	Sep 16 10:40:52 ha-244475 kubelet[1309]: E0916 10:40:52.629382    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483252629066206,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137414,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:41:02 ha-244475 kubelet[1309]: E0916 10:41:02.632686    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483262631846325,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137414,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:41:02 ha-244475 kubelet[1309]: E0916 10:41:02.632813    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483262631846325,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137414,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:41:12 ha-244475 kubelet[1309]: E0916 10:41:12.633787    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483272633488742,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137414,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:41:12 ha-244475 kubelet[1309]: E0916 10:41:12.633833    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483272633488742,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137414,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:41:22 ha-244475 kubelet[1309]: E0916 10:41:22.636247    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483282635950498,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137414,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:41:22 ha-244475 kubelet[1309]: E0916 10:41:22.636279    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483282635950498,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:137414,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:41:27 ha-244475 kubelet[1309]: I0916 10:41:27.586767    1309 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hgk2\" (UniqueName: \"kubernetes.io/projected/6c479ead-4e77-41ca-9e2e-5cd7dc781761-kube-api-access-7hgk2\") pod \"busybox-7dff88458-d4m5s\" (UID: \"6c479ead-4e77-41ca-9e2e-5cd7dc781761\") " pod="default/busybox-7dff88458-d4m5s"
	Sep 16 10:41:32 ha-244475 kubelet[1309]: E0916 10:41:32.637612    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483292637304605,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:41:32 ha-244475 kubelet[1309]: E0916 10:41:32.637636    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483292637304605,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:41:42 ha-244475 kubelet[1309]: E0916 10:41:42.640281    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483302639225147,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:41:42 ha-244475 kubelet[1309]: E0916 10:41:42.640326    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483302639225147,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:41:52 ha-244475 kubelet[1309]: E0916 10:41:52.622091    1309 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 16 10:41:52 ha-244475 kubelet[1309]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 16 10:41:52 ha-244475 kubelet[1309]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 16 10:41:52 ha-244475 kubelet[1309]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 16 10:41:52 ha-244475 kubelet[1309]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 16 10:41:52 ha-244475 kubelet[1309]: E0916 10:41:52.642220    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483312641989969,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:41:52 ha-244475 kubelet[1309]: E0916 10:41:52.642248    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483312641989969,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:42:02 ha-244475 kubelet[1309]: E0916 10:42:02.645737    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483322644870711,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:42:02 ha-244475 kubelet[1309]: E0916 10:42:02.646125    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483322644870711,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:42:12 ha-244475 kubelet[1309]: E0916 10:42:12.648220    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483332647639115,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:42:12 ha-244475 kubelet[1309]: E0916 10:42:12.648680    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483332647639115,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:42:22 ha-244475 kubelet[1309]: E0916 10:42:22.654474    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483342653192345,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:42:22 ha-244475 kubelet[1309]: E0916 10:42:22.654895    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483342653192345,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
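Two recurring noise sources in the kubelet section above can be checked directly on the node. The eviction-manager lines complain that ImageFsInfo returned image-filesystem stats but no container-filesystem stats ("missing image stats"), and the canary lines show ip6tables failing to load its nat table. A hedged diagnostic sketch, reusing the harness' minikube binary path and profile name; the crictl and modprobe invocations are standard tools, but whether the ip6table_nat module is actually absent on this guest is an assumption to verify:

    # Query the CRI-O image filesystem info that the eviction manager is reading.
    out/minikube-linux-amd64 -p ha-244475 ssh "sudo crictl imagefsinfo"

    # Check whether the ip6tables nat table's kernel module is loaded in the guest,
    # and load it if not (the canary error suggests it is missing).
    out/minikube-linux-amd64 -p ha-244475 ssh "lsmod | grep -E 'ip6_tables|ip6table_nat'"
    out/minikube-linux-amd64 -p ha-244475 ssh "sudo modprobe ip6table_nat"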
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-244475 -n ha-244475
helpers_test.go:261: (dbg) Run:  kubectl --context ha-244475 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context ha-244475 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (534.6µs)
helpers_test.go:263: kubectl --context ha-244475 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestMultiControlPlane/serial/NodeLabels (2.40s)
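Every kubectl invocation in this run dies with `fork/exec /usr/local/bin/kubectl: exec format error`, which means the kernel refused to execute the binary at all, typically because it was built for a different architecture than the host or is not a valid ELF file (for example a truncated or corrupted download). A minimal check on the test host, using the path from the error; these commands are generic diagnostics, not part of the harness:

    # Host architecture vs. the architecture the binary was built for.
    uname -m
    file /usr/local/bin/kubectl

    # A valid ELF executable starts with the bytes 0x7f 'E' 'L' 'F';
    # anything else (HTML, plain text, zero length) explains the exec format error.
    ls -lh /usr/local/bin/kubectl
    head -c 4 /usr/local/bin/kubectl | od -c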

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (141.78s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 node stop m02 -v=7 --alsologtostderr
E0916 10:42:50.216161   11203 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:44:12.138074   11203 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-244475 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.458565513s)

                                                
                                                
-- stdout --
	* Stopping node "ha-244475-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 10:42:41.586571   26166 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:42:41.586918   26166 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:42:41.586934   26166 out.go:358] Setting ErrFile to fd 2...
	I0916 10:42:41.586938   26166 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:42:41.587174   26166 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3851/.minikube/bin
	I0916 10:42:41.587469   26166 mustload.go:65] Loading cluster: ha-244475
	I0916 10:42:41.587927   26166 config.go:182] Loaded profile config "ha-244475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:42:41.587947   26166 stop.go:39] StopHost: ha-244475-m02
	I0916 10:42:41.588318   26166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:42:41.588359   26166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:42:41.604169   26166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43899
	I0916 10:42:41.604932   26166 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:42:41.605513   26166 main.go:141] libmachine: Using API Version  1
	I0916 10:42:41.605535   26166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:42:41.605862   26166 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:42:41.608108   26166 out.go:177] * Stopping node "ha-244475-m02"  ...
	I0916 10:42:41.609331   26166 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0916 10:42:41.609361   26166 main.go:141] libmachine: (ha-244475-m02) Calling .DriverName
	I0916 10:42:41.609553   26166 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0916 10:42:41.609585   26166 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHHostname
	I0916 10:42:41.612529   26166 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:42:41.612979   26166 main.go:141] libmachine: (ha-244475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:fc:95", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:39:13 +0000 UTC Type:0 Mac:52:54:00:ed:fc:95 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-244475-m02 Clientid:01:52:54:00:ed:fc:95}
	I0916 10:42:41.613002   26166 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:42:41.613182   26166 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHPort
	I0916 10:42:41.613360   26166 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHKeyPath
	I0916 10:42:41.613502   26166 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHUsername
	I0916 10:42:41.613636   26166 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m02/id_rsa Username:docker}
	I0916 10:42:41.697982   26166 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0916 10:42:41.752245   26166 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0916 10:42:41.807239   26166 main.go:141] libmachine: Stopping "ha-244475-m02"...
	I0916 10:42:41.807293   26166 main.go:141] libmachine: (ha-244475-m02) Calling .GetState
	I0916 10:42:41.808711   26166 main.go:141] libmachine: (ha-244475-m02) Calling .Stop
	I0916 10:42:41.812296   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 0/120
	I0916 10:42:42.814341   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 1/120
	I0916 10:42:43.815545   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 2/120
	I0916 10:42:44.816654   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 3/120
	I0916 10:42:45.817864   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 4/120
	I0916 10:42:46.819712   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 5/120
	I0916 10:42:47.821034   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 6/120
	I0916 10:42:48.822591   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 7/120
	I0916 10:42:49.823809   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 8/120
	I0916 10:42:50.825039   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 9/120
	I0916 10:42:51.826474   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 10/120
	I0916 10:42:52.828049   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 11/120
	I0916 10:42:53.829459   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 12/120
	I0916 10:42:54.830825   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 13/120
	I0916 10:42:55.832708   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 14/120
	I0916 10:42:56.834720   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 15/120
	I0916 10:42:57.835902   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 16/120
	I0916 10:42:58.837277   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 17/120
	I0916 10:42:59.838568   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 18/120
	I0916 10:43:00.839907   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 19/120
	I0916 10:43:01.842125   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 20/120
	I0916 10:43:02.843577   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 21/120
	I0916 10:43:03.845160   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 22/120
	I0916 10:43:04.846381   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 23/120
	I0916 10:43:05.848491   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 24/120
	I0916 10:43:06.850410   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 25/120
	I0916 10:43:07.851731   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 26/120
	I0916 10:43:08.852900   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 27/120
	I0916 10:43:09.854245   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 28/120
	I0916 10:43:10.855410   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 29/120
	I0916 10:43:11.857576   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 30/120
	I0916 10:43:12.859624   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 31/120
	I0916 10:43:13.861407   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 32/120
	I0916 10:43:14.863681   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 33/120
	I0916 10:43:15.865011   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 34/120
	I0916 10:43:16.866853   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 35/120
	I0916 10:43:17.868188   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 36/120
	I0916 10:43:18.869724   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 37/120
	I0916 10:43:19.871131   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 38/120
	I0916 10:43:20.872466   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 39/120
	I0916 10:43:21.874587   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 40/120
	I0916 10:43:22.875817   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 41/120
	I0916 10:43:23.878093   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 42/120
	I0916 10:43:24.879535   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 43/120
	I0916 10:43:25.880963   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 44/120
	I0916 10:43:26.882910   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 45/120
	I0916 10:43:27.885154   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 46/120
	I0916 10:43:28.886412   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 47/120
	I0916 10:43:29.887705   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 48/120
	I0916 10:43:30.889020   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 49/120
	I0916 10:43:31.891260   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 50/120
	I0916 10:43:32.892610   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 51/120
	I0916 10:43:33.894121   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 52/120
	I0916 10:43:34.895765   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 53/120
	I0916 10:43:35.897165   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 54/120
	I0916 10:43:36.899456   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 55/120
	I0916 10:43:37.900951   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 56/120
	I0916 10:43:38.902141   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 57/120
	I0916 10:43:39.904008   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 58/120
	I0916 10:43:40.905381   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 59/120
	I0916 10:43:41.907546   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 60/120
	I0916 10:43:42.908866   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 61/120
	I0916 10:43:43.910167   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 62/120
	I0916 10:43:44.911519   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 63/120
	I0916 10:43:45.912878   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 64/120
	I0916 10:43:46.914693   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 65/120
	I0916 10:43:47.915951   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 66/120
	I0916 10:43:48.917255   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 67/120
	I0916 10:43:49.919430   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 68/120
	I0916 10:43:50.920849   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 69/120
	I0916 10:43:51.922624   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 70/120
	I0916 10:43:52.924074   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 71/120
	I0916 10:43:53.926104   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 72/120
	I0916 10:43:54.927412   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 73/120
	I0916 10:43:55.928694   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 74/120
	I0916 10:43:56.930659   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 75/120
	I0916 10:43:57.931975   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 76/120
	I0916 10:43:58.933389   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 77/120
	I0916 10:43:59.934701   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 78/120
	I0916 10:44:00.936032   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 79/120
	I0916 10:44:01.937796   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 80/120
	I0916 10:44:02.939291   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 81/120
	I0916 10:44:03.940728   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 82/120
	I0916 10:44:04.942105   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 83/120
	I0916 10:44:05.943359   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 84/120
	I0916 10:44:06.944903   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 85/120
	I0916 10:44:07.946326   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 86/120
	I0916 10:44:08.947724   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 87/120
	I0916 10:44:09.949274   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 88/120
	I0916 10:44:10.951521   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 89/120
	I0916 10:44:11.953366   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 90/120
	I0916 10:44:12.954643   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 91/120
	I0916 10:44:13.955892   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 92/120
	I0916 10:44:14.957168   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 93/120
	I0916 10:44:15.959091   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 94/120
	I0916 10:44:16.961238   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 95/120
	I0916 10:44:17.962560   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 96/120
	I0916 10:44:18.964655   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 97/120
	I0916 10:44:19.966032   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 98/120
	I0916 10:44:20.967570   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 99/120
	I0916 10:44:21.969705   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 100/120
	I0916 10:44:22.971695   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 101/120
	I0916 10:44:23.972978   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 102/120
	I0916 10:44:24.974446   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 103/120
	I0916 10:44:25.976040   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 104/120
	I0916 10:44:26.977930   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 105/120
	I0916 10:44:27.979729   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 106/120
	I0916 10:44:28.981156   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 107/120
	I0916 10:44:29.982514   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 108/120
	I0916 10:44:30.983814   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 109/120
	I0916 10:44:31.985748   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 110/120
	I0916 10:44:32.987604   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 111/120
	I0916 10:44:33.989217   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 112/120
	I0916 10:44:34.990674   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 113/120
	I0916 10:44:35.992644   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 114/120
	I0916 10:44:36.994772   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 115/120
	I0916 10:44:37.996203   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 116/120
	I0916 10:44:38.997673   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 117/120
	I0916 10:44:39.999126   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 118/120
	I0916 10:44:41.000514   26166 main.go:141] libmachine: (ha-244475-m02) Waiting for machine to stop 119/120
	I0916 10:44:42.001150   26166 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0916 10:44:42.001351   26166 out.go:270] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-244475 node stop m02 -v=7 --alsologtostderr": exit status 30
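The stderr above shows where the two minutes go: minikube asks the kvm2 driver to stop the guest, then polls its state once per second for 120 attempts, and the domain never leaves "Running", so the command exits with status 30. On a KVM host the same domain (the debug lines name it ha-244475-m02 in network mk-ha-244475) can be inspected and, if it is genuinely wedged, forced off with virsh. A hedged sketch; forcing the domain off is a last resort and not what the test intends:

    # Confirm libvirt still reports the guest as running.
    virsh list --all
    virsh domstate ha-244475-m02

    # Ask the guest to shut down gracefully (ACPI), then re-check its state.
    virsh shutdown ha-244475-m02
    virsh domstate ha-244475-m02

    # Only if it ignores ACPI: hard power-off, equivalent to pulling the plug.
    virsh destroy ha-244475-m02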
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-244475 status -v=7 --alsologtostderr: exit status 3 (19.088924597s)

                                                
                                                
-- stdout --
	ha-244475
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-244475-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-244475-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-244475-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 10:44:42.047652   26609 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:44:42.047817   26609 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:44:42.047829   26609 out.go:358] Setting ErrFile to fd 2...
	I0916 10:44:42.047836   26609 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:44:42.048127   26609 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3851/.minikube/bin
	I0916 10:44:42.048412   26609 out.go:352] Setting JSON to false
	I0916 10:44:42.048450   26609 mustload.go:65] Loading cluster: ha-244475
	I0916 10:44:42.048553   26609 notify.go:220] Checking for updates...
	I0916 10:44:42.049032   26609 config.go:182] Loaded profile config "ha-244475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:44:42.049051   26609 status.go:255] checking status of ha-244475 ...
	I0916 10:44:42.049654   26609 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:44:42.049715   26609 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:44:42.066494   26609 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37013
	I0916 10:44:42.067006   26609 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:44:42.067591   26609 main.go:141] libmachine: Using API Version  1
	I0916 10:44:42.067610   26609 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:44:42.067988   26609 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:44:42.068185   26609 main.go:141] libmachine: (ha-244475) Calling .GetState
	I0916 10:44:42.069729   26609 status.go:330] ha-244475 host status = "Running" (err=<nil>)
	I0916 10:44:42.069745   26609 host.go:66] Checking if "ha-244475" exists ...
	I0916 10:44:42.070093   26609 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:44:42.070141   26609 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:44:42.085041   26609 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45651
	I0916 10:44:42.085362   26609 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:44:42.085793   26609 main.go:141] libmachine: Using API Version  1
	I0916 10:44:42.085812   26609 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:44:42.086121   26609 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:44:42.086297   26609 main.go:141] libmachine: (ha-244475) Calling .GetIP
	I0916 10:44:42.088941   26609 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:44:42.089397   26609 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:44:42.089431   26609 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:44:42.089581   26609 host.go:66] Checking if "ha-244475" exists ...
	I0916 10:44:42.089894   26609 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:44:42.089935   26609 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:44:42.104064   26609 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35625
	I0916 10:44:42.104463   26609 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:44:42.104894   26609 main.go:141] libmachine: Using API Version  1
	I0916 10:44:42.104917   26609 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:44:42.105223   26609 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:44:42.105385   26609 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:44:42.105576   26609 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:44:42.105604   26609 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:44:42.107912   26609 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:44:42.108286   26609 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:44:42.108313   26609 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:44:42.108407   26609 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:44:42.108586   26609 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:44:42.108738   26609 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:44:42.108899   26609 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/id_rsa Username:docker}
	I0916 10:44:42.194346   26609 ssh_runner.go:195] Run: systemctl --version
	I0916 10:44:42.202551   26609 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:44:42.221248   26609 kubeconfig.go:125] found "ha-244475" server: "https://192.168.39.254:8443"
	I0916 10:44:42.221287   26609 api_server.go:166] Checking apiserver status ...
	I0916 10:44:42.221321   26609 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:44:42.237672   26609 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1130/cgroup
	W0916 10:44:42.247318   26609 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1130/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0916 10:44:42.247389   26609 ssh_runner.go:195] Run: ls
	I0916 10:44:42.252037   26609 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0916 10:44:42.258225   26609 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0916 10:44:42.258251   26609 status.go:422] ha-244475 apiserver status = Running (err=<nil>)
	I0916 10:44:42.258274   26609 status.go:257] ha-244475 status: &{Name:ha-244475 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 10:44:42.258305   26609 status.go:255] checking status of ha-244475-m02 ...
	I0916 10:44:42.258606   26609 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:44:42.258643   26609 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:44:42.273524   26609 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39025
	I0916 10:44:42.274044   26609 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:44:42.274538   26609 main.go:141] libmachine: Using API Version  1
	I0916 10:44:42.274560   26609 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:44:42.274856   26609 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:44:42.275031   26609 main.go:141] libmachine: (ha-244475-m02) Calling .GetState
	I0916 10:44:42.276576   26609 status.go:330] ha-244475-m02 host status = "Running" (err=<nil>)
	I0916 10:44:42.276606   26609 host.go:66] Checking if "ha-244475-m02" exists ...
	I0916 10:44:42.276899   26609 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:44:42.276930   26609 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:44:42.292617   26609 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45197
	I0916 10:44:42.292983   26609 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:44:42.293506   26609 main.go:141] libmachine: Using API Version  1
	I0916 10:44:42.293529   26609 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:44:42.293908   26609 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:44:42.294099   26609 main.go:141] libmachine: (ha-244475-m02) Calling .GetIP
	I0916 10:44:42.296901   26609 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:44:42.297362   26609 main.go:141] libmachine: (ha-244475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:fc:95", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:39:13 +0000 UTC Type:0 Mac:52:54:00:ed:fc:95 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-244475-m02 Clientid:01:52:54:00:ed:fc:95}
	I0916 10:44:42.297395   26609 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:44:42.297503   26609 host.go:66] Checking if "ha-244475-m02" exists ...
	I0916 10:44:42.297825   26609 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:44:42.297860   26609 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:44:42.313565   26609 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42787
	I0916 10:44:42.313978   26609 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:44:42.314422   26609 main.go:141] libmachine: Using API Version  1
	I0916 10:44:42.314441   26609 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:44:42.314805   26609 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:44:42.315033   26609 main.go:141] libmachine: (ha-244475-m02) Calling .DriverName
	I0916 10:44:42.315218   26609 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:44:42.315245   26609 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHHostname
	I0916 10:44:42.317966   26609 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:44:42.318462   26609 main.go:141] libmachine: (ha-244475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:fc:95", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:39:13 +0000 UTC Type:0 Mac:52:54:00:ed:fc:95 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-244475-m02 Clientid:01:52:54:00:ed:fc:95}
	I0916 10:44:42.318488   26609 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:44:42.318628   26609 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHPort
	I0916 10:44:42.318786   26609 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHKeyPath
	I0916 10:44:42.318906   26609 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHUsername
	I0916 10:44:42.319016   26609 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m02/id_rsa Username:docker}
	W0916 10:45:00.729326   26609 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.222:22: connect: no route to host
	W0916 10:45:00.729447   26609 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.222:22: connect: no route to host
	E0916 10:45:00.729470   26609 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.222:22: connect: no route to host
	I0916 10:45:00.729488   26609 status.go:257] ha-244475-m02 status: &{Name:ha-244475-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0916 10:45:00.729513   26609 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.222:22: connect: no route to host
	I0916 10:45:00.729525   26609 status.go:255] checking status of ha-244475-m03 ...
	I0916 10:45:00.729846   26609 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:00.729896   26609 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:00.744536   26609 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38569
	I0916 10:45:00.745019   26609 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:00.745506   26609 main.go:141] libmachine: Using API Version  1
	I0916 10:45:00.745528   26609 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:00.745901   26609 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:00.746084   26609 main.go:141] libmachine: (ha-244475-m03) Calling .GetState
	I0916 10:45:00.747725   26609 status.go:330] ha-244475-m03 host status = "Running" (err=<nil>)
	I0916 10:45:00.747741   26609 host.go:66] Checking if "ha-244475-m03" exists ...
	I0916 10:45:00.748077   26609 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:00.748115   26609 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:00.763181   26609 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42683
	I0916 10:45:00.763702   26609 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:00.764162   26609 main.go:141] libmachine: Using API Version  1
	I0916 10:45:00.764183   26609 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:00.764482   26609 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:00.764655   26609 main.go:141] libmachine: (ha-244475-m03) Calling .GetIP
	I0916 10:45:00.767736   26609 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:45:00.768118   26609 main.go:141] libmachine: (ha-244475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:15:60", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:40:24 +0000 UTC Type:0 Mac:52:54:00:e0:15:60 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-244475-m03 Clientid:01:52:54:00:e0:15:60}
	I0916 10:45:00.768143   26609 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:45:00.768324   26609 host.go:66] Checking if "ha-244475-m03" exists ...
	I0916 10:45:00.768655   26609 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:00.768723   26609 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:00.783769   26609 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36897
	I0916 10:45:00.784200   26609 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:00.784742   26609 main.go:141] libmachine: Using API Version  1
	I0916 10:45:00.784762   26609 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:00.785064   26609 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:00.785286   26609 main.go:141] libmachine: (ha-244475-m03) Calling .DriverName
	I0916 10:45:00.785493   26609 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:45:00.785516   26609 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHHostname
	I0916 10:45:00.788238   26609 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:45:00.788659   26609 main.go:141] libmachine: (ha-244475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:15:60", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:40:24 +0000 UTC Type:0 Mac:52:54:00:e0:15:60 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-244475-m03 Clientid:01:52:54:00:e0:15:60}
	I0916 10:45:00.788683   26609 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:45:00.788840   26609 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHPort
	I0916 10:45:00.789030   26609 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHKeyPath
	I0916 10:45:00.789194   26609 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHUsername
	I0916 10:45:00.789321   26609 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m03/id_rsa Username:docker}
	I0916 10:45:00.875311   26609 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:45:00.894358   26609 kubeconfig.go:125] found "ha-244475" server: "https://192.168.39.254:8443"
	I0916 10:45:00.894386   26609 api_server.go:166] Checking apiserver status ...
	I0916 10:45:00.894424   26609 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:45:00.911005   26609 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1388/cgroup
	W0916 10:45:00.920750   26609 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1388/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0916 10:45:00.920800   26609 ssh_runner.go:195] Run: ls
	I0916 10:45:00.926258   26609 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0916 10:45:00.932165   26609 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0916 10:45:00.932187   26609 status.go:422] ha-244475-m03 apiserver status = Running (err=<nil>)
	I0916 10:45:00.932195   26609 status.go:257] ha-244475-m03 status: &{Name:ha-244475-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 10:45:00.932209   26609 status.go:255] checking status of ha-244475-m04 ...
	I0916 10:45:00.932514   26609 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:00.932548   26609 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:00.947275   26609 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44179
	I0916 10:45:00.947706   26609 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:00.948372   26609 main.go:141] libmachine: Using API Version  1
	I0916 10:45:00.948402   26609 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:00.948726   26609 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:00.948915   26609 main.go:141] libmachine: (ha-244475-m04) Calling .GetState
	I0916 10:45:00.950362   26609 status.go:330] ha-244475-m04 host status = "Running" (err=<nil>)
	I0916 10:45:00.950377   26609 host.go:66] Checking if "ha-244475-m04" exists ...
	I0916 10:45:00.950660   26609 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:00.950691   26609 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:00.965035   26609 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36157
	I0916 10:45:00.965474   26609 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:00.965899   26609 main.go:141] libmachine: Using API Version  1
	I0916 10:45:00.965922   26609 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:00.966234   26609 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:00.966395   26609 main.go:141] libmachine: (ha-244475-m04) Calling .GetIP
	I0916 10:45:00.968949   26609 main.go:141] libmachine: (ha-244475-m04) DBG | domain ha-244475-m04 has defined MAC address 52:54:00:d1:b8:75 in network mk-ha-244475
	I0916 10:45:00.969322   26609 main.go:141] libmachine: (ha-244475-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:b8:75", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:41:48 +0000 UTC Type:0 Mac:52:54:00:d1:b8:75 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-244475-m04 Clientid:01:52:54:00:d1:b8:75}
	I0916 10:45:00.969350   26609 main.go:141] libmachine: (ha-244475-m04) DBG | domain ha-244475-m04 has defined IP address 192.168.39.110 and MAC address 52:54:00:d1:b8:75 in network mk-ha-244475
	I0916 10:45:00.969466   26609 host.go:66] Checking if "ha-244475-m04" exists ...
	I0916 10:45:00.969747   26609 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:00.969778   26609 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:00.984353   26609 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44743
	I0916 10:45:00.984841   26609 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:00.985328   26609 main.go:141] libmachine: Using API Version  1
	I0916 10:45:00.985354   26609 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:00.985682   26609 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:00.985887   26609 main.go:141] libmachine: (ha-244475-m04) Calling .DriverName
	I0916 10:45:00.986067   26609 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:45:00.986087   26609 main.go:141] libmachine: (ha-244475-m04) Calling .GetSSHHostname
	I0916 10:45:00.988504   26609 main.go:141] libmachine: (ha-244475-m04) DBG | domain ha-244475-m04 has defined MAC address 52:54:00:d1:b8:75 in network mk-ha-244475
	I0916 10:45:00.988889   26609 main.go:141] libmachine: (ha-244475-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:b8:75", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:41:48 +0000 UTC Type:0 Mac:52:54:00:d1:b8:75 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-244475-m04 Clientid:01:52:54:00:d1:b8:75}
	I0916 10:45:00.988916   26609 main.go:141] libmachine: (ha-244475-m04) DBG | domain ha-244475-m04 has defined IP address 192.168.39.110 and MAC address 52:54:00:d1:b8:75 in network mk-ha-244475
	I0916 10:45:00.989008   26609 main.go:141] libmachine: (ha-244475-m04) Calling .GetSSHPort
	I0916 10:45:00.989190   26609 main.go:141] libmachine: (ha-244475-m04) Calling .GetSSHKeyPath
	I0916 10:45:00.989331   26609 main.go:141] libmachine: (ha-244475-m04) Calling .GetSSHUsername
	I0916 10:45:00.989443   26609 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m04/id_rsa Username:docker}
	I0916 10:45:01.073717   26609 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:45:01.090100   26609 status.go:257] ha-244475-m04 status: &{Name:ha-244475-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-244475 status -v=7 --alsologtostderr" : exit status 3
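Status degrades ha-244475-m02 to `host: Error` because the SSH dial to 192.168.39.222:22 fails with `no route to host`, which is consistent with the guest having partially gone down during the failed stop: libvirt still reports the domain as Running, but nothing answers on its leased address. A quick reachability check from the hypervisor host, with the IP and network name taken from the log above; the exact tools are illustrative:

    # Does the guest still answer on its DHCP lease?
    ping -c 3 -W 2 192.168.39.222

    # Is anything listening on the SSH port?
    nc -vz -w 3 192.168.39.222 22

    # Cross-check the lease libvirt handed out for this machine.
    virsh net-dhcp-leases mk-ha-244475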
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-244475 -n ha-244475
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-244475 logs -n 25: (1.425741322s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-244475 cp ha-244475-m03:/home/docker/cp-test.txt                              | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1630339340/001/cp-test_ha-244475-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-244475 ssh -n                                                                 | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-244475 cp ha-244475-m03:/home/docker/cp-test.txt                              | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475:/home/docker/cp-test_ha-244475-m03_ha-244475.txt                       |           |         |         |                     |                     |
	| ssh     | ha-244475 ssh -n                                                                 | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-244475 ssh -n ha-244475 sudo cat                                              | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | /home/docker/cp-test_ha-244475-m03_ha-244475.txt                                 |           |         |         |                     |                     |
	| cp      | ha-244475 cp ha-244475-m03:/home/docker/cp-test.txt                              | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475-m02:/home/docker/cp-test_ha-244475-m03_ha-244475-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-244475 ssh -n                                                                 | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-244475 ssh -n ha-244475-m02 sudo cat                                          | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | /home/docker/cp-test_ha-244475-m03_ha-244475-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-244475 cp ha-244475-m03:/home/docker/cp-test.txt                              | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475-m04:/home/docker/cp-test_ha-244475-m03_ha-244475-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-244475 ssh -n                                                                 | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-244475 ssh -n ha-244475-m04 sudo cat                                          | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | /home/docker/cp-test_ha-244475-m03_ha-244475-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-244475 cp testdata/cp-test.txt                                                | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-244475 ssh -n                                                                 | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-244475 cp ha-244475-m04:/home/docker/cp-test.txt                              | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1630339340/001/cp-test_ha-244475-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-244475 ssh -n                                                                 | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-244475 cp ha-244475-m04:/home/docker/cp-test.txt                              | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475:/home/docker/cp-test_ha-244475-m04_ha-244475.txt                       |           |         |         |                     |                     |
	| ssh     | ha-244475 ssh -n                                                                 | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-244475 ssh -n ha-244475 sudo cat                                              | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | /home/docker/cp-test_ha-244475-m04_ha-244475.txt                                 |           |         |         |                     |                     |
	| cp      | ha-244475 cp ha-244475-m04:/home/docker/cp-test.txt                              | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475-m02:/home/docker/cp-test_ha-244475-m04_ha-244475-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-244475 ssh -n                                                                 | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-244475 ssh -n ha-244475-m02 sudo cat                                          | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | /home/docker/cp-test_ha-244475-m04_ha-244475-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-244475 cp ha-244475-m04:/home/docker/cp-test.txt                              | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475-m03:/home/docker/cp-test_ha-244475-m04_ha-244475-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-244475 ssh -n                                                                 | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-244475 ssh -n ha-244475-m03 sudo cat                                          | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | /home/docker/cp-test_ha-244475-m04_ha-244475-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-244475 node stop m02 -v=7                                                     | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:38:12
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:38:12.200712   22121 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:38:12.200823   22121 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:38:12.200832   22121 out.go:358] Setting ErrFile to fd 2...
	I0916 10:38:12.200836   22121 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:38:12.201073   22121 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3851/.minikube/bin
	I0916 10:38:12.201666   22121 out.go:352] Setting JSON to false
	I0916 10:38:12.202552   22121 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1242,"bootTime":1726481850,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:38:12.202649   22121 start.go:139] virtualization: kvm guest
	I0916 10:38:12.204909   22121 out.go:177] * [ha-244475] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:38:12.206153   22121 notify.go:220] Checking for updates...
	I0916 10:38:12.206162   22121 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:38:12.207508   22121 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:38:12.208635   22121 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3851/kubeconfig
	I0916 10:38:12.209868   22121 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 10:38:12.211054   22121 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:38:12.212157   22121 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:38:12.213282   22121 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:38:12.247704   22121 out.go:177] * Using the kvm2 driver based on user configuration
	I0916 10:38:12.248934   22121 start.go:297] selected driver: kvm2
	I0916 10:38:12.248946   22121 start.go:901] validating driver "kvm2" against <nil>
	I0916 10:38:12.248965   22121 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:38:12.249634   22121 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:38:12.249717   22121 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19651-3851/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0916 10:38:12.264515   22121 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0916 10:38:12.264557   22121 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 10:38:12.264783   22121 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:38:12.264813   22121 cni.go:84] Creating CNI manager for ""
	I0916 10:38:12.264852   22121 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0916 10:38:12.264862   22121 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 10:38:12.264904   22121 start.go:340] cluster config:
	{Name:ha-244475 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-244475 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:38:12.264991   22121 iso.go:125] acquiring lock: {Name:mk8165b793e44378487d96b0de120a258f46e187 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:38:12.266715   22121 out.go:177] * Starting "ha-244475" primary control-plane node in "ha-244475" cluster
	I0916 10:38:12.267811   22121 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:38:12.267865   22121 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0916 10:38:12.267877   22121 cache.go:56] Caching tarball of preloaded images
	I0916 10:38:12.267958   22121 preload.go:172] Found /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 10:38:12.267971   22121 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 10:38:12.268264   22121 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/config.json ...
	I0916 10:38:12.268287   22121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/config.json: {Name:mk850b432e3492662a38e4b0f11a836bf86e02aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:38:12.268433   22121 start.go:360] acquireMachinesLock for ha-244475: {Name:mk413037138532b69f77e412c3960796c09316f1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:38:12.268468   22121 start.go:364] duration metric: took 18.641µs to acquireMachinesLock for "ha-244475"
	I0916 10:38:12.268490   22121 start.go:93] Provisioning new machine with config: &{Name:ha-244475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-244475 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 10:38:12.268553   22121 start.go:125] createHost starting for "" (driver="kvm2")
	I0916 10:38:12.270059   22121 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0916 10:38:12.270184   22121 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:38:12.270223   22121 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:38:12.284586   22121 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36411
	I0916 10:38:12.285055   22121 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:38:12.285574   22121 main.go:141] libmachine: Using API Version  1
	I0916 10:38:12.285594   22121 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:38:12.285978   22121 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:38:12.286124   22121 main.go:141] libmachine: (ha-244475) Calling .GetMachineName
	I0916 10:38:12.286277   22121 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:38:12.286414   22121 start.go:159] libmachine.API.Create for "ha-244475" (driver="kvm2")
	I0916 10:38:12.286438   22121 client.go:168] LocalClient.Create starting
	I0916 10:38:12.286467   22121 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem
	I0916 10:38:12.286500   22121 main.go:141] libmachine: Decoding PEM data...
	I0916 10:38:12.286515   22121 main.go:141] libmachine: Parsing certificate...
	I0916 10:38:12.286575   22121 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem
	I0916 10:38:12.286594   22121 main.go:141] libmachine: Decoding PEM data...
	I0916 10:38:12.286606   22121 main.go:141] libmachine: Parsing certificate...
	I0916 10:38:12.286627   22121 main.go:141] libmachine: Running pre-create checks...
	I0916 10:38:12.286639   22121 main.go:141] libmachine: (ha-244475) Calling .PreCreateCheck
	I0916 10:38:12.286973   22121 main.go:141] libmachine: (ha-244475) Calling .GetConfigRaw
	I0916 10:38:12.287297   22121 main.go:141] libmachine: Creating machine...
	I0916 10:38:12.287309   22121 main.go:141] libmachine: (ha-244475) Calling .Create
	I0916 10:38:12.287457   22121 main.go:141] libmachine: (ha-244475) Creating KVM machine...
	I0916 10:38:12.288681   22121 main.go:141] libmachine: (ha-244475) DBG | found existing default KVM network
	I0916 10:38:12.289333   22121 main.go:141] libmachine: (ha-244475) DBG | I0916 10:38:12.289200   22144 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002091e0}
	I0916 10:38:12.289353   22121 main.go:141] libmachine: (ha-244475) DBG | created network xml: 
	I0916 10:38:12.289365   22121 main.go:141] libmachine: (ha-244475) DBG | <network>
	I0916 10:38:12.289372   22121 main.go:141] libmachine: (ha-244475) DBG |   <name>mk-ha-244475</name>
	I0916 10:38:12.289384   22121 main.go:141] libmachine: (ha-244475) DBG |   <dns enable='no'/>
	I0916 10:38:12.289392   22121 main.go:141] libmachine: (ha-244475) DBG |   
	I0916 10:38:12.289404   22121 main.go:141] libmachine: (ha-244475) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0916 10:38:12.289414   22121 main.go:141] libmachine: (ha-244475) DBG |     <dhcp>
	I0916 10:38:12.289426   22121 main.go:141] libmachine: (ha-244475) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0916 10:38:12.289440   22121 main.go:141] libmachine: (ha-244475) DBG |     </dhcp>
	I0916 10:38:12.289470   22121 main.go:141] libmachine: (ha-244475) DBG |   </ip>
	I0916 10:38:12.289491   22121 main.go:141] libmachine: (ha-244475) DBG |   
	I0916 10:38:12.289503   22121 main.go:141] libmachine: (ha-244475) DBG | </network>
	I0916 10:38:12.289512   22121 main.go:141] libmachine: (ha-244475) DBG | 
	I0916 10:38:12.294272   22121 main.go:141] libmachine: (ha-244475) DBG | trying to create private KVM network mk-ha-244475 192.168.39.0/24...
	I0916 10:38:12.356537   22121 main.go:141] libmachine: (ha-244475) Setting up store path in /home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475 ...
	I0916 10:38:12.356564   22121 main.go:141] libmachine: (ha-244475) Building disk image from file:///home/jenkins/minikube-integration/19651-3851/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso
	I0916 10:38:12.356583   22121 main.go:141] libmachine: (ha-244475) DBG | private KVM network mk-ha-244475 192.168.39.0/24 created
	I0916 10:38:12.356612   22121 main.go:141] libmachine: (ha-244475) DBG | I0916 10:38:12.356478   22144 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 10:38:12.356634   22121 main.go:141] libmachine: (ha-244475) Downloading /home/jenkins/minikube-integration/19651-3851/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19651-3851/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso...
	I0916 10:38:12.603819   22121 main.go:141] libmachine: (ha-244475) DBG | I0916 10:38:12.603693   22144 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/id_rsa...
	I0916 10:38:12.714132   22121 main.go:141] libmachine: (ha-244475) DBG | I0916 10:38:12.713994   22144 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/ha-244475.rawdisk...
	I0916 10:38:12.714162   22121 main.go:141] libmachine: (ha-244475) DBG | Writing magic tar header
	I0916 10:38:12.714174   22121 main.go:141] libmachine: (ha-244475) DBG | Writing SSH key tar header
	I0916 10:38:12.714185   22121 main.go:141] libmachine: (ha-244475) DBG | I0916 10:38:12.714123   22144 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475 ...
	I0916 10:38:12.714208   22121 main.go:141] libmachine: (ha-244475) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475
	I0916 10:38:12.714276   22121 main.go:141] libmachine: (ha-244475) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475 (perms=drwx------)
	I0916 10:38:12.714299   22121 main.go:141] libmachine: (ha-244475) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851/.minikube/machines (perms=drwxr-xr-x)
	I0916 10:38:12.714310   22121 main.go:141] libmachine: (ha-244475) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851/.minikube/machines
	I0916 10:38:12.714346   22121 main.go:141] libmachine: (ha-244475) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851/.minikube (perms=drwxr-xr-x)
	I0916 10:38:12.714364   22121 main.go:141] libmachine: (ha-244475) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 10:38:12.714379   22121 main.go:141] libmachine: (ha-244475) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851 (perms=drwxrwxr-x)
	I0916 10:38:12.714393   22121 main.go:141] libmachine: (ha-244475) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0916 10:38:12.714412   22121 main.go:141] libmachine: (ha-244475) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0916 10:38:12.714424   22121 main.go:141] libmachine: (ha-244475) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851
	I0916 10:38:12.714456   22121 main.go:141] libmachine: (ha-244475) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0916 10:38:12.714472   22121 main.go:141] libmachine: (ha-244475) DBG | Checking permissions on dir: /home/jenkins
	I0916 10:38:12.714480   22121 main.go:141] libmachine: (ha-244475) Creating domain...
	I0916 10:38:12.714493   22121 main.go:141] libmachine: (ha-244475) DBG | Checking permissions on dir: /home
	I0916 10:38:12.714503   22121 main.go:141] libmachine: (ha-244475) DBG | Skipping /home - not owner
	I0916 10:38:12.715516   22121 main.go:141] libmachine: (ha-244475) define libvirt domain using xml: 
	I0916 10:38:12.715535   22121 main.go:141] libmachine: (ha-244475) <domain type='kvm'>
	I0916 10:38:12.715541   22121 main.go:141] libmachine: (ha-244475)   <name>ha-244475</name>
	I0916 10:38:12.715549   22121 main.go:141] libmachine: (ha-244475)   <memory unit='MiB'>2200</memory>
	I0916 10:38:12.715560   22121 main.go:141] libmachine: (ha-244475)   <vcpu>2</vcpu>
	I0916 10:38:12.715567   22121 main.go:141] libmachine: (ha-244475)   <features>
	I0916 10:38:12.715594   22121 main.go:141] libmachine: (ha-244475)     <acpi/>
	I0916 10:38:12.715613   22121 main.go:141] libmachine: (ha-244475)     <apic/>
	I0916 10:38:12.715643   22121 main.go:141] libmachine: (ha-244475)     <pae/>
	I0916 10:38:12.715667   22121 main.go:141] libmachine: (ha-244475)     
	I0916 10:38:12.715677   22121 main.go:141] libmachine: (ha-244475)   </features>
	I0916 10:38:12.715691   22121 main.go:141] libmachine: (ha-244475)   <cpu mode='host-passthrough'>
	I0916 10:38:12.715701   22121 main.go:141] libmachine: (ha-244475)   
	I0916 10:38:12.715709   22121 main.go:141] libmachine: (ha-244475)   </cpu>
	I0916 10:38:12.715717   22121 main.go:141] libmachine: (ha-244475)   <os>
	I0916 10:38:12.715726   22121 main.go:141] libmachine: (ha-244475)     <type>hvm</type>
	I0916 10:38:12.715737   22121 main.go:141] libmachine: (ha-244475)     <boot dev='cdrom'/>
	I0916 10:38:12.715746   22121 main.go:141] libmachine: (ha-244475)     <boot dev='hd'/>
	I0916 10:38:12.715758   22121 main.go:141] libmachine: (ha-244475)     <bootmenu enable='no'/>
	I0916 10:38:12.715788   22121 main.go:141] libmachine: (ha-244475)   </os>
	I0916 10:38:12.715799   22121 main.go:141] libmachine: (ha-244475)   <devices>
	I0916 10:38:12.715810   22121 main.go:141] libmachine: (ha-244475)     <disk type='file' device='cdrom'>
	I0916 10:38:12.715840   22121 main.go:141] libmachine: (ha-244475)       <source file='/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/boot2docker.iso'/>
	I0916 10:38:12.715852   22121 main.go:141] libmachine: (ha-244475)       <target dev='hdc' bus='scsi'/>
	I0916 10:38:12.715861   22121 main.go:141] libmachine: (ha-244475)       <readonly/>
	I0916 10:38:12.715870   22121 main.go:141] libmachine: (ha-244475)     </disk>
	I0916 10:38:12.715875   22121 main.go:141] libmachine: (ha-244475)     <disk type='file' device='disk'>
	I0916 10:38:12.715881   22121 main.go:141] libmachine: (ha-244475)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0916 10:38:12.715891   22121 main.go:141] libmachine: (ha-244475)       <source file='/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/ha-244475.rawdisk'/>
	I0916 10:38:12.715896   22121 main.go:141] libmachine: (ha-244475)       <target dev='hda' bus='virtio'/>
	I0916 10:38:12.715903   22121 main.go:141] libmachine: (ha-244475)     </disk>
	I0916 10:38:12.715907   22121 main.go:141] libmachine: (ha-244475)     <interface type='network'>
	I0916 10:38:12.715914   22121 main.go:141] libmachine: (ha-244475)       <source network='mk-ha-244475'/>
	I0916 10:38:12.715918   22121 main.go:141] libmachine: (ha-244475)       <model type='virtio'/>
	I0916 10:38:12.715925   22121 main.go:141] libmachine: (ha-244475)     </interface>
	I0916 10:38:12.715929   22121 main.go:141] libmachine: (ha-244475)     <interface type='network'>
	I0916 10:38:12.715936   22121 main.go:141] libmachine: (ha-244475)       <source network='default'/>
	I0916 10:38:12.715941   22121 main.go:141] libmachine: (ha-244475)       <model type='virtio'/>
	I0916 10:38:12.715946   22121 main.go:141] libmachine: (ha-244475)     </interface>
	I0916 10:38:12.715950   22121 main.go:141] libmachine: (ha-244475)     <serial type='pty'>
	I0916 10:38:12.715966   22121 main.go:141] libmachine: (ha-244475)       <target port='0'/>
	I0916 10:38:12.715977   22121 main.go:141] libmachine: (ha-244475)     </serial>
	I0916 10:38:12.715987   22121 main.go:141] libmachine: (ha-244475)     <console type='pty'>
	I0916 10:38:12.715998   22121 main.go:141] libmachine: (ha-244475)       <target type='serial' port='0'/>
	I0916 10:38:12.716016   22121 main.go:141] libmachine: (ha-244475)     </console>
	I0916 10:38:12.716026   22121 main.go:141] libmachine: (ha-244475)     <rng model='virtio'>
	I0916 10:38:12.716036   22121 main.go:141] libmachine: (ha-244475)       <backend model='random'>/dev/random</backend>
	I0916 10:38:12.716045   22121 main.go:141] libmachine: (ha-244475)     </rng>
	I0916 10:38:12.716065   22121 main.go:141] libmachine: (ha-244475)     
	I0916 10:38:12.716082   22121 main.go:141] libmachine: (ha-244475)     
	I0916 10:38:12.716090   22121 main.go:141] libmachine: (ha-244475)   </devices>
	I0916 10:38:12.716101   22121 main.go:141] libmachine: (ha-244475) </domain>
	I0916 10:38:12.716111   22121 main.go:141] libmachine: (ha-244475) 
	I0916 10:38:12.720528   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:4e:1b:22 in network default
	I0916 10:38:12.721005   22121 main.go:141] libmachine: (ha-244475) Ensuring networks are active...
	I0916 10:38:12.721018   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:12.721698   22121 main.go:141] libmachine: (ha-244475) Ensuring network default is active
	I0916 10:38:12.722026   22121 main.go:141] libmachine: (ha-244475) Ensuring network mk-ha-244475 is active
	I0916 10:38:12.722616   22121 main.go:141] libmachine: (ha-244475) Getting domain xml...
	I0916 10:38:12.723368   22121 main.go:141] libmachine: (ha-244475) Creating domain...
	I0916 10:38:13.892889   22121 main.go:141] libmachine: (ha-244475) Waiting to get IP...
	I0916 10:38:13.893726   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:13.894130   22121 main.go:141] libmachine: (ha-244475) DBG | unable to find current IP address of domain ha-244475 in network mk-ha-244475
	I0916 10:38:13.894170   22121 main.go:141] libmachine: (ha-244475) DBG | I0916 10:38:13.894127   22144 retry.go:31] will retry after 194.671276ms: waiting for machine to come up
	I0916 10:38:14.090477   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:14.090800   22121 main.go:141] libmachine: (ha-244475) DBG | unable to find current IP address of domain ha-244475 in network mk-ha-244475
	I0916 10:38:14.090825   22121 main.go:141] libmachine: (ha-244475) DBG | I0916 10:38:14.090753   22144 retry.go:31] will retry after 351.659131ms: waiting for machine to come up
	I0916 10:38:14.444409   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:14.444864   22121 main.go:141] libmachine: (ha-244475) DBG | unable to find current IP address of domain ha-244475 in network mk-ha-244475
	I0916 10:38:14.444896   22121 main.go:141] libmachine: (ha-244475) DBG | I0916 10:38:14.444830   22144 retry.go:31] will retry after 382.219059ms: waiting for machine to come up
	I0916 10:38:14.828362   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:14.828800   22121 main.go:141] libmachine: (ha-244475) DBG | unable to find current IP address of domain ha-244475 in network mk-ha-244475
	I0916 10:38:14.828826   22121 main.go:141] libmachine: (ha-244475) DBG | I0916 10:38:14.828748   22144 retry.go:31] will retry after 385.017595ms: waiting for machine to come up
	I0916 10:38:15.215350   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:15.215732   22121 main.go:141] libmachine: (ha-244475) DBG | unable to find current IP address of domain ha-244475 in network mk-ha-244475
	I0916 10:38:15.215758   22121 main.go:141] libmachine: (ha-244475) DBG | I0916 10:38:15.215688   22144 retry.go:31] will retry after 603.255872ms: waiting for machine to come up
	I0916 10:38:15.820323   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:15.820668   22121 main.go:141] libmachine: (ha-244475) DBG | unable to find current IP address of domain ha-244475 in network mk-ha-244475
	I0916 10:38:15.820694   22121 main.go:141] libmachine: (ha-244475) DBG | I0916 10:38:15.820630   22144 retry.go:31] will retry after 768.911433ms: waiting for machine to come up
	I0916 10:38:16.591945   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:16.592337   22121 main.go:141] libmachine: (ha-244475) DBG | unable to find current IP address of domain ha-244475 in network mk-ha-244475
	I0916 10:38:16.592361   22121 main.go:141] libmachine: (ha-244475) DBG | I0916 10:38:16.592300   22144 retry.go:31] will retry after 1.01448771s: waiting for machine to come up
	I0916 10:38:17.607844   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:17.608259   22121 main.go:141] libmachine: (ha-244475) DBG | unable to find current IP address of domain ha-244475 in network mk-ha-244475
	I0916 10:38:17.608281   22121 main.go:141] libmachine: (ha-244475) DBG | I0916 10:38:17.608225   22144 retry.go:31] will retry after 1.028283296s: waiting for machine to come up
	I0916 10:38:18.638495   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:18.638879   22121 main.go:141] libmachine: (ha-244475) DBG | unable to find current IP address of domain ha-244475 in network mk-ha-244475
	I0916 10:38:18.638909   22121 main.go:141] libmachine: (ha-244475) DBG | I0916 10:38:18.638842   22144 retry.go:31] will retry after 1.806716733s: waiting for machine to come up
	I0916 10:38:20.447563   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:20.447961   22121 main.go:141] libmachine: (ha-244475) DBG | unable to find current IP address of domain ha-244475 in network mk-ha-244475
	I0916 10:38:20.447980   22121 main.go:141] libmachine: (ha-244475) DBG | I0916 10:38:20.447880   22144 retry.go:31] will retry after 2.186647075s: waiting for machine to come up
	I0916 10:38:22.636294   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:22.636702   22121 main.go:141] libmachine: (ha-244475) DBG | unable to find current IP address of domain ha-244475 in network mk-ha-244475
	I0916 10:38:22.636728   22121 main.go:141] libmachine: (ha-244475) DBG | I0916 10:38:22.636657   22144 retry.go:31] will retry after 2.089501385s: waiting for machine to come up
	I0916 10:38:24.728099   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:24.728486   22121 main.go:141] libmachine: (ha-244475) DBG | unable to find current IP address of domain ha-244475 in network mk-ha-244475
	I0916 10:38:24.728515   22121 main.go:141] libmachine: (ha-244475) DBG | I0916 10:38:24.728423   22144 retry.go:31] will retry after 2.189050091s: waiting for machine to come up
	I0916 10:38:26.918420   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:26.918845   22121 main.go:141] libmachine: (ha-244475) DBG | unable to find current IP address of domain ha-244475 in network mk-ha-244475
	I0916 10:38:26.918870   22121 main.go:141] libmachine: (ha-244475) DBG | I0916 10:38:26.918800   22144 retry.go:31] will retry after 2.857721999s: waiting for machine to come up
	I0916 10:38:29.779219   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:29.779636   22121 main.go:141] libmachine: (ha-244475) DBG | unable to find current IP address of domain ha-244475 in network mk-ha-244475
	I0916 10:38:29.779664   22121 main.go:141] libmachine: (ha-244475) DBG | I0916 10:38:29.779599   22144 retry.go:31] will retry after 5.359183826s: waiting for machine to come up
	I0916 10:38:35.141883   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:35.142271   22121 main.go:141] libmachine: (ha-244475) Found IP for machine: 192.168.39.19
	I0916 10:38:35.142292   22121 main.go:141] libmachine: (ha-244475) Reserving static IP address...
	I0916 10:38:35.142311   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has current primary IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:35.142733   22121 main.go:141] libmachine: (ha-244475) DBG | unable to find host DHCP lease matching {name: "ha-244475", mac: "52:54:00:31:d1:43", ip: "192.168.39.19"} in network mk-ha-244475
	I0916 10:38:35.214446   22121 main.go:141] libmachine: (ha-244475) DBG | Getting to WaitForSSH function...
	I0916 10:38:35.214471   22121 main.go:141] libmachine: (ha-244475) Reserved static IP address: 192.168.39.19
	I0916 10:38:35.214482   22121 main.go:141] libmachine: (ha-244475) Waiting for SSH to be available...
	I0916 10:38:35.216924   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:35.217367   22121 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:minikube Clientid:01:52:54:00:31:d1:43}
	I0916 10:38:35.217394   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:35.217529   22121 main.go:141] libmachine: (ha-244475) DBG | Using SSH client type: external
	I0916 10:38:35.217557   22121 main.go:141] libmachine: (ha-244475) DBG | Using SSH private key: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/id_rsa (-rw-------)
	I0916 10:38:35.217585   22121 main.go:141] libmachine: (ha-244475) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.19 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0916 10:38:35.217594   22121 main.go:141] libmachine: (ha-244475) DBG | About to run SSH command:
	I0916 10:38:35.217608   22121 main.go:141] libmachine: (ha-244475) DBG | exit 0
	I0916 10:38:35.349373   22121 main.go:141] libmachine: (ha-244475) DBG | SSH cmd err, output: <nil>: 
	I0916 10:38:35.349683   22121 main.go:141] libmachine: (ha-244475) KVM machine creation complete!
	I0916 10:38:35.349969   22121 main.go:141] libmachine: (ha-244475) Calling .GetConfigRaw
	I0916 10:38:35.350496   22121 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:38:35.350688   22121 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:38:35.350823   22121 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0916 10:38:35.350834   22121 main.go:141] libmachine: (ha-244475) Calling .GetState
	I0916 10:38:35.351935   22121 main.go:141] libmachine: Detecting operating system of created instance...
	I0916 10:38:35.351949   22121 main.go:141] libmachine: Waiting for SSH to be available...
	I0916 10:38:35.351954   22121 main.go:141] libmachine: Getting to WaitForSSH function...
	I0916 10:38:35.351959   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:38:35.353913   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:35.354208   22121 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:38:35.354235   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:35.354319   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:38:35.354463   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:38:35.354605   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:38:35.354695   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:38:35.354841   22121 main.go:141] libmachine: Using SSH client type: native
	I0916 10:38:35.355041   22121 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0916 10:38:35.355053   22121 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0916 10:38:35.464485   22121 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 10:38:35.464507   22121 main.go:141] libmachine: Detecting the provisioner...
	I0916 10:38:35.464514   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:38:35.467101   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:35.467423   22121 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:38:35.467458   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:35.467566   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:38:35.467765   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:38:35.467917   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:38:35.468144   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:38:35.468285   22121 main.go:141] libmachine: Using SSH client type: native
	I0916 10:38:35.468476   22121 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0916 10:38:35.468489   22121 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0916 10:38:35.582051   22121 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0916 10:38:35.582131   22121 main.go:141] libmachine: found compatible host: buildroot
	I0916 10:38:35.582143   22121 main.go:141] libmachine: Provisioning with buildroot...
	I0916 10:38:35.582154   22121 main.go:141] libmachine: (ha-244475) Calling .GetMachineName
	I0916 10:38:35.582407   22121 buildroot.go:166] provisioning hostname "ha-244475"
	I0916 10:38:35.582432   22121 main.go:141] libmachine: (ha-244475) Calling .GetMachineName
	I0916 10:38:35.582675   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:38:35.585276   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:35.585633   22121 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:38:35.585660   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:35.585766   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:38:35.585943   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:38:35.586081   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:38:35.586209   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:38:35.586353   22121 main.go:141] libmachine: Using SSH client type: native
	I0916 10:38:35.586554   22121 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0916 10:38:35.586566   22121 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-244475 && echo "ha-244475" | sudo tee /etc/hostname
	I0916 10:38:35.712268   22121 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-244475
	
	I0916 10:38:35.712302   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:38:35.715043   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:35.715376   22121 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:38:35.715404   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:35.715689   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:38:35.715894   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:38:35.716072   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:38:35.716203   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:38:35.716355   22121 main.go:141] libmachine: Using SSH client type: native
	I0916 10:38:35.716526   22121 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0916 10:38:35.716543   22121 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-244475' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-244475/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-244475' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:38:35.838701   22121 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 10:38:35.838734   22121 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3851/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3851/.minikube}
	I0916 10:38:35.838786   22121 buildroot.go:174] setting up certificates
	I0916 10:38:35.838795   22121 provision.go:84] configureAuth start
	I0916 10:38:35.838807   22121 main.go:141] libmachine: (ha-244475) Calling .GetMachineName
	I0916 10:38:35.839053   22121 main.go:141] libmachine: (ha-244475) Calling .GetIP
	I0916 10:38:35.842260   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:35.842666   22121 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:38:35.842713   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:35.842874   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:38:35.845198   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:35.845480   22121 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:38:35.845503   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:35.845681   22121 provision.go:143] copyHostCerts
	I0916 10:38:35.845727   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem
	I0916 10:38:35.845766   22121 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem, removing ...
	I0916 10:38:35.845777   22121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem
	I0916 10:38:35.845857   22121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem (1679 bytes)
	I0916 10:38:35.845945   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem
	I0916 10:38:35.845971   22121 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem, removing ...
	I0916 10:38:35.845975   22121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem
	I0916 10:38:35.846004   22121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem (1082 bytes)
	I0916 10:38:35.846056   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem
	I0916 10:38:35.846073   22121 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem, removing ...
	I0916 10:38:35.846079   22121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem
	I0916 10:38:35.846099   22121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem (1123 bytes)
	I0916 10:38:35.846153   22121 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem org=jenkins.ha-244475 san=[127.0.0.1 192.168.39.19 ha-244475 localhost minikube]
	I0916 10:38:35.972514   22121 provision.go:177] copyRemoteCerts
	I0916 10:38:35.972572   22121 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:38:35.972592   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:38:35.975467   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:35.975802   22121 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:38:35.975829   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:35.976035   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:38:35.976192   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:38:35.976307   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:38:35.976395   22121 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/id_rsa Username:docker}
	I0916 10:38:36.064079   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 10:38:36.064162   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 10:38:36.088374   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 10:38:36.088445   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0916 10:38:36.112864   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 10:38:36.112943   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 10:38:36.137799   22121 provision.go:87] duration metric: took 298.990788ms to configureAuth
	I0916 10:38:36.137824   22121 buildroot.go:189] setting minikube options for container-runtime
	I0916 10:38:36.137990   22121 config.go:182] Loaded profile config "ha-244475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:38:36.138068   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:38:36.140775   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:36.141141   22121 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:38:36.141167   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:36.141370   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:38:36.141557   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:38:36.141711   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:38:36.141862   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:38:36.142012   22121 main.go:141] libmachine: Using SSH client type: native
	I0916 10:38:36.142173   22121 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0916 10:38:36.142190   22121 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 10:38:36.366260   22121 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 10:38:36.366288   22121 main.go:141] libmachine: Checking connection to Docker...
	I0916 10:38:36.366297   22121 main.go:141] libmachine: (ha-244475) Calling .GetURL
	I0916 10:38:36.367546   22121 main.go:141] libmachine: (ha-244475) DBG | Using libvirt version 6000000
	I0916 10:38:36.369543   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:36.369862   22121 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:38:36.369884   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:36.370034   22121 main.go:141] libmachine: Docker is up and running!
	I0916 10:38:36.370047   22121 main.go:141] libmachine: Reticulating splines...
	I0916 10:38:36.370054   22121 client.go:171] duration metric: took 24.083609722s to LocalClient.Create
	I0916 10:38:36.370077   22121 start.go:167] duration metric: took 24.083661787s to libmachine.API.Create "ha-244475"
	I0916 10:38:36.370089   22121 start.go:293] postStartSetup for "ha-244475" (driver="kvm2")
	I0916 10:38:36.370118   22121 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:38:36.370140   22121 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:38:36.370345   22121 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:38:36.370363   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:38:36.372350   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:36.372637   22121 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:38:36.372658   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:36.372800   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:38:36.372958   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:38:36.373108   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:38:36.373239   22121 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/id_rsa Username:docker}
	I0916 10:38:36.459818   22121 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:38:36.464279   22121 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 10:38:36.464304   22121 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3851/.minikube/addons for local assets ...
	I0916 10:38:36.464360   22121 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3851/.minikube/files for local assets ...
	I0916 10:38:36.464428   22121 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem -> 112032.pem in /etc/ssl/certs
	I0916 10:38:36.464436   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem -> /etc/ssl/certs/112032.pem
	I0916 10:38:36.464531   22121 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 10:38:36.474459   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem --> /etc/ssl/certs/112032.pem (1708 bytes)
	I0916 10:38:36.498853   22121 start.go:296] duration metric: took 128.751453ms for postStartSetup
	I0916 10:38:36.498905   22121 main.go:141] libmachine: (ha-244475) Calling .GetConfigRaw
	I0916 10:38:36.499551   22121 main.go:141] libmachine: (ha-244475) Calling .GetIP
	I0916 10:38:36.502104   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:36.502435   22121 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:38:36.502456   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:36.502764   22121 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/config.json ...
	I0916 10:38:36.502952   22121 start.go:128] duration metric: took 24.234389874s to createHost
	I0916 10:38:36.502971   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:38:36.505214   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:36.505496   22121 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:38:36.505513   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:36.505660   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:38:36.505815   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:38:36.505951   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:38:36.506052   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:38:36.506165   22121 main.go:141] libmachine: Using SSH client type: native
	I0916 10:38:36.506383   22121 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0916 10:38:36.506406   22121 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 10:38:36.618115   22121 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726483116.595653625
	
	I0916 10:38:36.618143   22121 fix.go:216] guest clock: 1726483116.595653625
	I0916 10:38:36.618151   22121 fix.go:229] Guest: 2024-09-16 10:38:36.595653625 +0000 UTC Remote: 2024-09-16 10:38:36.502962795 +0000 UTC m=+24.335728547 (delta=92.69083ms)
	I0916 10:38:36.618190   22121 fix.go:200] guest clock delta is within tolerance: 92.69083ms
	I0916 10:38:36.618197   22121 start.go:83] releasing machines lock for "ha-244475", held for 24.349718291s
	I0916 10:38:36.618226   22121 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:38:36.618490   22121 main.go:141] libmachine: (ha-244475) Calling .GetIP
	I0916 10:38:36.621177   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:36.621552   22121 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:38:36.621576   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:36.621715   22121 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:38:36.622182   22121 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:38:36.622349   22121 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:38:36.622457   22121 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:38:36.622504   22121 ssh_runner.go:195] Run: cat /version.json
	I0916 10:38:36.622532   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:38:36.622507   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:38:36.625311   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:36.625336   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:36.625701   22121 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:38:36.625729   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:36.625752   22121 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:38:36.625773   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:36.625849   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:38:36.625996   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:38:36.626070   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:38:36.626190   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:38:36.626226   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:38:36.626304   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:38:36.626347   22121 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/id_rsa Username:docker}
	I0916 10:38:36.626412   22121 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/id_rsa Username:docker}
	I0916 10:38:36.731813   22121 ssh_runner.go:195] Run: systemctl --version
	I0916 10:38:36.738034   22121 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 10:38:36.897823   22121 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0916 10:38:36.903947   22121 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 10:38:36.904037   22121 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:38:36.920981   22121 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0916 10:38:36.921002   22121 start.go:495] detecting cgroup driver to use...
	I0916 10:38:36.921062   22121 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 10:38:36.936473   22121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 10:38:36.950885   22121 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:38:36.950937   22121 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:38:36.965062   22121 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:38:36.979049   22121 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:38:37.089419   22121 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:38:37.234470   22121 docker.go:233] disabling docker service ...
	I0916 10:38:37.234570   22121 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:38:37.249643   22121 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:38:37.263395   22121 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:38:37.396923   22121 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:38:37.530822   22121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:38:37.545513   22121 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:38:37.564576   22121 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 10:38:37.564639   22121 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:38:37.575771   22121 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 10:38:37.575830   22121 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:38:37.586212   22121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:38:37.597160   22121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:38:37.607962   22121 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:38:37.619040   22121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:38:37.630000   22121 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:38:37.647480   22121 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:38:37.658746   22121 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:38:37.668801   22121 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0916 10:38:37.668864   22121 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0916 10:38:37.683050   22121 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:38:37.693269   22121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:38:37.804210   22121 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 10:38:37.895246   22121 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 10:38:37.895322   22121 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 10:38:37.900048   22121 start.go:563] Will wait 60s for crictl version
	I0916 10:38:37.900102   22121 ssh_runner.go:195] Run: which crictl
	I0916 10:38:37.903675   22121 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:38:37.941447   22121 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0916 10:38:37.941534   22121 ssh_runner.go:195] Run: crio --version
	I0916 10:38:37.969936   22121 ssh_runner.go:195] Run: crio --version
	I0916 10:38:38.002089   22121 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0916 10:38:38.003428   22121 main.go:141] libmachine: (ha-244475) Calling .GetIP
	I0916 10:38:38.006180   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:38.006490   22121 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:38:38.006513   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:38.006728   22121 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0916 10:38:38.011175   22121 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:38:38.024444   22121 kubeadm.go:883] updating cluster {Name:ha-244475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-244475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 10:38:38.024541   22121 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:38:38.024583   22121 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:38:38.057652   22121 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0916 10:38:38.057726   22121 ssh_runner.go:195] Run: which lz4
	I0916 10:38:38.061778   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0916 10:38:38.061885   22121 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0916 10:38:38.066142   22121 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0916 10:38:38.066169   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0916 10:38:39.414979   22121 crio.go:462] duration metric: took 1.353135329s to copy over tarball
	I0916 10:38:39.415060   22121 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0916 10:38:41.361544   22121 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.94645378s)
	I0916 10:38:41.361572   22121 crio.go:469] duration metric: took 1.946564398s to extract the tarball
	I0916 10:38:41.361580   22121 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0916 10:38:41.398599   22121 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:38:41.443342   22121 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 10:38:41.443365   22121 cache_images.go:84] Images are preloaded, skipping loading
	I0916 10:38:41.443372   22121 kubeadm.go:934] updating node { 192.168.39.19 8443 v1.31.1 crio true true} ...
	I0916 10:38:41.443503   22121 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-244475 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.19
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-244475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 10:38:41.443571   22121 ssh_runner.go:195] Run: crio config
	I0916 10:38:41.489336   22121 cni.go:84] Creating CNI manager for ""
	I0916 10:38:41.489363   22121 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0916 10:38:41.489374   22121 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 10:38:41.489401   22121 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.19 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-244475 NodeName:ha-244475 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.19"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.19 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 10:38:41.489526   22121 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.19
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-244475"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.19
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.19"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 10:38:41.489548   22121 kube-vip.go:115] generating kube-vip config ...
	I0916 10:38:41.489586   22121 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0916 10:38:41.505696   22121 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0916 10:38:41.505807   22121 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0916 10:38:41.505873   22121 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:38:41.516304   22121 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:38:41.516364   22121 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0916 10:38:41.525992   22121 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0916 10:38:41.542448   22121 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:38:41.558743   22121 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0916 10:38:41.575779   22121 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0916 10:38:41.592567   22121 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0916 10:38:41.596480   22121 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:38:41.608839   22121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:38:41.718297   22121 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:38:41.736212   22121 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475 for IP: 192.168.39.19
	I0916 10:38:41.736238   22121 certs.go:194] generating shared ca certs ...
	I0916 10:38:41.736259   22121 certs.go:226] acquiring lock for ca certs: {Name:mkc5eb18b90c7d501da61b378999fd65b785fd5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:38:41.736446   22121 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key
	I0916 10:38:41.736500   22121 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key
	I0916 10:38:41.736517   22121 certs.go:256] generating profile certs ...
	I0916 10:38:41.736581   22121 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/client.key
	I0916 10:38:41.736604   22121 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/client.crt with IP's: []
	I0916 10:38:41.887766   22121 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/client.crt ...
	I0916 10:38:41.887792   22121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/client.crt: {Name:mkeee24c57991a4cf2957d59b85c7dbd3c8f2331 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:38:41.887965   22121 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/client.key ...
	I0916 10:38:41.887976   22121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/client.key: {Name:mkec5e765e721654d343964b8e5f1903226a6b7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:38:41.888056   22121 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key.c43f27e6
	I0916 10:38:41.888070   22121 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt.c43f27e6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.19 192.168.39.254]
	I0916 10:38:42.038292   22121 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt.c43f27e6 ...
	I0916 10:38:42.038321   22121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt.c43f27e6: {Name:mk7099a2c62f50aa06662b965a0c9069ae5d1f53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:38:42.038481   22121 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key.c43f27e6 ...
	I0916 10:38:42.038493   22121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key.c43f27e6: {Name:mkcc105b422dfe70444931267745dbca1edf49bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:38:42.038566   22121 certs.go:381] copying /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt.c43f27e6 -> /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt
	I0916 10:38:42.038652   22121 certs.go:385] copying /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key.c43f27e6 -> /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key
	I0916 10:38:42.038706   22121 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.key
	I0916 10:38:42.038720   22121 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.crt with IP's: []
	I0916 10:38:42.190304   22121 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.crt ...
	I0916 10:38:42.190334   22121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.crt: {Name:mk8f534095f1a4c3c0f97ea592b35a6ed96cf75d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:38:42.190493   22121 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.key ...
	I0916 10:38:42.190504   22121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.key: {Name:mkb1fc3820bed6bb42a1e04c6b2b6ddfc43271a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:38:42.190577   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 10:38:42.190595   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 10:38:42.190607   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:38:42.190620   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:38:42.190630   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 10:38:42.190643   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 10:38:42.190653   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 10:38:42.190665   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 10:38:42.190709   22121 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem (1338 bytes)
	W0916 10:38:42.190745   22121 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203_empty.pem, impossibly tiny 0 bytes
	I0916 10:38:42.190754   22121 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:38:42.190774   22121 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem (1082 bytes)
	I0916 10:38:42.190818   22121 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:38:42.190848   22121 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem (1679 bytes)
	I0916 10:38:42.190886   22121 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem (1708 bytes)
	I0916 10:38:42.190919   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem -> /usr/share/ca-certificates/11203.pem
	I0916 10:38:42.190932   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem -> /usr/share/ca-certificates/112032.pem
	I0916 10:38:42.190944   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:38:42.191452   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:38:42.217887   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:38:42.242446   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:38:42.266461   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:38:42.289939   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0916 10:38:42.313172   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 10:38:42.337118   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:38:42.360742   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 10:38:42.383602   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem --> /usr/share/ca-certificates/11203.pem (1338 bytes)
	I0916 10:38:42.406581   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem --> /usr/share/ca-certificates/112032.pem (1708 bytes)
	I0916 10:38:42.429672   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:38:42.452865   22121 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 10:38:42.469058   22121 ssh_runner.go:195] Run: openssl version
	I0916 10:38:42.474734   22121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11203.pem && ln -fs /usr/share/ca-certificates/11203.pem /etc/ssl/certs/11203.pem"
	I0916 10:38:42.485883   22121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11203.pem
	I0916 10:38:42.490265   22121 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11203.pem
	I0916 10:38:42.490308   22121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11203.pem
	I0916 10:38:42.495983   22121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11203.pem /etc/ssl/certs/51391683.0"
	I0916 10:38:42.510198   22121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112032.pem && ln -fs /usr/share/ca-certificates/112032.pem /etc/ssl/certs/112032.pem"
	I0916 10:38:42.521298   22121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112032.pem
	I0916 10:38:42.527236   22121 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112032.pem
	I0916 10:38:42.527293   22121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112032.pem
	I0916 10:38:42.533552   22121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112032.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 10:38:42.549332   22121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:38:42.561819   22121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:38:42.568456   22121 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:38:42.568516   22121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:38:42.575583   22121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 10:38:42.586818   22121 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:38:42.590763   22121 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 10:38:42.590815   22121 kubeadm.go:392] StartCluster: {Name:ha-244475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-244475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:38:42.590883   22121 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 10:38:42.590943   22121 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 10:38:42.628496   22121 cri.go:89] found id: ""
	I0916 10:38:42.628553   22121 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 10:38:42.638691   22121 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 10:38:42.648671   22121 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 10:38:42.658424   22121 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 10:38:42.658444   22121 kubeadm.go:157] found existing configuration files:
	
	I0916 10:38:42.658483   22121 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 10:38:42.667543   22121 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 10:38:42.667594   22121 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 10:38:42.677200   22121 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 10:38:42.686120   22121 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 10:38:42.686169   22121 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 10:38:42.695575   22121 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 10:38:42.704585   22121 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 10:38:42.704673   22121 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 10:38:42.714549   22121 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 10:38:42.723658   22121 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 10:38:42.723715   22121 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 10:38:42.733164   22121 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0916 10:38:42.842015   22121 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 10:38:42.842090   22121 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 10:38:42.961804   22121 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 10:38:42.961936   22121 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 10:38:42.962041   22121 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 10:38:42.973403   22121 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 10:38:42.975286   22121 out.go:235]   - Generating certificates and keys ...
	I0916 10:38:42.975379   22121 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 10:38:42.975457   22121 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 10:38:43.030083   22121 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 10:38:43.295745   22121 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 10:38:43.465239   22121 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 10:38:43.533050   22121 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 10:38:43.596361   22121 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 10:38:43.596500   22121 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-244475 localhost] and IPs [192.168.39.19 127.0.0.1 ::1]
	I0916 10:38:43.798754   22121 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 10:38:43.798893   22121 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-244475 localhost] and IPs [192.168.39.19 127.0.0.1 ::1]
	I0916 10:38:43.873275   22121 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 10:38:44.075110   22121 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 10:38:44.129628   22121 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 10:38:44.129726   22121 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 10:38:44.322901   22121 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 10:38:44.558047   22121 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 10:38:44.903170   22121 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 10:38:45.001802   22121 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 10:38:45.146307   22121 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 10:38:45.146914   22121 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 10:38:45.150330   22121 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 10:38:45.152199   22121 out.go:235]   - Booting up control plane ...
	I0916 10:38:45.152314   22121 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 10:38:45.152406   22121 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 10:38:45.152956   22121 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 10:38:45.168296   22121 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 10:38:45.176973   22121 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 10:38:45.177059   22121 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 10:38:45.314163   22121 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 10:38:45.314301   22121 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 10:38:45.816204   22121 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.333685ms
	I0916 10:38:45.816311   22121 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 10:38:51.792476   22121 kubeadm.go:310] [api-check] The API server is healthy after 5.978803709s
	I0916 10:38:51.807629   22121 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 10:38:51.827911   22121 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 10:38:51.862228   22121 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 10:38:51.862446   22121 kubeadm.go:310] [mark-control-plane] Marking the node ha-244475 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 10:38:51.880371   22121 kubeadm.go:310] [bootstrap-token] Using token: z03lik.8myj2g1lawnpsxwz
	I0916 10:38:51.881728   22121 out.go:235]   - Configuring RBAC rules ...
	I0916 10:38:51.881867   22121 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 10:38:51.892035   22121 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 10:38:51.905643   22121 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 10:38:51.910644   22121 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 10:38:51.914471   22121 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 10:38:51.919085   22121 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 10:38:52.199036   22121 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 10:38:52.641913   22121 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 10:38:53.198817   22121 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 10:38:53.200731   22121 kubeadm.go:310] 
	I0916 10:38:53.200796   22121 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 10:38:53.200801   22121 kubeadm.go:310] 
	I0916 10:38:53.200897   22121 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 10:38:53.200923   22121 kubeadm.go:310] 
	I0916 10:38:53.200967   22121 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 10:38:53.201048   22121 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 10:38:53.201151   22121 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 10:38:53.201169   22121 kubeadm.go:310] 
	I0916 10:38:53.201241   22121 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 10:38:53.201252   22121 kubeadm.go:310] 
	I0916 10:38:53.201327   22121 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 10:38:53.201342   22121 kubeadm.go:310] 
	I0916 10:38:53.201417   22121 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 10:38:53.201524   22121 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 10:38:53.201620   22121 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 10:38:53.201636   22121 kubeadm.go:310] 
	I0916 10:38:53.201729   22121 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 10:38:53.201854   22121 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 10:38:53.201865   22121 kubeadm.go:310] 
	I0916 10:38:53.201980   22121 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token z03lik.8myj2g1lawnpsxwz \
	I0916 10:38:53.202117   22121 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:18e2e9ba1c06caae6264fad234e64aee68efc10986a3ab0ed9e448768962daf7 \
	I0916 10:38:53.202140   22121 kubeadm.go:310] 	--control-plane 
	I0916 10:38:53.202144   22121 kubeadm.go:310] 
	I0916 10:38:53.202267   22121 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 10:38:53.202284   22121 kubeadm.go:310] 
	I0916 10:38:53.202396   22121 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token z03lik.8myj2g1lawnpsxwz \
	I0916 10:38:53.202519   22121 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:18e2e9ba1c06caae6264fad234e64aee68efc10986a3ab0ed9e448768962daf7 
	I0916 10:38:53.204612   22121 kubeadm.go:310] W0916 10:38:42.823368     836 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:38:53.204909   22121 kubeadm.go:310] W0916 10:38:42.824196     836 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:38:53.205016   22121 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 10:38:53.205039   22121 cni.go:84] Creating CNI manager for ""
	I0916 10:38:53.205046   22121 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0916 10:38:53.206707   22121 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0916 10:38:53.207859   22121 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 10:38:53.213780   22121 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0916 10:38:53.213797   22121 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 10:38:53.232952   22121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0916 10:38:53.644721   22121 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 10:38:53.644772   22121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:38:53.644775   22121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-244475 minikube.k8s.io/updated_at=2024_09_16T10_38_53_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=ha-244475 minikube.k8s.io/primary=true
	I0916 10:38:53.828940   22121 ops.go:34] apiserver oom_adj: -16
	I0916 10:38:53.829033   22121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:38:54.329149   22121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:38:54.829567   22121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:38:55.329641   22121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:38:55.829630   22121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:38:56.329847   22121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:38:56.829468   22121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:38:57.329221   22121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:38:57.464394   22121 kubeadm.go:1113] duration metric: took 3.819679278s to wait for elevateKubeSystemPrivileges
	I0916 10:38:57.464429   22121 kubeadm.go:394] duration metric: took 14.873616788s to StartCluster
	I0916 10:38:57.464458   22121 settings.go:142] acquiring lock: {Name:mka3093e7066e81538e0a2685a28fdafde8d951b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:38:57.464557   22121 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3851/kubeconfig
	I0916 10:38:57.465226   22121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/kubeconfig: {Name:mkd6460249badead51f07eabf604da45528f6a24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:38:57.465443   22121 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 10:38:57.465469   22121 start.go:241] waiting for startup goroutines ...
	I0916 10:38:57.465470   22121 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 10:38:57.465485   22121 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 10:38:57.465569   22121 addons.go:69] Setting storage-provisioner=true in profile "ha-244475"
	I0916 10:38:57.465585   22121 addons.go:69] Setting default-storageclass=true in profile "ha-244475"
	I0916 10:38:57.465603   22121 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-244475"
	I0916 10:38:57.465609   22121 addons.go:234] Setting addon storage-provisioner=true in "ha-244475"
	I0916 10:38:57.465634   22121 host.go:66] Checking if "ha-244475" exists ...
	I0916 10:38:57.465683   22121 config.go:182] Loaded profile config "ha-244475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:38:57.466032   22121 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:38:57.466071   22121 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:38:57.466075   22121 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:38:57.466116   22121 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:38:57.481103   22121 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37013
	I0916 10:38:57.481138   22121 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34115
	I0916 10:38:57.481582   22121 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:38:57.481618   22121 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:38:57.482091   22121 main.go:141] libmachine: Using API Version  1
	I0916 10:38:57.482118   22121 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:38:57.482234   22121 main.go:141] libmachine: Using API Version  1
	I0916 10:38:57.482258   22121 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:38:57.482437   22121 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:38:57.482607   22121 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:38:57.482769   22121 main.go:141] libmachine: (ha-244475) Calling .GetState
	I0916 10:38:57.483070   22121 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:38:57.483111   22121 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:38:57.484929   22121 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3851/kubeconfig
	I0916 10:38:57.485193   22121 kapi.go:59] client config for ha-244475: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 10:38:57.485590   22121 cert_rotation.go:140] Starting client certificate rotation controller
	I0916 10:38:57.485818   22121 addons.go:234] Setting addon default-storageclass=true in "ha-244475"
	I0916 10:38:57.485861   22121 host.go:66] Checking if "ha-244475" exists ...
	I0916 10:38:57.486134   22121 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:38:57.486172   22121 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:38:57.498299   22121 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33969
	I0916 10:38:57.498828   22121 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:38:57.499447   22121 main.go:141] libmachine: Using API Version  1
	I0916 10:38:57.499474   22121 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:38:57.499850   22121 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:38:57.500054   22121 main.go:141] libmachine: (ha-244475) Calling .GetState
	I0916 10:38:57.500552   22121 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40651
	I0916 10:38:57.500918   22121 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:38:57.501427   22121 main.go:141] libmachine: Using API Version  1
	I0916 10:38:57.501446   22121 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:38:57.501839   22121 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:38:57.501908   22121 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:38:57.502610   22121 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:38:57.502657   22121 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:38:57.503651   22121 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 10:38:57.504966   22121 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:38:57.504987   22121 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 10:38:57.505003   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:38:57.508156   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:57.508589   22121 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:38:57.508615   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:57.508829   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:38:57.508992   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:38:57.509171   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:38:57.509294   22121 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/id_rsa Username:docker}
	I0916 10:38:57.518682   22121 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46723
	I0916 10:38:57.519147   22121 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:38:57.519675   22121 main.go:141] libmachine: Using API Version  1
	I0916 10:38:57.519702   22121 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:38:57.520007   22121 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:38:57.520169   22121 main.go:141] libmachine: (ha-244475) Calling .GetState
	I0916 10:38:57.521733   22121 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:38:57.521948   22121 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 10:38:57.521971   22121 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 10:38:57.521995   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:38:57.524943   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:57.525414   22121 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:38:57.525441   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:57.525578   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:38:57.525724   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:38:57.525845   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:38:57.525926   22121 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/id_rsa Username:docker}
	I0916 10:38:57.660884   22121 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 10:38:57.725204   22121 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:38:57.781501   22121 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 10:38:58.313582   22121 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0916 10:38:58.587280   22121 main.go:141] libmachine: Making call to close driver server
	I0916 10:38:58.587305   22121 main.go:141] libmachine: (ha-244475) Calling .Close
	I0916 10:38:58.587383   22121 main.go:141] libmachine: Making call to close driver server
	I0916 10:38:58.587408   22121 main.go:141] libmachine: (ha-244475) Calling .Close
	I0916 10:38:58.587584   22121 main.go:141] libmachine: (ha-244475) DBG | Closing plugin on server side
	I0916 10:38:58.587596   22121 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:38:58.587649   22121 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:38:58.587677   22121 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:38:58.587686   22121 main.go:141] libmachine: (ha-244475) DBG | Closing plugin on server side
	I0916 10:38:58.587689   22121 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:38:58.587706   22121 main.go:141] libmachine: Making call to close driver server
	I0916 10:38:58.587679   22121 main.go:141] libmachine: Making call to close driver server
	I0916 10:38:58.587713   22121 main.go:141] libmachine: (ha-244475) Calling .Close
	I0916 10:38:58.587722   22121 main.go:141] libmachine: (ha-244475) Calling .Close
	I0916 10:38:58.587906   22121 main.go:141] libmachine: (ha-244475) DBG | Closing plugin on server side
	I0916 10:38:58.587935   22121 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:38:58.587948   22121 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:38:58.587979   22121 main.go:141] libmachine: (ha-244475) DBG | Closing plugin on server side
	I0916 10:38:58.588055   22121 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:38:58.588073   22121 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:38:58.588171   22121 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0916 10:38:58.588199   22121 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0916 10:38:58.588294   22121 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0916 10:38:58.588300   22121 round_trippers.go:469] Request Headers:
	I0916 10:38:58.588310   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:58.588315   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:58.605995   22121 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0916 10:38:58.606551   22121 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0916 10:38:58.606569   22121 round_trippers.go:469] Request Headers:
	I0916 10:38:58.606579   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:58.606584   22121 round_trippers.go:473]     Content-Type: application/json
	I0916 10:38:58.606587   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:58.610730   22121 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:38:58.610908   22121 main.go:141] libmachine: Making call to close driver server
	I0916 10:38:58.610929   22121 main.go:141] libmachine: (ha-244475) Calling .Close
	I0916 10:38:58.611167   22121 main.go:141] libmachine: (ha-244475) DBG | Closing plugin on server side
	I0916 10:38:58.611207   22121 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:38:58.611219   22121 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:38:58.612831   22121 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0916 10:38:58.614176   22121 addons.go:510] duration metric: took 1.1486947s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0916 10:38:58.614214   22121 start.go:246] waiting for cluster config update ...
	I0916 10:38:58.614228   22121 start.go:255] writing updated cluster config ...
	I0916 10:38:58.615876   22121 out.go:201] 
	I0916 10:38:58.617218   22121 config.go:182] Loaded profile config "ha-244475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:38:58.617303   22121 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/config.json ...
	I0916 10:38:58.618897   22121 out.go:177] * Starting "ha-244475-m02" control-plane node in "ha-244475" cluster
	I0916 10:38:58.620429   22121 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:38:58.620447   22121 cache.go:56] Caching tarball of preloaded images
	I0916 10:38:58.620539   22121 preload.go:172] Found /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 10:38:58.620553   22121 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 10:38:58.620632   22121 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/config.json ...
	I0916 10:38:58.620820   22121 start.go:360] acquireMachinesLock for ha-244475-m02: {Name:mk413037138532b69f77e412c3960796c09316f1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:38:58.620867   22121 start.go:364] duration metric: took 27.412µs to acquireMachinesLock for "ha-244475-m02"
	I0916 10:38:58.620892   22121 start.go:93] Provisioning new machine with config: &{Name:ha-244475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-244475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 10:38:58.620984   22121 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0916 10:38:58.622503   22121 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0916 10:38:58.622584   22121 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:38:58.622615   22121 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:38:58.638413   22121 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33507
	I0916 10:38:58.638950   22121 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:38:58.639464   22121 main.go:141] libmachine: Using API Version  1
	I0916 10:38:58.639492   22121 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:38:58.639818   22121 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:38:58.640042   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetMachineName
	I0916 10:38:58.640214   22121 main.go:141] libmachine: (ha-244475-m02) Calling .DriverName
	I0916 10:38:58.640380   22121 start.go:159] libmachine.API.Create for "ha-244475" (driver="kvm2")
	I0916 10:38:58.640411   22121 client.go:168] LocalClient.Create starting
	I0916 10:38:58.640444   22121 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem
	I0916 10:38:58.640482   22121 main.go:141] libmachine: Decoding PEM data...
	I0916 10:38:58.640501   22121 main.go:141] libmachine: Parsing certificate...
	I0916 10:38:58.640575   22121 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem
	I0916 10:38:58.640600   22121 main.go:141] libmachine: Decoding PEM data...
	I0916 10:38:58.640616   22121 main.go:141] libmachine: Parsing certificate...
	I0916 10:38:58.640639   22121 main.go:141] libmachine: Running pre-create checks...
	I0916 10:38:58.640650   22121 main.go:141] libmachine: (ha-244475-m02) Calling .PreCreateCheck
	I0916 10:38:58.640820   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetConfigRaw
	I0916 10:38:58.641229   22121 main.go:141] libmachine: Creating machine...
	I0916 10:38:58.641245   22121 main.go:141] libmachine: (ha-244475-m02) Calling .Create
	I0916 10:38:58.641375   22121 main.go:141] libmachine: (ha-244475-m02) Creating KVM machine...
	I0916 10:38:58.642569   22121 main.go:141] libmachine: (ha-244475-m02) DBG | found existing default KVM network
	I0916 10:38:58.642747   22121 main.go:141] libmachine: (ha-244475-m02) DBG | found existing private KVM network mk-ha-244475
	I0916 10:38:58.642926   22121 main.go:141] libmachine: (ha-244475-m02) Setting up store path in /home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m02 ...
	I0916 10:38:58.642950   22121 main.go:141] libmachine: (ha-244475-m02) Building disk image from file:///home/jenkins/minikube-integration/19651-3851/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso
	I0916 10:38:58.643021   22121 main.go:141] libmachine: (ha-244475-m02) DBG | I0916 10:38:58.642905   22483 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 10:38:58.643109   22121 main.go:141] libmachine: (ha-244475-m02) Downloading /home/jenkins/minikube-integration/19651-3851/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19651-3851/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso...
	I0916 10:38:58.883746   22121 main.go:141] libmachine: (ha-244475-m02) DBG | I0916 10:38:58.883623   22483 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m02/id_rsa...
	I0916 10:38:58.990233   22121 main.go:141] libmachine: (ha-244475-m02) DBG | I0916 10:38:58.990092   22483 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m02/ha-244475-m02.rawdisk...
	I0916 10:38:58.990284   22121 main.go:141] libmachine: (ha-244475-m02) DBG | Writing magic tar header
	I0916 10:38:58.990302   22121 main.go:141] libmachine: (ha-244475-m02) DBG | Writing SSH key tar header
	I0916 10:38:58.990319   22121 main.go:141] libmachine: (ha-244475-m02) DBG | I0916 10:38:58.990203   22483 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m02 ...
	I0916 10:38:58.990329   22121 main.go:141] libmachine: (ha-244475-m02) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m02 (perms=drwx------)
	I0916 10:38:58.990341   22121 main.go:141] libmachine: (ha-244475-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m02
	I0916 10:38:58.990351   22121 main.go:141] libmachine: (ha-244475-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851/.minikube/machines
	I0916 10:38:58.990359   22121 main.go:141] libmachine: (ha-244475-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 10:38:58.990365   22121 main.go:141] libmachine: (ha-244475-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851
	I0916 10:38:58.990378   22121 main.go:141] libmachine: (ha-244475-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0916 10:38:58.990388   22121 main.go:141] libmachine: (ha-244475-m02) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851/.minikube/machines (perms=drwxr-xr-x)
	I0916 10:38:58.990411   22121 main.go:141] libmachine: (ha-244475-m02) DBG | Checking permissions on dir: /home/jenkins
	I0916 10:38:58.990419   22121 main.go:141] libmachine: (ha-244475-m02) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851/.minikube (perms=drwxr-xr-x)
	I0916 10:38:58.990427   22121 main.go:141] libmachine: (ha-244475-m02) DBG | Checking permissions on dir: /home
	I0916 10:38:58.990435   22121 main.go:141] libmachine: (ha-244475-m02) DBG | Skipping /home - not owner
	I0916 10:38:58.990446   22121 main.go:141] libmachine: (ha-244475-m02) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851 (perms=drwxrwxr-x)
	I0916 10:38:58.990454   22121 main.go:141] libmachine: (ha-244475-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0916 10:38:58.990465   22121 main.go:141] libmachine: (ha-244475-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0916 10:38:58.990475   22121 main.go:141] libmachine: (ha-244475-m02) Creating domain...
	I0916 10:38:58.991326   22121 main.go:141] libmachine: (ha-244475-m02) define libvirt domain using xml: 
	I0916 10:38:58.991351   22121 main.go:141] libmachine: (ha-244475-m02) <domain type='kvm'>
	I0916 10:38:58.991380   22121 main.go:141] libmachine: (ha-244475-m02)   <name>ha-244475-m02</name>
	I0916 10:38:58.991401   22121 main.go:141] libmachine: (ha-244475-m02)   <memory unit='MiB'>2200</memory>
	I0916 10:38:58.991408   22121 main.go:141] libmachine: (ha-244475-m02)   <vcpu>2</vcpu>
	I0916 10:38:58.991417   22121 main.go:141] libmachine: (ha-244475-m02)   <features>
	I0916 10:38:58.991441   22121 main.go:141] libmachine: (ha-244475-m02)     <acpi/>
	I0916 10:38:58.991459   22121 main.go:141] libmachine: (ha-244475-m02)     <apic/>
	I0916 10:38:58.991465   22121 main.go:141] libmachine: (ha-244475-m02)     <pae/>
	I0916 10:38:58.991472   22121 main.go:141] libmachine: (ha-244475-m02)     
	I0916 10:38:58.991477   22121 main.go:141] libmachine: (ha-244475-m02)   </features>
	I0916 10:38:58.991482   22121 main.go:141] libmachine: (ha-244475-m02)   <cpu mode='host-passthrough'>
	I0916 10:38:58.991489   22121 main.go:141] libmachine: (ha-244475-m02)   
	I0916 10:38:58.991504   22121 main.go:141] libmachine: (ha-244475-m02)   </cpu>
	I0916 10:38:58.991512   22121 main.go:141] libmachine: (ha-244475-m02)   <os>
	I0916 10:38:58.991516   22121 main.go:141] libmachine: (ha-244475-m02)     <type>hvm</type>
	I0916 10:38:58.991523   22121 main.go:141] libmachine: (ha-244475-m02)     <boot dev='cdrom'/>
	I0916 10:38:58.991528   22121 main.go:141] libmachine: (ha-244475-m02)     <boot dev='hd'/>
	I0916 10:38:58.991535   22121 main.go:141] libmachine: (ha-244475-m02)     <bootmenu enable='no'/>
	I0916 10:38:58.991546   22121 main.go:141] libmachine: (ha-244475-m02)   </os>
	I0916 10:38:58.991554   22121 main.go:141] libmachine: (ha-244475-m02)   <devices>
	I0916 10:38:58.991559   22121 main.go:141] libmachine: (ha-244475-m02)     <disk type='file' device='cdrom'>
	I0916 10:38:58.991569   22121 main.go:141] libmachine: (ha-244475-m02)       <source file='/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m02/boot2docker.iso'/>
	I0916 10:38:58.991574   22121 main.go:141] libmachine: (ha-244475-m02)       <target dev='hdc' bus='scsi'/>
	I0916 10:38:58.991581   22121 main.go:141] libmachine: (ha-244475-m02)       <readonly/>
	I0916 10:38:58.991585   22121 main.go:141] libmachine: (ha-244475-m02)     </disk>
	I0916 10:38:58.991590   22121 main.go:141] libmachine: (ha-244475-m02)     <disk type='file' device='disk'>
	I0916 10:38:58.991596   22121 main.go:141] libmachine: (ha-244475-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0916 10:38:58.991603   22121 main.go:141] libmachine: (ha-244475-m02)       <source file='/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m02/ha-244475-m02.rawdisk'/>
	I0916 10:38:58.991611   22121 main.go:141] libmachine: (ha-244475-m02)       <target dev='hda' bus='virtio'/>
	I0916 10:38:58.991615   22121 main.go:141] libmachine: (ha-244475-m02)     </disk>
	I0916 10:38:58.991620   22121 main.go:141] libmachine: (ha-244475-m02)     <interface type='network'>
	I0916 10:38:58.991625   22121 main.go:141] libmachine: (ha-244475-m02)       <source network='mk-ha-244475'/>
	I0916 10:38:58.991630   22121 main.go:141] libmachine: (ha-244475-m02)       <model type='virtio'/>
	I0916 10:38:58.991637   22121 main.go:141] libmachine: (ha-244475-m02)     </interface>
	I0916 10:38:58.991643   22121 main.go:141] libmachine: (ha-244475-m02)     <interface type='network'>
	I0916 10:38:58.991649   22121 main.go:141] libmachine: (ha-244475-m02)       <source network='default'/>
	I0916 10:38:58.991655   22121 main.go:141] libmachine: (ha-244475-m02)       <model type='virtio'/>
	I0916 10:38:58.991658   22121 main.go:141] libmachine: (ha-244475-m02)     </interface>
	I0916 10:38:58.991663   22121 main.go:141] libmachine: (ha-244475-m02)     <serial type='pty'>
	I0916 10:38:58.991667   22121 main.go:141] libmachine: (ha-244475-m02)       <target port='0'/>
	I0916 10:38:58.991672   22121 main.go:141] libmachine: (ha-244475-m02)     </serial>
	I0916 10:38:58.991681   22121 main.go:141] libmachine: (ha-244475-m02)     <console type='pty'>
	I0916 10:38:58.991692   22121 main.go:141] libmachine: (ha-244475-m02)       <target type='serial' port='0'/>
	I0916 10:38:58.991703   22121 main.go:141] libmachine: (ha-244475-m02)     </console>
	I0916 10:38:58.991728   22121 main.go:141] libmachine: (ha-244475-m02)     <rng model='virtio'>
	I0916 10:38:58.991756   22121 main.go:141] libmachine: (ha-244475-m02)       <backend model='random'>/dev/random</backend>
	I0916 10:38:58.991766   22121 main.go:141] libmachine: (ha-244475-m02)     </rng>
	I0916 10:38:58.991772   22121 main.go:141] libmachine: (ha-244475-m02)     
	I0916 10:38:58.991779   22121 main.go:141] libmachine: (ha-244475-m02)     
	I0916 10:38:58.991792   22121 main.go:141] libmachine: (ha-244475-m02)   </devices>
	I0916 10:38:58.991801   22121 main.go:141] libmachine: (ha-244475-m02) </domain>
	I0916 10:38:58.991810   22121 main.go:141] libmachine: (ha-244475-m02) 
	I0916 10:38:58.998246   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:b1:66:ac in network default
	I0916 10:38:58.998886   22121 main.go:141] libmachine: (ha-244475-m02) Ensuring networks are active...
	I0916 10:38:58.998906   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:38:58.999650   22121 main.go:141] libmachine: (ha-244475-m02) Ensuring network default is active
	I0916 10:38:59.000011   22121 main.go:141] libmachine: (ha-244475-m02) Ensuring network mk-ha-244475 is active
	I0916 10:38:59.000423   22121 main.go:141] libmachine: (ha-244475-m02) Getting domain xml...
	I0916 10:38:59.001200   22121 main.go:141] libmachine: (ha-244475-m02) Creating domain...
	I0916 10:39:00.217897   22121 main.go:141] libmachine: (ha-244475-m02) Waiting to get IP...
	I0916 10:39:00.218668   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:00.219076   22121 main.go:141] libmachine: (ha-244475-m02) DBG | unable to find current IP address of domain ha-244475-m02 in network mk-ha-244475
	I0916 10:39:00.219122   22121 main.go:141] libmachine: (ha-244475-m02) DBG | I0916 10:39:00.219065   22483 retry.go:31] will retry after 199.814892ms: waiting for machine to come up
	I0916 10:39:00.420559   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:00.421001   22121 main.go:141] libmachine: (ha-244475-m02) DBG | unable to find current IP address of domain ha-244475-m02 in network mk-ha-244475
	I0916 10:39:00.421022   22121 main.go:141] libmachine: (ha-244475-m02) DBG | I0916 10:39:00.420966   22483 retry.go:31] will retry after 240.671684ms: waiting for machine to come up
	I0916 10:39:00.663384   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:00.663824   22121 main.go:141] libmachine: (ha-244475-m02) DBG | unable to find current IP address of domain ha-244475-m02 in network mk-ha-244475
	I0916 10:39:00.663846   22121 main.go:141] libmachine: (ha-244475-m02) DBG | I0916 10:39:00.663767   22483 retry.go:31] will retry after 337.97981ms: waiting for machine to come up
	I0916 10:39:01.003494   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:01.003942   22121 main.go:141] libmachine: (ha-244475-m02) DBG | unable to find current IP address of domain ha-244475-m02 in network mk-ha-244475
	I0916 10:39:01.003971   22121 main.go:141] libmachine: (ha-244475-m02) DBG | I0916 10:39:01.003897   22483 retry.go:31] will retry after 519.568797ms: waiting for machine to come up
	I0916 10:39:01.524619   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:01.525114   22121 main.go:141] libmachine: (ha-244475-m02) DBG | unable to find current IP address of domain ha-244475-m02 in network mk-ha-244475
	I0916 10:39:01.525169   22121 main.go:141] libmachine: (ha-244475-m02) DBG | I0916 10:39:01.525043   22483 retry.go:31] will retry after 742.703365ms: waiting for machine to come up
	I0916 10:39:02.268894   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:02.269275   22121 main.go:141] libmachine: (ha-244475-m02) DBG | unable to find current IP address of domain ha-244475-m02 in network mk-ha-244475
	I0916 10:39:02.269302   22121 main.go:141] libmachine: (ha-244475-m02) DBG | I0916 10:39:02.269246   22483 retry.go:31] will retry after 918.427714ms: waiting for machine to come up
	I0916 10:39:03.189424   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:03.189835   22121 main.go:141] libmachine: (ha-244475-m02) DBG | unable to find current IP address of domain ha-244475-m02 in network mk-ha-244475
	I0916 10:39:03.189858   22121 main.go:141] libmachine: (ha-244475-m02) DBG | I0916 10:39:03.189810   22483 retry.go:31] will retry after 1.026136416s: waiting for machine to come up
	I0916 10:39:04.217246   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:04.217734   22121 main.go:141] libmachine: (ha-244475-m02) DBG | unable to find current IP address of domain ha-244475-m02 in network mk-ha-244475
	I0916 10:39:04.217759   22121 main.go:141] libmachine: (ha-244475-m02) DBG | I0916 10:39:04.217669   22483 retry.go:31] will retry after 1.280806759s: waiting for machine to come up
	I0916 10:39:05.500057   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:05.500485   22121 main.go:141] libmachine: (ha-244475-m02) DBG | unable to find current IP address of domain ha-244475-m02 in network mk-ha-244475
	I0916 10:39:05.500513   22121 main.go:141] libmachine: (ha-244475-m02) DBG | I0916 10:39:05.500426   22483 retry.go:31] will retry after 1.764059222s: waiting for machine to come up
	I0916 10:39:07.266224   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:07.266648   22121 main.go:141] libmachine: (ha-244475-m02) DBG | unable to find current IP address of domain ha-244475-m02 in network mk-ha-244475
	I0916 10:39:07.266668   22121 main.go:141] libmachine: (ha-244475-m02) DBG | I0916 10:39:07.266605   22483 retry.go:31] will retry after 1.834210088s: waiting for machine to come up
	I0916 10:39:09.102726   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:09.103221   22121 main.go:141] libmachine: (ha-244475-m02) DBG | unable to find current IP address of domain ha-244475-m02 in network mk-ha-244475
	I0916 10:39:09.103251   22121 main.go:141] libmachine: (ha-244475-m02) DBG | I0916 10:39:09.103165   22483 retry.go:31] will retry after 2.739410036s: waiting for machine to come up
	I0916 10:39:11.846017   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:11.846530   22121 main.go:141] libmachine: (ha-244475-m02) DBG | unable to find current IP address of domain ha-244475-m02 in network mk-ha-244475
	I0916 10:39:11.846564   22121 main.go:141] libmachine: (ha-244475-m02) DBG | I0916 10:39:11.846474   22483 retry.go:31] will retry after 2.779311539s: waiting for machine to come up
	I0916 10:39:14.627940   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:14.628351   22121 main.go:141] libmachine: (ha-244475-m02) DBG | unable to find current IP address of domain ha-244475-m02 in network mk-ha-244475
	I0916 10:39:14.628379   22121 main.go:141] libmachine: (ha-244475-m02) DBG | I0916 10:39:14.628315   22483 retry.go:31] will retry after 2.793801544s: waiting for machine to come up
	I0916 10:39:17.425154   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:17.425563   22121 main.go:141] libmachine: (ha-244475-m02) DBG | unable to find current IP address of domain ha-244475-m02 in network mk-ha-244475
	I0916 10:39:17.425580   22121 main.go:141] libmachine: (ha-244475-m02) DBG | I0916 10:39:17.425530   22483 retry.go:31] will retry after 3.470690334s: waiting for machine to come up
	I0916 10:39:20.899627   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:20.900073   22121 main.go:141] libmachine: (ha-244475-m02) Found IP for machine: 192.168.39.222
	I0916 10:39:20.900093   22121 main.go:141] libmachine: (ha-244475-m02) Reserving static IP address...
	I0916 10:39:20.900106   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has current primary IP address 192.168.39.222 and MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:20.900473   22121 main.go:141] libmachine: (ha-244475-m02) DBG | unable to find host DHCP lease matching {name: "ha-244475-m02", mac: "52:54:00:ed:fc:95", ip: "192.168.39.222"} in network mk-ha-244475
	I0916 10:39:20.972758   22121 main.go:141] libmachine: (ha-244475-m02) DBG | Getting to WaitForSSH function...
	I0916 10:39:20.972786   22121 main.go:141] libmachine: (ha-244475-m02) Reserved static IP address: 192.168.39.222
	I0916 10:39:20.972795   22121 main.go:141] libmachine: (ha-244475-m02) Waiting for SSH to be available...
	I0916 10:39:20.975117   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:20.975582   22121 main.go:141] libmachine: (ha-244475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:fc:95", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:39:13 +0000 UTC Type:0 Mac:52:54:00:ed:fc:95 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ed:fc:95}
	I0916 10:39:20.975610   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:20.975773   22121 main.go:141] libmachine: (ha-244475-m02) DBG | Using SSH client type: external
	I0916 10:39:20.975792   22121 main.go:141] libmachine: (ha-244475-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m02/id_rsa (-rw-------)
	I0916 10:39:20.975827   22121 main.go:141] libmachine: (ha-244475-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.222 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0916 10:39:20.975839   22121 main.go:141] libmachine: (ha-244475-m02) DBG | About to run SSH command:
	I0916 10:39:20.975859   22121 main.go:141] libmachine: (ha-244475-m02) DBG | exit 0
	I0916 10:39:21.101388   22121 main.go:141] libmachine: (ha-244475-m02) DBG | SSH cmd err, output: <nil>: 
	I0916 10:39:21.101625   22121 main.go:141] libmachine: (ha-244475-m02) KVM machine creation complete!
	I0916 10:39:21.101972   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetConfigRaw
	I0916 10:39:21.102551   22121 main.go:141] libmachine: (ha-244475-m02) Calling .DriverName
	I0916 10:39:21.102707   22121 main.go:141] libmachine: (ha-244475-m02) Calling .DriverName
	I0916 10:39:21.102833   22121 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0916 10:39:21.102843   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetState
	I0916 10:39:21.103989   22121 main.go:141] libmachine: Detecting operating system of created instance...
	I0916 10:39:21.104000   22121 main.go:141] libmachine: Waiting for SSH to be available...
	I0916 10:39:21.104005   22121 main.go:141] libmachine: Getting to WaitForSSH function...
	I0916 10:39:21.104010   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHHostname
	I0916 10:39:21.106164   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:21.106508   22121 main.go:141] libmachine: (ha-244475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:fc:95", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:39:13 +0000 UTC Type:0 Mac:52:54:00:ed:fc:95 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-244475-m02 Clientid:01:52:54:00:ed:fc:95}
	I0916 10:39:21.106551   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:21.106712   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHPort
	I0916 10:39:21.106893   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHKeyPath
	I0916 10:39:21.107044   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHKeyPath
	I0916 10:39:21.107170   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHUsername
	I0916 10:39:21.107317   22121 main.go:141] libmachine: Using SSH client type: native
	I0916 10:39:21.107566   22121 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0916 10:39:21.107579   22121 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0916 10:39:21.208324   22121 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 10:39:21.208347   22121 main.go:141] libmachine: Detecting the provisioner...
	I0916 10:39:21.208354   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHHostname
	I0916 10:39:21.211146   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:21.211537   22121 main.go:141] libmachine: (ha-244475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:fc:95", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:39:13 +0000 UTC Type:0 Mac:52:54:00:ed:fc:95 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-244475-m02 Clientid:01:52:54:00:ed:fc:95}
	I0916 10:39:21.211559   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:21.211725   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHPort
	I0916 10:39:21.211895   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHKeyPath
	I0916 10:39:21.212034   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHKeyPath
	I0916 10:39:21.212154   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHUsername
	I0916 10:39:21.212326   22121 main.go:141] libmachine: Using SSH client type: native
	I0916 10:39:21.212516   22121 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0916 10:39:21.212530   22121 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0916 10:39:21.313838   22121 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0916 10:39:21.313941   22121 main.go:141] libmachine: found compatible host: buildroot
	I0916 10:39:21.313956   22121 main.go:141] libmachine: Provisioning with buildroot...
	I0916 10:39:21.313968   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetMachineName
	I0916 10:39:21.314202   22121 buildroot.go:166] provisioning hostname "ha-244475-m02"
	I0916 10:39:21.314225   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetMachineName
	I0916 10:39:21.314348   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHHostname
	I0916 10:39:21.316988   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:21.317383   22121 main.go:141] libmachine: (ha-244475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:fc:95", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:39:13 +0000 UTC Type:0 Mac:52:54:00:ed:fc:95 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-244475-m02 Clientid:01:52:54:00:ed:fc:95}
	I0916 10:39:21.317407   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:21.317573   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHPort
	I0916 10:39:21.317722   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHKeyPath
	I0916 10:39:21.317830   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHKeyPath
	I0916 10:39:21.317925   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHUsername
	I0916 10:39:21.318068   22121 main.go:141] libmachine: Using SSH client type: native
	I0916 10:39:21.318243   22121 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0916 10:39:21.318255   22121 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-244475-m02 && echo "ha-244475-m02" | sudo tee /etc/hostname
	I0916 10:39:21.435511   22121 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-244475-m02
	
	I0916 10:39:21.435550   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHHostname
	I0916 10:39:21.438718   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:21.439163   22121 main.go:141] libmachine: (ha-244475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:fc:95", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:39:13 +0000 UTC Type:0 Mac:52:54:00:ed:fc:95 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-244475-m02 Clientid:01:52:54:00:ed:fc:95}
	I0916 10:39:21.439205   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:21.439382   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHPort
	I0916 10:39:21.439582   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHKeyPath
	I0916 10:39:21.439737   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHKeyPath
	I0916 10:39:21.439947   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHUsername
	I0916 10:39:21.440129   22121 main.go:141] libmachine: Using SSH client type: native
	I0916 10:39:21.440341   22121 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0916 10:39:21.440367   22121 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-244475-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-244475-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-244475-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:39:21.550458   22121 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 10:39:21.550490   22121 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3851/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3851/.minikube}
	I0916 10:39:21.550529   22121 buildroot.go:174] setting up certificates
	I0916 10:39:21.550538   22121 provision.go:84] configureAuth start
	I0916 10:39:21.550547   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetMachineName
	I0916 10:39:21.550825   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetIP
	I0916 10:39:21.553187   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:21.553518   22121 main.go:141] libmachine: (ha-244475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:fc:95", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:39:13 +0000 UTC Type:0 Mac:52:54:00:ed:fc:95 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-244475-m02 Clientid:01:52:54:00:ed:fc:95}
	I0916 10:39:21.553543   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:21.553719   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHHostname
	I0916 10:39:21.555867   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:21.556227   22121 main.go:141] libmachine: (ha-244475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:fc:95", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:39:13 +0000 UTC Type:0 Mac:52:54:00:ed:fc:95 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-244475-m02 Clientid:01:52:54:00:ed:fc:95}
	I0916 10:39:21.556254   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:21.556377   22121 provision.go:143] copyHostCerts
	I0916 10:39:21.556404   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem
	I0916 10:39:21.556435   22121 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem, removing ...
	I0916 10:39:21.556445   22121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem
	I0916 10:39:21.556501   22121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem (1082 bytes)
	I0916 10:39:21.557003   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem
	I0916 10:39:21.557062   22121 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem, removing ...
	I0916 10:39:21.557069   22121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem
	I0916 10:39:21.557114   22121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem (1123 bytes)
	I0916 10:39:21.557194   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem
	I0916 10:39:21.557215   22121 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem, removing ...
	I0916 10:39:21.557221   22121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem
	I0916 10:39:21.557251   22121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem (1679 bytes)
	I0916 10:39:21.557313   22121 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem org=jenkins.ha-244475-m02 san=[127.0.0.1 192.168.39.222 ha-244475-m02 localhost minikube]
	I0916 10:39:21.676307   22121 provision.go:177] copyRemoteCerts
	I0916 10:39:21.676359   22121 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:39:21.676383   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHHostname
	I0916 10:39:21.679208   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:21.679543   22121 main.go:141] libmachine: (ha-244475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:fc:95", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:39:13 +0000 UTC Type:0 Mac:52:54:00:ed:fc:95 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-244475-m02 Clientid:01:52:54:00:ed:fc:95}
	I0916 10:39:21.679570   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:21.679736   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHPort
	I0916 10:39:21.679929   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHKeyPath
	I0916 10:39:21.680073   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHUsername
	I0916 10:39:21.680198   22121 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m02/id_rsa Username:docker}
	I0916 10:39:21.759911   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 10:39:21.759973   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 10:39:21.784754   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 10:39:21.784831   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 10:39:21.808848   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 10:39:21.808934   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 10:39:21.832713   22121 provision.go:87] duration metric: took 282.161069ms to configureAuth
	I0916 10:39:21.832745   22121 buildroot.go:189] setting minikube options for container-runtime
	I0916 10:39:21.832966   22121 config.go:182] Loaded profile config "ha-244475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:39:21.833035   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHHostname
	I0916 10:39:21.835844   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:21.836194   22121 main.go:141] libmachine: (ha-244475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:fc:95", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:39:13 +0000 UTC Type:0 Mac:52:54:00:ed:fc:95 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-244475-m02 Clientid:01:52:54:00:ed:fc:95}
	I0916 10:39:21.836220   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:21.836405   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHPort
	I0916 10:39:21.836587   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHKeyPath
	I0916 10:39:21.836747   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHKeyPath
	I0916 10:39:21.836869   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHUsername
	I0916 10:39:21.836973   22121 main.go:141] libmachine: Using SSH client type: native
	I0916 10:39:21.837163   22121 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0916 10:39:21.837187   22121 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 10:39:22.055982   22121 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 10:39:22.056004   22121 main.go:141] libmachine: Checking connection to Docker...
	I0916 10:39:22.056012   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetURL
	I0916 10:39:22.057317   22121 main.go:141] libmachine: (ha-244475-m02) DBG | Using libvirt version 6000000
	I0916 10:39:22.059932   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:22.060270   22121 main.go:141] libmachine: (ha-244475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:fc:95", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:39:13 +0000 UTC Type:0 Mac:52:54:00:ed:fc:95 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-244475-m02 Clientid:01:52:54:00:ed:fc:95}
	I0916 10:39:22.060291   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:22.060472   22121 main.go:141] libmachine: Docker is up and running!
	I0916 10:39:22.060481   22121 main.go:141] libmachine: Reticulating splines...
	I0916 10:39:22.060487   22121 client.go:171] duration metric: took 23.42006819s to LocalClient.Create
	I0916 10:39:22.060508   22121 start.go:167] duration metric: took 23.420129046s to libmachine.API.Create "ha-244475"
	I0916 10:39:22.060521   22121 start.go:293] postStartSetup for "ha-244475-m02" (driver="kvm2")
	I0916 10:39:22.060537   22121 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:39:22.060553   22121 main.go:141] libmachine: (ha-244475-m02) Calling .DriverName
	I0916 10:39:22.060804   22121 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:39:22.060831   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHHostname
	I0916 10:39:22.062903   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:22.063181   22121 main.go:141] libmachine: (ha-244475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:fc:95", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:39:13 +0000 UTC Type:0 Mac:52:54:00:ed:fc:95 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-244475-m02 Clientid:01:52:54:00:ed:fc:95}
	I0916 10:39:22.063208   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:22.063341   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHPort
	I0916 10:39:22.063491   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHKeyPath
	I0916 10:39:22.063705   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHUsername
	I0916 10:39:22.063813   22121 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m02/id_rsa Username:docker}
	I0916 10:39:22.145615   22121 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:39:22.150644   22121 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 10:39:22.150671   22121 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3851/.minikube/addons for local assets ...
	I0916 10:39:22.150732   22121 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3851/.minikube/files for local assets ...
	I0916 10:39:22.150808   22121 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem -> 112032.pem in /etc/ssl/certs
	I0916 10:39:22.150817   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem -> /etc/ssl/certs/112032.pem
	I0916 10:39:22.150906   22121 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 10:39:22.162177   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem --> /etc/ssl/certs/112032.pem (1708 bytes)
	I0916 10:39:22.188876   22121 start.go:296] duration metric: took 128.339893ms for postStartSetup
	I0916 10:39:22.188928   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetConfigRaw
	I0916 10:39:22.189609   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetIP
	I0916 10:39:22.191896   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:22.192212   22121 main.go:141] libmachine: (ha-244475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:fc:95", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:39:13 +0000 UTC Type:0 Mac:52:54:00:ed:fc:95 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-244475-m02 Clientid:01:52:54:00:ed:fc:95}
	I0916 10:39:22.192246   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:22.192461   22121 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/config.json ...
	I0916 10:39:22.192662   22121 start.go:128] duration metric: took 23.571667259s to createHost
	I0916 10:39:22.192687   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHHostname
	I0916 10:39:22.194553   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:22.194806   22121 main.go:141] libmachine: (ha-244475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:fc:95", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:39:13 +0000 UTC Type:0 Mac:52:54:00:ed:fc:95 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-244475-m02 Clientid:01:52:54:00:ed:fc:95}
	I0916 10:39:22.194832   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:22.194956   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHPort
	I0916 10:39:22.195125   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHKeyPath
	I0916 10:39:22.195252   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHKeyPath
	I0916 10:39:22.195352   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHUsername
	I0916 10:39:22.195512   22121 main.go:141] libmachine: Using SSH client type: native
	I0916 10:39:22.195697   22121 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0916 10:39:22.195714   22121 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 10:39:22.298260   22121 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726483162.257238661
	
	I0916 10:39:22.298294   22121 fix.go:216] guest clock: 1726483162.257238661
	I0916 10:39:22.298303   22121 fix.go:229] Guest: 2024-09-16 10:39:22.257238661 +0000 UTC Remote: 2024-09-16 10:39:22.192675095 +0000 UTC m=+70.025440848 (delta=64.563566ms)
	I0916 10:39:22.298325   22121 fix.go:200] guest clock delta is within tolerance: 64.563566ms
	I0916 10:39:22.298332   22121 start.go:83] releasing machines lock for "ha-244475-m02", held for 23.677456654s
	I0916 10:39:22.298361   22121 main.go:141] libmachine: (ha-244475-m02) Calling .DriverName
	I0916 10:39:22.298605   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetIP
	I0916 10:39:22.301224   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:22.301602   22121 main.go:141] libmachine: (ha-244475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:fc:95", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:39:13 +0000 UTC Type:0 Mac:52:54:00:ed:fc:95 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-244475-m02 Clientid:01:52:54:00:ed:fc:95}
	I0916 10:39:22.301623   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:22.303467   22121 out.go:177] * Found network options:
	I0916 10:39:22.304869   22121 out.go:177]   - NO_PROXY=192.168.39.19
	W0916 10:39:22.306210   22121 proxy.go:119] fail to check proxy env: Error ip not in block
	I0916 10:39:22.306239   22121 main.go:141] libmachine: (ha-244475-m02) Calling .DriverName
	I0916 10:39:22.306761   22121 main.go:141] libmachine: (ha-244475-m02) Calling .DriverName
	I0916 10:39:22.306940   22121 main.go:141] libmachine: (ha-244475-m02) Calling .DriverName
	I0916 10:39:22.307022   22121 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:39:22.307050   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHHostname
	W0916 10:39:22.307076   22121 proxy.go:119] fail to check proxy env: Error ip not in block
	I0916 10:39:22.307148   22121 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 10:39:22.307170   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHHostname
	I0916 10:39:22.309796   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:22.309995   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:22.310175   22121 main.go:141] libmachine: (ha-244475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:fc:95", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:39:13 +0000 UTC Type:0 Mac:52:54:00:ed:fc:95 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-244475-m02 Clientid:01:52:54:00:ed:fc:95}
	I0916 10:39:22.310201   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:22.310319   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHPort
	I0916 10:39:22.310427   22121 main.go:141] libmachine: (ha-244475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:fc:95", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:39:13 +0000 UTC Type:0 Mac:52:54:00:ed:fc:95 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-244475-m02 Clientid:01:52:54:00:ed:fc:95}
	I0916 10:39:22.310453   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:22.310476   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHKeyPath
	I0916 10:39:22.310594   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHPort
	I0916 10:39:22.310660   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHUsername
	I0916 10:39:22.310713   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHKeyPath
	I0916 10:39:22.310788   22121 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m02/id_rsa Username:docker}
	I0916 10:39:22.310823   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHUsername
	I0916 10:39:22.310950   22121 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m02/id_rsa Username:docker}
	I0916 10:39:22.543814   22121 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0916 10:39:22.550133   22121 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 10:39:22.550202   22121 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:39:22.567275   22121 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0916 10:39:22.567305   22121 start.go:495] detecting cgroup driver to use...
	I0916 10:39:22.567376   22121 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 10:39:22.584656   22121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 10:39:22.599498   22121 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:39:22.599566   22121 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:39:22.614104   22121 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:39:22.628372   22121 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:39:22.744286   22121 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:39:22.898472   22121 docker.go:233] disabling docker service ...
	I0916 10:39:22.898553   22121 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:39:22.913618   22121 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:39:22.927202   22121 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:39:23.051522   22121 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:39:23.182181   22121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:39:23.204179   22121 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:39:23.225362   22121 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 10:39:23.225448   22121 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:39:23.237074   22121 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 10:39:23.237150   22121 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:39:23.247895   22121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:39:23.258393   22121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:39:23.269419   22121 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:39:23.279779   22121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:39:23.291172   22121 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:39:23.311053   22121 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:39:23.322116   22121 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:39:23.332200   22121 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0916 10:39:23.332250   22121 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0916 10:39:23.344994   22121 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:39:23.355782   22121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:39:23.481218   22121 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 10:39:23.579230   22121 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 10:39:23.579298   22121 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 10:39:23.584697   22121 start.go:563] Will wait 60s for crictl version
	I0916 10:39:23.584741   22121 ssh_runner.go:195] Run: which crictl
	I0916 10:39:23.588596   22121 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:39:23.641205   22121 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0916 10:39:23.641281   22121 ssh_runner.go:195] Run: crio --version
	I0916 10:39:23.671177   22121 ssh_runner.go:195] Run: crio --version
	I0916 10:39:23.702253   22121 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0916 10:39:23.703479   22121 out.go:177]   - env NO_PROXY=192.168.39.19
	I0916 10:39:23.704928   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetIP
	I0916 10:39:23.707459   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:23.707795   22121 main.go:141] libmachine: (ha-244475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:fc:95", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:39:13 +0000 UTC Type:0 Mac:52:54:00:ed:fc:95 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-244475-m02 Clientid:01:52:54:00:ed:fc:95}
	I0916 10:39:23.707824   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:23.708043   22121 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0916 10:39:23.712363   22121 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:39:23.725265   22121 mustload.go:65] Loading cluster: ha-244475
	I0916 10:39:23.725441   22121 config.go:182] Loaded profile config "ha-244475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:39:23.725687   22121 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:39:23.725721   22121 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:39:23.740417   22121 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36447
	I0916 10:39:23.740990   22121 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:39:23.741466   22121 main.go:141] libmachine: Using API Version  1
	I0916 10:39:23.741488   22121 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:39:23.741810   22121 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:39:23.742008   22121 main.go:141] libmachine: (ha-244475) Calling .GetState
	I0916 10:39:23.743510   22121 host.go:66] Checking if "ha-244475" exists ...
	I0916 10:39:23.743856   22121 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:39:23.743896   22121 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:39:23.759264   22121 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45685
	I0916 10:39:23.759649   22121 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:39:23.760026   22121 main.go:141] libmachine: Using API Version  1
	I0916 10:39:23.760042   22121 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:39:23.760318   22121 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:39:23.760486   22121 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:39:23.760651   22121 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475 for IP: 192.168.39.222
	I0916 10:39:23.760665   22121 certs.go:194] generating shared ca certs ...
	I0916 10:39:23.760682   22121 certs.go:226] acquiring lock for ca certs: {Name:mkc5eb18b90c7d501da61b378999fd65b785fd5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:39:23.760796   22121 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key
	I0916 10:39:23.760834   22121 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key
	I0916 10:39:23.760847   22121 certs.go:256] generating profile certs ...
	I0916 10:39:23.760915   22121 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/client.key
	I0916 10:39:23.760938   22121 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key.2ecb3d3a
	I0916 10:39:23.760949   22121 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt.2ecb3d3a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.19 192.168.39.222 192.168.39.254]
	I0916 10:39:23.971738   22121 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt.2ecb3d3a ...
	I0916 10:39:23.971765   22121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt.2ecb3d3a: {Name:mk37a27280aa796084417d4aec0944fb7177392b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:39:23.971967   22121 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key.2ecb3d3a ...
	I0916 10:39:23.971985   22121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key.2ecb3d3a: {Name:mkb5d769612983e338b6def0cc291fa133a3ff90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:39:23.972081   22121 certs.go:381] copying /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt.2ecb3d3a -> /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt
	I0916 10:39:23.972210   22121 certs.go:385] copying /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key.2ecb3d3a -> /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key
	I0916 10:39:23.972334   22121 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.key
	I0916 10:39:23.972348   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 10:39:23.972360   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 10:39:23.972373   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:39:23.972388   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:39:23.972400   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 10:39:23.972412   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 10:39:23.972424   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 10:39:23.972437   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 10:39:23.972477   22121 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem (1338 bytes)
	W0916 10:39:23.972504   22121 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203_empty.pem, impossibly tiny 0 bytes
	I0916 10:39:23.972513   22121 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:39:23.972536   22121 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem (1082 bytes)
	I0916 10:39:23.972556   22121 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:39:23.972577   22121 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem (1679 bytes)
	I0916 10:39:23.972612   22121 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem (1708 bytes)
	I0916 10:39:23.972638   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem -> /usr/share/ca-certificates/11203.pem
	I0916 10:39:23.972651   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem -> /usr/share/ca-certificates/112032.pem
	I0916 10:39:23.972663   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:39:23.972694   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:39:23.975828   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:39:23.976221   22121 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:39:23.976248   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:39:23.976413   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:39:23.976620   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:39:23.976774   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:39:23.976882   22121 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/id_rsa Username:docker}
	I0916 10:39:24.053497   22121 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0916 10:39:24.058424   22121 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0916 10:39:24.070223   22121 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0916 10:39:24.074933   22121 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0916 10:39:24.085348   22121 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0916 10:39:24.089709   22121 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0916 10:39:24.102091   22121 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0916 10:39:24.106076   22121 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0916 10:39:24.123270   22121 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0916 10:39:24.127635   22121 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0916 10:39:24.138409   22121 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0916 10:39:24.142528   22121 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0916 10:39:24.158176   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:39:24.183770   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:39:24.210708   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:39:24.237895   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:39:24.265068   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0916 10:39:24.289021   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 10:39:24.312480   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:39:24.336502   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 10:39:24.360309   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem --> /usr/share/ca-certificates/11203.pem (1338 bytes)
	I0916 10:39:24.383990   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem --> /usr/share/ca-certificates/112032.pem (1708 bytes)
	I0916 10:39:24.408205   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:39:24.432243   22121 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0916 10:39:24.449793   22121 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0916 10:39:24.467290   22121 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0916 10:39:24.484273   22121 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0916 10:39:24.501648   22121 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0916 10:39:24.519020   22121 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0916 10:39:24.535943   22121 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0916 10:39:24.552390   22121 ssh_runner.go:195] Run: openssl version
	I0916 10:39:24.558138   22121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11203.pem && ln -fs /usr/share/ca-certificates/11203.pem /etc/ssl/certs/11203.pem"
	I0916 10:39:24.568860   22121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11203.pem
	I0916 10:39:24.574154   22121 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11203.pem
	I0916 10:39:24.574204   22121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11203.pem
	I0916 10:39:24.580119   22121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11203.pem /etc/ssl/certs/51391683.0"
	I0916 10:39:24.592339   22121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112032.pem && ln -fs /usr/share/ca-certificates/112032.pem /etc/ssl/certs/112032.pem"
	I0916 10:39:24.604511   22121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112032.pem
	I0916 10:39:24.609097   22121 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112032.pem
	I0916 10:39:24.609171   22121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112032.pem
	I0916 10:39:24.615026   22121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112032.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 10:39:24.625768   22121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:39:24.636379   22121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:39:24.640871   22121 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:39:24.640920   22121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:39:24.646395   22121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 10:39:24.656801   22121 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:39:24.661571   22121 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 10:39:24.661615   22121 kubeadm.go:934] updating node {m02 192.168.39.222 8443 v1.31.1 crio true true} ...
	I0916 10:39:24.661689   22121 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-244475-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.222
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-244475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 10:39:24.661712   22121 kube-vip.go:115] generating kube-vip config ...
	I0916 10:39:24.661745   22121 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0916 10:39:24.679303   22121 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0916 10:39:24.679364   22121 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0916 10:39:24.679410   22121 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:39:24.689055   22121 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0916 10:39:24.689100   22121 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0916 10:39:24.698937   22121 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0916 10:39:24.698963   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0916 10:39:24.699025   22121 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0916 10:39:24.699054   22121 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19651-3851/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I0916 10:39:24.699062   22121 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19651-3851/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I0916 10:39:24.703600   22121 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0916 10:39:24.703633   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0916 10:39:25.360517   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0916 10:39:25.360604   22121 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0916 10:39:25.365737   22121 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0916 10:39:25.365769   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0916 10:39:25.520604   22121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:39:25.561216   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0916 10:39:25.561328   22121 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0916 10:39:25.578620   22121 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0916 10:39:25.578664   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0916 10:39:25.943225   22121 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0916 10:39:25.953425   22121 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0916 10:39:25.971005   22121 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:39:25.987923   22121 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0916 10:39:26.005037   22121 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0916 10:39:26.008989   22121 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:39:26.022651   22121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:39:26.139506   22121 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:39:26.156924   22121 host.go:66] Checking if "ha-244475" exists ...
	I0916 10:39:26.157320   22121 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:39:26.157358   22121 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:39:26.173843   22121 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41439
	I0916 10:39:26.174382   22121 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:39:26.174982   22121 main.go:141] libmachine: Using API Version  1
	I0916 10:39:26.175008   22121 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:39:26.175329   22121 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:39:26.175507   22121 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:39:26.175651   22121 start.go:317] joinCluster: &{Name:ha-244475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cluster
Name:ha-244475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:39:26.175759   22121 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0916 10:39:26.175773   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:39:26.178960   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:39:26.179415   22121 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:39:26.179439   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:39:26.179692   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:39:26.179878   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:39:26.180020   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:39:26.180170   22121 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/id_rsa Username:docker}
	I0916 10:39:26.331689   22121 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 10:39:26.331744   22121 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token yvzo4h.p3o4vz89426q0tzd --discovery-token-ca-cert-hash sha256:18e2e9ba1c06caae6264fad234e64aee68efc10986a3ab0ed9e448768962daf7 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-244475-m02 --control-plane --apiserver-advertise-address=192.168.39.222 --apiserver-bind-port=8443"
	I0916 10:39:46.581278   22121 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token yvzo4h.p3o4vz89426q0tzd --discovery-token-ca-cert-hash sha256:18e2e9ba1c06caae6264fad234e64aee68efc10986a3ab0ed9e448768962daf7 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-244475-m02 --control-plane --apiserver-advertise-address=192.168.39.222 --apiserver-bind-port=8443": (20.249509056s)
	I0916 10:39:46.581311   22121 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0916 10:39:47.185857   22121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-244475-m02 minikube.k8s.io/updated_at=2024_09_16T10_39_47_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=ha-244475 minikube.k8s.io/primary=false
	I0916 10:39:47.323615   22121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-244475-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0916 10:39:47.452689   22121 start.go:319] duration metric: took 21.277032539s to joinCluster
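The join itself is a short sequence: create a join token on the existing control plane, run kubeadm join on the new machine with --control-plane, restart the kubelet, then label and un-taint the node. The sketch below drives the same sequence with os/exec; the token and CA hash are placeholders, the kubectl invocations are simplified, and the real run issues each command over SSH rather than locally.

```go
package main

import (
    "fmt"
    "os/exec"
)

// run executes a shell command and prints its combined output, loosely
// mirroring how ssh_runner invokes each step in the log (here run locally,
// purely as an illustration).
func run(cmd string) error {
    out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    fmt.Printf("$ %s\n%s", cmd, out)
    return err
}

func main() {
    // Placeholders: a real run takes the token and CA hash from
    // "kubeadm token create --print-join-command" on the first node.
    join := "kubeadm join control-plane.minikube.internal:8443 " +
        "--token <token> --discovery-token-ca-cert-hash sha256:<hash> " +
        "--control-plane --apiserver-advertise-address=192.168.39.222 " +
        "--apiserver-bind-port=8443"
    steps := []string{
        join,
        "systemctl daemon-reload && systemctl enable kubelet && systemctl start kubelet",
        "kubectl label --overwrite nodes ha-244475-m02 minikube.k8s.io/primary=false",
        "kubectl taint nodes ha-244475-m02 node-role.kubernetes.io/control-plane:NoSchedule-",
    }
    for _, s := range steps {
        if err := run(s); err != nil {
            fmt.Println("step failed:", err)
            return
        }
    }
}
```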
	I0916 10:39:47.452767   22121 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 10:39:47.453074   22121 config.go:182] Loaded profile config "ha-244475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:39:47.454538   22121 out.go:177] * Verifying Kubernetes components...
	I0916 10:39:47.455883   22121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:39:47.719826   22121 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:39:47.771692   22121 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3851/kubeconfig
	I0916 10:39:47.771937   22121 kapi.go:59] client config for ha-244475: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0916 10:39:47.771997   22121 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.19:8443
	I0916 10:39:47.772181   22121 node_ready.go:35] waiting up to 6m0s for node "ha-244475-m02" to be "Ready" ...
	I0916 10:39:47.772291   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:39:47.772301   22121 round_trippers.go:469] Request Headers:
	I0916 10:39:47.772311   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:47.772317   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:47.784039   22121 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0916 10:39:48.272953   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:39:48.272972   22121 round_trippers.go:469] Request Headers:
	I0916 10:39:48.272981   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:48.272992   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:48.276331   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:39:48.772467   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:39:48.772487   22121 round_trippers.go:469] Request Headers:
	I0916 10:39:48.772495   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:48.772499   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:48.778807   22121 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0916 10:39:49.272650   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:39:49.272673   22121 round_trippers.go:469] Request Headers:
	I0916 10:39:49.272683   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:49.272688   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:49.277698   22121 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:39:49.773047   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:39:49.773069   22121 round_trippers.go:469] Request Headers:
	I0916 10:39:49.773079   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:49.773085   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:49.909815   22121 round_trippers.go:574] Response Status: 200 OK in 136 milliseconds
	I0916 10:39:49.910692   22121 node_ready.go:53] node "ha-244475-m02" has status "Ready":"False"
	I0916 10:39:50.272950   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:39:50.272972   22121 round_trippers.go:469] Request Headers:
	I0916 10:39:50.272982   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:50.272987   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:50.277990   22121 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:39:50.773159   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:39:50.773185   22121 round_trippers.go:469] Request Headers:
	I0916 10:39:50.773196   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:50.773202   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:50.777386   22121 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:39:51.273263   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:39:51.273286   22121 round_trippers.go:469] Request Headers:
	I0916 10:39:51.273294   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:51.273300   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:51.277667   22121 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:39:51.772471   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:39:51.772493   22121 round_trippers.go:469] Request Headers:
	I0916 10:39:51.772502   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:51.772508   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:51.775526   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:39:52.272463   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:39:52.272487   22121 round_trippers.go:469] Request Headers:
	I0916 10:39:52.272504   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:52.272510   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:52.276001   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:39:52.276862   22121 node_ready.go:53] node "ha-244475-m02" has status "Ready":"False"
	I0916 10:39:52.772568   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:39:52.772591   22121 round_trippers.go:469] Request Headers:
	I0916 10:39:52.772598   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:52.772603   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:52.775666   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:39:53.272574   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:39:53.272605   22121 round_trippers.go:469] Request Headers:
	I0916 10:39:53.272614   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:53.272617   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:53.275866   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:39:53.773034   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:39:53.773057   22121 round_trippers.go:469] Request Headers:
	I0916 10:39:53.773065   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:53.773069   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:53.910868   22121 round_trippers.go:574] Response Status: 200 OK in 137 milliseconds
	I0916 10:39:54.272908   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:39:54.272929   22121 round_trippers.go:469] Request Headers:
	I0916 10:39:54.272937   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:54.272940   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:54.276365   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:39:54.276998   22121 node_ready.go:53] node "ha-244475-m02" has status "Ready":"False"
	I0916 10:39:54.772373   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:39:54.772404   22121 round_trippers.go:469] Request Headers:
	I0916 10:39:54.772412   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:54.772415   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:54.775406   22121 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:55.272580   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:39:55.272602   22121 round_trippers.go:469] Request Headers:
	I0916 10:39:55.272610   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:55.272614   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:55.275678   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:39:55.772739   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:39:55.772762   22121 round_trippers.go:469] Request Headers:
	I0916 10:39:55.772769   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:55.772773   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:55.776656   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:39:56.273183   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:39:56.273204   22121 round_trippers.go:469] Request Headers:
	I0916 10:39:56.273211   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:56.273216   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:56.276356   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:39:56.773388   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:39:56.773413   22121 round_trippers.go:469] Request Headers:
	I0916 10:39:56.773426   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:56.773433   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:56.776782   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:39:56.777386   22121 node_ready.go:53] node "ha-244475-m02" has status "Ready":"False"
	I0916 10:39:57.272950   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:39:57.272972   22121 round_trippers.go:469] Request Headers:
	I0916 10:39:57.272979   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:57.272984   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:57.276364   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:39:57.773060   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:39:57.773081   22121 round_trippers.go:469] Request Headers:
	I0916 10:39:57.773088   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:57.773092   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:57.776229   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:39:58.273206   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:39:58.273236   22121 round_trippers.go:469] Request Headers:
	I0916 10:39:58.273248   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:58.273255   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:58.277169   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:39:58.773306   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:39:58.773325   22121 round_trippers.go:469] Request Headers:
	I0916 10:39:58.773333   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:58.773336   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:58.776530   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:39:59.272613   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:39:59.272637   22121 round_trippers.go:469] Request Headers:
	I0916 10:39:59.272647   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:59.272653   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:59.277029   22121 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:39:59.277431   22121 node_ready.go:53] node "ha-244475-m02" has status "Ready":"False"
	I0916 10:39:59.772793   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:39:59.772817   22121 round_trippers.go:469] Request Headers:
	I0916 10:39:59.772825   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:59.772829   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:59.776206   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:00.273273   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:40:00.273295   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:00.273308   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:00.273314   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:00.276740   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:00.772818   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:40:00.772841   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:00.772851   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:00.772857   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:00.776328   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:01.273273   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:40:01.273295   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:01.273304   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:01.273307   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:01.276670   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:01.772774   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:40:01.772805   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:01.772817   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:01.772824   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:01.777379   22121 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:40:01.777815   22121 node_ready.go:53] node "ha-244475-m02" has status "Ready":"False"
	I0916 10:40:02.273195   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:40:02.273218   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:02.273226   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:02.273231   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:02.276605   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:02.773027   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:40:02.773049   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:02.773057   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:02.773062   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:02.776120   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:03.273168   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:40:03.273191   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:03.273199   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:03.273206   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:03.276412   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:03.773044   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:40:03.773066   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:03.773074   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:03.773079   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:03.776511   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:04.272779   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:40:04.272803   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:04.272810   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:04.272814   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:04.276171   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:04.276879   22121 node_ready.go:53] node "ha-244475-m02" has status "Ready":"False"
	I0916 10:40:04.773259   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:40:04.773284   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:04.773291   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:04.773295   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:04.776687   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:05.272635   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:40:05.272667   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:05.272678   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:05.272687   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:05.275813   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:05.772434   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:40:05.772459   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:05.772469   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:05.772474   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:05.776455   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:05.777067   22121 node_ready.go:49] node "ha-244475-m02" has status "Ready":"True"
	I0916 10:40:05.777086   22121 node_ready.go:38] duration metric: took 18.004873295s for node "ha-244475-m02" to be "Ready" ...
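The repeated GETs above are a plain readiness poll: fetch the Node object every ~500ms and wait for its Ready condition to flip to True. A minimal Go sketch of that loop follows; it skips the client-certificate setup the real kapi client uses (InsecureSkipVerify stands in for it), so it is illustrative only and would not authenticate against a real apiserver.

```go
package main

import (
    "crypto/tls"
    "encoding/json"
    "fmt"
    "net/http"
    "time"
)

// nodeStatus is the small slice of the Node object the readiness check needs.
type nodeStatus struct {
    Status struct {
        Conditions []struct {
            Type   string `json:"type"`
            Status string `json:"status"`
        } `json:"conditions"`
    } `json:"status"`
}

// waitNodeReady polls GET /api/v1/nodes/<name> until the Ready condition
// reports "True" or the timeout expires, the same loop the log performs.
func waitNodeReady(apiServer, name string, timeout time.Duration) error {
    client := &http.Client{Transport: &http.Transport{
        TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // auth elided
    }}
    deadline := time.Now().Add(timeout)
    for time.Now().Before(deadline) {
        resp, err := client.Get(apiServer + "/api/v1/nodes/" + name)
        if err == nil {
            var n nodeStatus
            if json.NewDecoder(resp.Body).Decode(&n) == nil {
                for _, c := range n.Status.Conditions {
                    if c.Type == "Ready" && c.Status == "True" {
                        resp.Body.Close()
                        return nil
                    }
                }
            }
            resp.Body.Close()
        }
        time.Sleep(500 * time.Millisecond)
    }
    return fmt.Errorf("node %s not Ready within %s", name, timeout)
}

func main() {
    if err := waitNodeReady("https://192.168.39.19:8443", "ha-244475-m02", 6*time.Minute); err != nil {
        fmt.Println(err)
    }
}
```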
	I0916 10:40:05.777095   22121 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:40:05.777206   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I0916 10:40:05.777219   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:05.777229   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:05.777240   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:05.781640   22121 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:40:05.787776   22121 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-lzrg2" in "kube-system" namespace to be "Ready" ...
	I0916 10:40:05.787847   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-lzrg2
	I0916 10:40:05.787856   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:05.787863   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:05.787867   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:05.791078   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:05.791756   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475
	I0916 10:40:05.791771   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:05.791778   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:05.791784   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:05.794551   22121 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:40:05.795202   22121 pod_ready.go:93] pod "coredns-7c65d6cfc9-lzrg2" in "kube-system" namespace has status "Ready":"True"
	I0916 10:40:05.795218   22121 pod_ready.go:82] duration metric: took 7.419929ms for pod "coredns-7c65d6cfc9-lzrg2" in "kube-system" namespace to be "Ready" ...
	I0916 10:40:05.795226   22121 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-m8fd7" in "kube-system" namespace to be "Ready" ...
	I0916 10:40:05.795282   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-m8fd7
	I0916 10:40:05.795290   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:05.795297   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:05.795302   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:05.798095   22121 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:40:05.798774   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475
	I0916 10:40:05.798790   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:05.798797   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:05.798801   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:05.801421   22121 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:40:05.801924   22121 pod_ready.go:93] pod "coredns-7c65d6cfc9-m8fd7" in "kube-system" namespace has status "Ready":"True"
	I0916 10:40:05.801938   22121 pod_ready.go:82] duration metric: took 6.704952ms for pod "coredns-7c65d6cfc9-m8fd7" in "kube-system" namespace to be "Ready" ...
	I0916 10:40:05.801945   22121 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-244475" in "kube-system" namespace to be "Ready" ...
	I0916 10:40:05.801989   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/etcd-ha-244475
	I0916 10:40:05.801997   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:05.802004   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:05.802008   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:05.804181   22121 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:40:05.804710   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475
	I0916 10:40:05.804724   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:05.804730   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:05.804733   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:05.807387   22121 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:40:05.808293   22121 pod_ready.go:93] pod "etcd-ha-244475" in "kube-system" namespace has status "Ready":"True"
	I0916 10:40:05.808307   22121 pod_ready.go:82] duration metric: took 6.357107ms for pod "etcd-ha-244475" in "kube-system" namespace to be "Ready" ...
	I0916 10:40:05.808315   22121 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-244475-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:40:05.808358   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/etcd-ha-244475-m02
	I0916 10:40:05.808365   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:05.808372   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:05.808377   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:05.810955   22121 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:40:05.811488   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:40:05.811500   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:05.811508   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:05.811512   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:05.814011   22121 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:40:05.814463   22121 pod_ready.go:93] pod "etcd-ha-244475-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:40:05.814477   22121 pod_ready.go:82] duration metric: took 6.157572ms for pod "etcd-ha-244475-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:40:05.814489   22121 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-244475" in "kube-system" namespace to be "Ready" ...
	I0916 10:40:05.972835   22121 request.go:632] Waited for 158.29387ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-244475
	I0916 10:40:05.972902   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-244475
	I0916 10:40:05.972922   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:05.972933   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:05.972943   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:05.976765   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:06.172937   22121 request.go:632] Waited for 195.355279ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-244475
	I0916 10:40:06.172986   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475
	I0916 10:40:06.172992   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:06.172998   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:06.173002   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:06.177033   22121 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:40:06.177621   22121 pod_ready.go:93] pod "kube-apiserver-ha-244475" in "kube-system" namespace has status "Ready":"True"
	I0916 10:40:06.177640   22121 pod_ready.go:82] duration metric: took 363.14475ms for pod "kube-apiserver-ha-244475" in "kube-system" namespace to be "Ready" ...
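The "Waited for ... due to client-side throttling" lines are the Kubernetes client's own rate limiter spacing out the back-to-back pod and node GETs. The sketch below reproduces that pacing with a token bucket from golang.org/x/time/rate; 5 requests/second with a burst of 10 is assumed here as the usual client-go default, not a value taken from this log.

```go
package main

import (
    "context"
    "fmt"
    "time"

    "golang.org/x/time/rate"
)

func main() {
    // Assumed defaults: 5 QPS with a burst of 10. Once the burst is spent,
    // each Wait blocks ~200ms, matching the waits reported in the log.
    limiter := rate.NewLimiter(rate.Limit(5), 10)
    start := time.Now()
    for i := 0; i < 15; i++ {
        if err := limiter.Wait(context.Background()); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Printf("request %2d sent after %v\n", i, time.Since(start).Round(time.Millisecond))
    }
}
```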
	I0916 10:40:06.177648   22121 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-244475-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:40:06.373192   22121 request.go:632] Waited for 195.483207ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-244475-m02
	I0916 10:40:06.373244   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-244475-m02
	I0916 10:40:06.373249   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:06.373257   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:06.373261   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:06.377043   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:06.573053   22121 request.go:632] Waited for 195.35028ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:40:06.573108   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:40:06.573115   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:06.573136   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:06.573147   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:06.577118   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:06.577677   22121 pod_ready.go:93] pod "kube-apiserver-ha-244475-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:40:06.577694   22121 pod_ready.go:82] duration metric: took 400.039517ms for pod "kube-apiserver-ha-244475-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:40:06.577703   22121 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-244475" in "kube-system" namespace to be "Ready" ...
	I0916 10:40:06.772876   22121 request.go:632] Waited for 195.103028ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-244475
	I0916 10:40:06.772951   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-244475
	I0916 10:40:06.772956   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:06.772964   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:06.772969   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:06.776182   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:06.973323   22121 request.go:632] Waited for 196.373099ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-244475
	I0916 10:40:06.973376   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475
	I0916 10:40:06.973381   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:06.973387   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:06.973392   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:06.976489   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:06.977163   22121 pod_ready.go:93] pod "kube-controller-manager-ha-244475" in "kube-system" namespace has status "Ready":"True"
	I0916 10:40:06.977180   22121 pod_ready.go:82] duration metric: took 399.471495ms for pod "kube-controller-manager-ha-244475" in "kube-system" namespace to be "Ready" ...
	I0916 10:40:06.977190   22121 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-244475-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:40:07.173212   22121 request.go:632] Waited for 195.956208ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-244475-m02
	I0916 10:40:07.173293   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-244475-m02
	I0916 10:40:07.173301   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:07.173312   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:07.173319   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:07.177006   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:07.373012   22121 request.go:632] Waited for 195.452852ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:40:07.373136   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:40:07.373147   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:07.373157   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:07.373166   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:07.376520   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:07.376939   22121 pod_ready.go:93] pod "kube-controller-manager-ha-244475-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:40:07.376955   22121 pod_ready.go:82] duration metric: took 399.760125ms for pod "kube-controller-manager-ha-244475-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:40:07.376963   22121 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-crttt" in "kube-system" namespace to be "Ready" ...
	I0916 10:40:07.573324   22121 request.go:632] Waited for 196.271916ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-crttt
	I0916 10:40:07.573394   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-crttt
	I0916 10:40:07.573402   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:07.573413   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:07.573420   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:07.577193   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:07.773425   22121 request.go:632] Waited for 195.35678ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-244475
	I0916 10:40:07.773476   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475
	I0916 10:40:07.773482   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:07.773488   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:07.773492   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:07.776987   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:07.777804   22121 pod_ready.go:93] pod "kube-proxy-crttt" in "kube-system" namespace has status "Ready":"True"
	I0916 10:40:07.777823   22121 pod_ready.go:82] duration metric: took 400.853941ms for pod "kube-proxy-crttt" in "kube-system" namespace to be "Ready" ...
	I0916 10:40:07.777832   22121 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-t454b" in "kube-system" namespace to be "Ready" ...
	I0916 10:40:07.972928   22121 request.go:632] Waited for 195.015591ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-t454b
	I0916 10:40:07.972986   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-t454b
	I0916 10:40:07.972991   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:07.972998   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:07.973004   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:07.976127   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:08.173342   22121 request.go:632] Waited for 196.327773ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:40:08.173412   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:40:08.173420   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:08.173427   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:08.173433   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:08.177112   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:08.177778   22121 pod_ready.go:93] pod "kube-proxy-t454b" in "kube-system" namespace has status "Ready":"True"
	I0916 10:40:08.177799   22121 pod_ready.go:82] duration metric: took 399.960678ms for pod "kube-proxy-t454b" in "kube-system" namespace to be "Ready" ...
	I0916 10:40:08.177812   22121 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-244475" in "kube-system" namespace to be "Ready" ...
	I0916 10:40:08.372853   22121 request.go:632] Waited for 194.970978ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-244475
	I0916 10:40:08.372917   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-244475
	I0916 10:40:08.372922   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:08.372929   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:08.372936   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:08.375975   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:08.572928   22121 request.go:632] Waited for 196.373637ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-244475
	I0916 10:40:08.572977   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475
	I0916 10:40:08.572982   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:08.572989   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:08.572993   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:08.576124   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:08.576671   22121 pod_ready.go:93] pod "kube-scheduler-ha-244475" in "kube-system" namespace has status "Ready":"True"
	I0916 10:40:08.576689   22121 pod_ready.go:82] duration metric: took 398.869844ms for pod "kube-scheduler-ha-244475" in "kube-system" namespace to be "Ready" ...
	I0916 10:40:08.576697   22121 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-244475-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:40:08.773179   22121 request.go:632] Waited for 196.418181ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-244475-m02
	I0916 10:40:08.773233   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-244475-m02
	I0916 10:40:08.773253   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:08.773265   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:08.773280   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:08.776328   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:08.973400   22121 request.go:632] Waited for 196.398623ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:40:08.973450   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:40:08.973455   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:08.973462   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:08.973468   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:08.977143   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:08.977768   22121 pod_ready.go:93] pod "kube-scheduler-ha-244475-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:40:08.977788   22121 pod_ready.go:82] duration metric: took 401.084234ms for pod "kube-scheduler-ha-244475-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:40:08.977801   22121 pod_ready.go:39] duration metric: took 3.200692542s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:40:08.977817   22121 api_server.go:52] waiting for apiserver process to appear ...
	I0916 10:40:08.977871   22121 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:40:09.001036   22121 api_server.go:72] duration metric: took 21.548229005s to wait for apiserver process to appear ...
	I0916 10:40:09.001060   22121 api_server.go:88] waiting for apiserver healthz status ...
	I0916 10:40:09.001082   22121 api_server.go:253] Checking apiserver healthz at https://192.168.39.19:8443/healthz ...
	I0916 10:40:09.007410   22121 api_server.go:279] https://192.168.39.19:8443/healthz returned 200:
	ok
	I0916 10:40:09.007485   22121 round_trippers.go:463] GET https://192.168.39.19:8443/version
	I0916 10:40:09.007496   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:09.007508   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:09.007518   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:09.008301   22121 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0916 10:40:09.008412   22121 api_server.go:141] control plane version: v1.31.1
	I0916 10:40:09.008429   22121 api_server.go:131] duration metric: took 7.361874ms to wait for apiserver health ...
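The health gate above is two probes: GET /healthz must return the literal body "ok", and GET /version reports the control-plane version. A compact Go sketch of both probes follows, with TLS verification and client certificates again elided for brevity.

```go
package main

import (
    "crypto/tls"
    "encoding/json"
    "fmt"
    "io"
    "net/http"
    "strings"
)

// checkAPIServer probes /healthz for the literal "ok" body, then reads the
// gitVersion field from /version, mirroring the two requests in the log.
func checkAPIServer(base string) (string, error) {
    client := &http.Client{Transport: &http.Transport{
        TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // auth elided
    }}
    resp, err := client.Get(base + "/healthz")
    if err != nil {
        return "", err
    }
    body, _ := io.ReadAll(resp.Body)
    resp.Body.Close()
    if resp.StatusCode != 200 || strings.TrimSpace(string(body)) != "ok" {
        return "", fmt.Errorf("healthz not ok: %d %q", resp.StatusCode, body)
    }
    resp, err = client.Get(base + "/version")
    if err != nil {
        return "", err
    }
    defer resp.Body.Close()
    var v struct {
        GitVersion string `json:"gitVersion"`
    }
    if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
        return "", err
    }
    return v.GitVersion, nil
}

func main() {
    ver, err := checkAPIServer("https://192.168.39.19:8443")
    fmt.Println(ver, err)
}
```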
	I0916 10:40:09.008439   22121 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 10:40:09.172861   22121 request.go:632] Waited for 164.349636ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I0916 10:40:09.172946   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I0916 10:40:09.172952   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:09.172965   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:09.172969   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:09.177801   22121 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:40:09.182059   22121 system_pods.go:59] 17 kube-system pods found
	I0916 10:40:09.182087   22121 system_pods.go:61] "coredns-7c65d6cfc9-lzrg2" [51962d07-f38a-4db3-86ee-af3d954dbec6] Running
	I0916 10:40:09.182142   22121 system_pods.go:61] "coredns-7c65d6cfc9-m8fd7" [fc549709-ddc0-4684-b377-46d33ef8f03d] Running
	I0916 10:40:09.182160   22121 system_pods.go:61] "etcd-ha-244475" [08595572-facf-419a-93e3-9b0ea1938f08] Running
	I0916 10:40:09.182173   22121 system_pods.go:61] "etcd-ha-244475-m02" [d58c0d1e-ef12-4e50-b4d8-86f60754b93d] Running
	I0916 10:40:09.182179   22121 system_pods.go:61] "kindnet-7v2cl" [764ade4d-cbcd-42b8-9d68-b4ed502de9eb] Running
	I0916 10:40:09.182183   22121 system_pods.go:61] "kindnet-xvp82" [3140a3e7-ac3b-4882-b150-20a313e2f20c] Running
	I0916 10:40:09.182187   22121 system_pods.go:61] "kube-apiserver-ha-244475" [b0ea2226-42de-4488-b8fb-73a6828320fc] Running
	I0916 10:40:09.182191   22121 system_pods.go:61] "kube-apiserver-ha-244475-m02" [1e384f04-33c2-49f1-afc0-48807202a04c] Running
	I0916 10:40:09.182195   22121 system_pods.go:61] "kube-controller-manager-ha-244475" [98883403-0a22-486c-aa3a-a3720a5cbfb7] Running
	I0916 10:40:09.182198   22121 system_pods.go:61] "kube-controller-manager-ha-244475-m02" [9e148533-4562-426b-9e8b-3aead772739b] Running
	I0916 10:40:09.182201   22121 system_pods.go:61] "kube-proxy-crttt" [0c8cad04-2c64-42f9-85e2-5e4fbfe7961d] Running
	I0916 10:40:09.182205   22121 system_pods.go:61] "kube-proxy-t454b" [49b7dda6-9a09-4b7d-8adc-568f2fa10ad6] Running
	I0916 10:40:09.182210   22121 system_pods.go:61] "kube-scheduler-ha-244475" [c9527c08-f10b-4d85-9f72-0d0893297b14] Running
	I0916 10:40:09.182214   22121 system_pods.go:61] "kube-scheduler-ha-244475-m02" [bf332de1-6793-4485-9d93-38368d86c6a5] Running
	I0916 10:40:09.182217   22121 system_pods.go:61] "kube-vip-ha-244475" [94b4d383-a0e8-4686-b108-923c0235f371] Running
	I0916 10:40:09.182221   22121 system_pods.go:61] "kube-vip-ha-244475-m02" [6f0a6023-be76-458b-9344-ff51083a217e] Running
	I0916 10:40:09.182228   22121 system_pods.go:61] "storage-provisioner" [2e1264f7-2197-4821-8238-82fac849b145] Running
	I0916 10:40:09.182236   22121 system_pods.go:74] duration metric: took 173.790059ms to wait for pod list to return data ...
	I0916 10:40:09.182248   22121 default_sa.go:34] waiting for default service account to be created ...
	I0916 10:40:09.372607   22121 request.go:632] Waited for 190.269868ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/default/serviceaccounts
	I0916 10:40:09.372663   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/default/serviceaccounts
	I0916 10:40:09.372669   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:09.372683   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:09.372701   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:09.377213   22121 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:40:09.377421   22121 default_sa.go:45] found service account: "default"
	I0916 10:40:09.377440   22121 default_sa.go:55] duration metric: took 195.180856ms for default service account to be created ...
	I0916 10:40:09.377449   22121 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 10:40:09.572867   22121 request.go:632] Waited for 195.351388ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I0916 10:40:09.572951   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I0916 10:40:09.572958   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:09.572968   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:09.572975   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:09.577144   22121 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:40:09.582372   22121 system_pods.go:86] 17 kube-system pods found
	I0916 10:40:09.582396   22121 system_pods.go:89] "coredns-7c65d6cfc9-lzrg2" [51962d07-f38a-4db3-86ee-af3d954dbec6] Running
	I0916 10:40:09.582401   22121 system_pods.go:89] "coredns-7c65d6cfc9-m8fd7" [fc549709-ddc0-4684-b377-46d33ef8f03d] Running
	I0916 10:40:09.582405   22121 system_pods.go:89] "etcd-ha-244475" [08595572-facf-419a-93e3-9b0ea1938f08] Running
	I0916 10:40:09.582409   22121 system_pods.go:89] "etcd-ha-244475-m02" [d58c0d1e-ef12-4e50-b4d8-86f60754b93d] Running
	I0916 10:40:09.582413   22121 system_pods.go:89] "kindnet-7v2cl" [764ade4d-cbcd-42b8-9d68-b4ed502de9eb] Running
	I0916 10:40:09.582417   22121 system_pods.go:89] "kindnet-xvp82" [3140a3e7-ac3b-4882-b150-20a313e2f20c] Running
	I0916 10:40:09.582420   22121 system_pods.go:89] "kube-apiserver-ha-244475" [b0ea2226-42de-4488-b8fb-73a6828320fc] Running
	I0916 10:40:09.582423   22121 system_pods.go:89] "kube-apiserver-ha-244475-m02" [1e384f04-33c2-49f1-afc0-48807202a04c] Running
	I0916 10:40:09.582427   22121 system_pods.go:89] "kube-controller-manager-ha-244475" [98883403-0a22-486c-aa3a-a3720a5cbfb7] Running
	I0916 10:40:09.582430   22121 system_pods.go:89] "kube-controller-manager-ha-244475-m02" [9e148533-4562-426b-9e8b-3aead772739b] Running
	I0916 10:40:09.582433   22121 system_pods.go:89] "kube-proxy-crttt" [0c8cad04-2c64-42f9-85e2-5e4fbfe7961d] Running
	I0916 10:40:09.582436   22121 system_pods.go:89] "kube-proxy-t454b" [49b7dda6-9a09-4b7d-8adc-568f2fa10ad6] Running
	I0916 10:40:09.582439   22121 system_pods.go:89] "kube-scheduler-ha-244475" [c9527c08-f10b-4d85-9f72-0d0893297b14] Running
	I0916 10:40:09.582442   22121 system_pods.go:89] "kube-scheduler-ha-244475-m02" [bf332de1-6793-4485-9d93-38368d86c6a5] Running
	I0916 10:40:09.582445   22121 system_pods.go:89] "kube-vip-ha-244475" [94b4d383-a0e8-4686-b108-923c0235f371] Running
	I0916 10:40:09.582448   22121 system_pods.go:89] "kube-vip-ha-244475-m02" [6f0a6023-be76-458b-9344-ff51083a217e] Running
	I0916 10:40:09.582452   22121 system_pods.go:89] "storage-provisioner" [2e1264f7-2197-4821-8238-82fac849b145] Running
	I0916 10:40:09.582457   22121 system_pods.go:126] duration metric: took 205.002675ms to wait for k8s-apps to be running ...
	I0916 10:40:09.582465   22121 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 10:40:09.582506   22121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:40:09.597644   22121 system_svc.go:56] duration metric: took 15.160872ms WaitForService to wait for kubelet
	I0916 10:40:09.597677   22121 kubeadm.go:582] duration metric: took 22.144873804s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:40:09.597698   22121 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:40:09.773108   22121 request.go:632] Waited for 175.336097ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes
	I0916 10:40:09.773176   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes
	I0916 10:40:09.773183   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:09.773190   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:09.773195   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:09.776708   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:09.777452   22121 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0916 10:40:09.777477   22121 node_conditions.go:123] node cpu capacity is 2
	I0916 10:40:09.777490   22121 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0916 10:40:09.777495   22121 node_conditions.go:123] node cpu capacity is 2
	I0916 10:40:09.777501   22121 node_conditions.go:105] duration metric: took 179.797275ms to run NodePressure ...
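
The NodePressure step above just lists the cluster nodes and reads the CPU and ephemeral-storage capacity each one reports. A minimal client-go sketch of an equivalent check (the kubeconfig path is an illustrative assumption, not the harness's actual profile location):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed kubeconfig path for illustration only.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
        }
    }
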
	I0916 10:40:09.777515   22121 start.go:241] waiting for startup goroutines ...
	I0916 10:40:09.777580   22121 start.go:255] writing updated cluster config ...
	I0916 10:40:09.779808   22121 out.go:201] 
	I0916 10:40:09.781239   22121 config.go:182] Loaded profile config "ha-244475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:40:09.781337   22121 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/config.json ...
	I0916 10:40:09.782835   22121 out.go:177] * Starting "ha-244475-m03" control-plane node in "ha-244475" cluster
	I0916 10:40:09.783977   22121 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:40:09.783994   22121 cache.go:56] Caching tarball of preloaded images
	I0916 10:40:09.784082   22121 preload.go:172] Found /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 10:40:09.784094   22121 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 10:40:09.784186   22121 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/config.json ...
	I0916 10:40:09.784355   22121 start.go:360] acquireMachinesLock for ha-244475-m03: {Name:mk413037138532b69f77e412c3960796c09316f1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:40:09.784415   22121 start.go:364] duration metric: took 40.424µs to acquireMachinesLock for "ha-244475-m03"
	I0916 10:40:09.784439   22121 start.go:93] Provisioning new machine with config: &{Name:ha-244475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.1 ClusterName:ha-244475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-d
ns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 10:40:09.784543   22121 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0916 10:40:09.786219   22121 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0916 10:40:09.786291   22121 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:40:09.786324   22121 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:40:09.801282   22121 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35165
	I0916 10:40:09.801761   22121 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:40:09.802231   22121 main.go:141] libmachine: Using API Version  1
	I0916 10:40:09.802254   22121 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:40:09.802548   22121 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:40:09.802764   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetMachineName
	I0916 10:40:09.802865   22121 main.go:141] libmachine: (ha-244475-m03) Calling .DriverName
	I0916 10:40:09.802989   22121 start.go:159] libmachine.API.Create for "ha-244475" (driver="kvm2")
	I0916 10:40:09.803017   22121 client.go:168] LocalClient.Create starting
	I0916 10:40:09.803051   22121 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem
	I0916 10:40:09.803091   22121 main.go:141] libmachine: Decoding PEM data...
	I0916 10:40:09.803118   22121 main.go:141] libmachine: Parsing certificate...
	I0916 10:40:09.803183   22121 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem
	I0916 10:40:09.803210   22121 main.go:141] libmachine: Decoding PEM data...
	I0916 10:40:09.803224   22121 main.go:141] libmachine: Parsing certificate...
	I0916 10:40:09.803249   22121 main.go:141] libmachine: Running pre-create checks...
	I0916 10:40:09.803261   22121 main.go:141] libmachine: (ha-244475-m03) Calling .PreCreateCheck
	I0916 10:40:09.803404   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetConfigRaw
	I0916 10:40:09.803766   22121 main.go:141] libmachine: Creating machine...
	I0916 10:40:09.803781   22121 main.go:141] libmachine: (ha-244475-m03) Calling .Create
	I0916 10:40:09.803937   22121 main.go:141] libmachine: (ha-244475-m03) Creating KVM machine...
	I0916 10:40:09.805160   22121 main.go:141] libmachine: (ha-244475-m03) DBG | found existing default KVM network
	I0916 10:40:09.805337   22121 main.go:141] libmachine: (ha-244475-m03) DBG | found existing private KVM network mk-ha-244475
	I0916 10:40:09.805472   22121 main.go:141] libmachine: (ha-244475-m03) Setting up store path in /home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m03 ...
	I0916 10:40:09.805493   22121 main.go:141] libmachine: (ha-244475-m03) Building disk image from file:///home/jenkins/minikube-integration/19651-3851/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso
	I0916 10:40:09.805577   22121 main.go:141] libmachine: (ha-244475-m03) DBG | I0916 10:40:09.805472   22888 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 10:40:09.805636   22121 main.go:141] libmachine: (ha-244475-m03) Downloading /home/jenkins/minikube-integration/19651-3851/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19651-3851/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso...
	I0916 10:40:10.039594   22121 main.go:141] libmachine: (ha-244475-m03) DBG | I0916 10:40:10.039469   22888 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m03/id_rsa...
	I0916 10:40:10.482395   22121 main.go:141] libmachine: (ha-244475-m03) DBG | I0916 10:40:10.482296   22888 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m03/ha-244475-m03.rawdisk...
	I0916 10:40:10.482425   22121 main.go:141] libmachine: (ha-244475-m03) DBG | Writing magic tar header
	I0916 10:40:10.482435   22121 main.go:141] libmachine: (ha-244475-m03) DBG | Writing SSH key tar header
	I0916 10:40:10.482442   22121 main.go:141] libmachine: (ha-244475-m03) DBG | I0916 10:40:10.482411   22888 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m03 ...
	I0916 10:40:10.482520   22121 main.go:141] libmachine: (ha-244475-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m03
	I0916 10:40:10.482539   22121 main.go:141] libmachine: (ha-244475-m03) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m03 (perms=drwx------)
	I0916 10:40:10.482546   22121 main.go:141] libmachine: (ha-244475-m03) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851/.minikube/machines (perms=drwxr-xr-x)
	I0916 10:40:10.482562   22121 main.go:141] libmachine: (ha-244475-m03) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851/.minikube (perms=drwxr-xr-x)
	I0916 10:40:10.482573   22121 main.go:141] libmachine: (ha-244475-m03) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851 (perms=drwxrwxr-x)
	I0916 10:40:10.482582   22121 main.go:141] libmachine: (ha-244475-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851/.minikube/machines
	I0916 10:40:10.482591   22121 main.go:141] libmachine: (ha-244475-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0916 10:40:10.482605   22121 main.go:141] libmachine: (ha-244475-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 10:40:10.482619   22121 main.go:141] libmachine: (ha-244475-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851
	I0916 10:40:10.482631   22121 main.go:141] libmachine: (ha-244475-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0916 10:40:10.482639   22121 main.go:141] libmachine: (ha-244475-m03) DBG | Checking permissions on dir: /home/jenkins
	I0916 10:40:10.482649   22121 main.go:141] libmachine: (ha-244475-m03) DBG | Checking permissions on dir: /home
	I0916 10:40:10.482658   22121 main.go:141] libmachine: (ha-244475-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0916 10:40:10.482668   22121 main.go:141] libmachine: (ha-244475-m03) Creating domain...
	I0916 10:40:10.482675   22121 main.go:141] libmachine: (ha-244475-m03) DBG | Skipping /home - not owner
	I0916 10:40:10.483703   22121 main.go:141] libmachine: (ha-244475-m03) define libvirt domain using xml: 
	I0916 10:40:10.483728   22121 main.go:141] libmachine: (ha-244475-m03) <domain type='kvm'>
	I0916 10:40:10.483739   22121 main.go:141] libmachine: (ha-244475-m03)   <name>ha-244475-m03</name>
	I0916 10:40:10.483746   22121 main.go:141] libmachine: (ha-244475-m03)   <memory unit='MiB'>2200</memory>
	I0916 10:40:10.483755   22121 main.go:141] libmachine: (ha-244475-m03)   <vcpu>2</vcpu>
	I0916 10:40:10.483762   22121 main.go:141] libmachine: (ha-244475-m03)   <features>
	I0916 10:40:10.483767   22121 main.go:141] libmachine: (ha-244475-m03)     <acpi/>
	I0916 10:40:10.483774   22121 main.go:141] libmachine: (ha-244475-m03)     <apic/>
	I0916 10:40:10.483780   22121 main.go:141] libmachine: (ha-244475-m03)     <pae/>
	I0916 10:40:10.483786   22121 main.go:141] libmachine: (ha-244475-m03)     
	I0916 10:40:10.483791   22121 main.go:141] libmachine: (ha-244475-m03)   </features>
	I0916 10:40:10.483799   22121 main.go:141] libmachine: (ha-244475-m03)   <cpu mode='host-passthrough'>
	I0916 10:40:10.483821   22121 main.go:141] libmachine: (ha-244475-m03)   
	I0916 10:40:10.483839   22121 main.go:141] libmachine: (ha-244475-m03)   </cpu>
	I0916 10:40:10.483851   22121 main.go:141] libmachine: (ha-244475-m03)   <os>
	I0916 10:40:10.483859   22121 main.go:141] libmachine: (ha-244475-m03)     <type>hvm</type>
	I0916 10:40:10.483867   22121 main.go:141] libmachine: (ha-244475-m03)     <boot dev='cdrom'/>
	I0916 10:40:10.483882   22121 main.go:141] libmachine: (ha-244475-m03)     <boot dev='hd'/>
	I0916 10:40:10.483893   22121 main.go:141] libmachine: (ha-244475-m03)     <bootmenu enable='no'/>
	I0916 10:40:10.483900   22121 main.go:141] libmachine: (ha-244475-m03)   </os>
	I0916 10:40:10.483911   22121 main.go:141] libmachine: (ha-244475-m03)   <devices>
	I0916 10:40:10.483918   22121 main.go:141] libmachine: (ha-244475-m03)     <disk type='file' device='cdrom'>
	I0916 10:40:10.483926   22121 main.go:141] libmachine: (ha-244475-m03)       <source file='/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m03/boot2docker.iso'/>
	I0916 10:40:10.483933   22121 main.go:141] libmachine: (ha-244475-m03)       <target dev='hdc' bus='scsi'/>
	I0916 10:40:10.483938   22121 main.go:141] libmachine: (ha-244475-m03)       <readonly/>
	I0916 10:40:10.483942   22121 main.go:141] libmachine: (ha-244475-m03)     </disk>
	I0916 10:40:10.483948   22121 main.go:141] libmachine: (ha-244475-m03)     <disk type='file' device='disk'>
	I0916 10:40:10.483956   22121 main.go:141] libmachine: (ha-244475-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0916 10:40:10.483963   22121 main.go:141] libmachine: (ha-244475-m03)       <source file='/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m03/ha-244475-m03.rawdisk'/>
	I0916 10:40:10.483975   22121 main.go:141] libmachine: (ha-244475-m03)       <target dev='hda' bus='virtio'/>
	I0916 10:40:10.483985   22121 main.go:141] libmachine: (ha-244475-m03)     </disk>
	I0916 10:40:10.483992   22121 main.go:141] libmachine: (ha-244475-m03)     <interface type='network'>
	I0916 10:40:10.484004   22121 main.go:141] libmachine: (ha-244475-m03)       <source network='mk-ha-244475'/>
	I0916 10:40:10.484015   22121 main.go:141] libmachine: (ha-244475-m03)       <model type='virtio'/>
	I0916 10:40:10.484023   22121 main.go:141] libmachine: (ha-244475-m03)     </interface>
	I0916 10:40:10.484028   22121 main.go:141] libmachine: (ha-244475-m03)     <interface type='network'>
	I0916 10:40:10.484035   22121 main.go:141] libmachine: (ha-244475-m03)       <source network='default'/>
	I0916 10:40:10.484040   22121 main.go:141] libmachine: (ha-244475-m03)       <model type='virtio'/>
	I0916 10:40:10.484046   22121 main.go:141] libmachine: (ha-244475-m03)     </interface>
	I0916 10:40:10.484052   22121 main.go:141] libmachine: (ha-244475-m03)     <serial type='pty'>
	I0916 10:40:10.484059   22121 main.go:141] libmachine: (ha-244475-m03)       <target port='0'/>
	I0916 10:40:10.484063   22121 main.go:141] libmachine: (ha-244475-m03)     </serial>
	I0916 10:40:10.484072   22121 main.go:141] libmachine: (ha-244475-m03)     <console type='pty'>
	I0916 10:40:10.484087   22121 main.go:141] libmachine: (ha-244475-m03)       <target type='serial' port='0'/>
	I0916 10:40:10.484099   22121 main.go:141] libmachine: (ha-244475-m03)     </console>
	I0916 10:40:10.484108   22121 main.go:141] libmachine: (ha-244475-m03)     <rng model='virtio'>
	I0916 10:40:10.484116   22121 main.go:141] libmachine: (ha-244475-m03)       <backend model='random'>/dev/random</backend>
	I0916 10:40:10.484122   22121 main.go:141] libmachine: (ha-244475-m03)     </rng>
	I0916 10:40:10.484126   22121 main.go:141] libmachine: (ha-244475-m03)     
	I0916 10:40:10.484132   22121 main.go:141] libmachine: (ha-244475-m03)     
	I0916 10:40:10.484137   22121 main.go:141] libmachine: (ha-244475-m03)   </devices>
	I0916 10:40:10.484143   22121 main.go:141] libmachine: (ha-244475-m03) </domain>
	I0916 10:40:10.484163   22121 main.go:141] libmachine: (ha-244475-m03) 
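
The block above is the libvirt domain XML the kvm2 driver assembles for ha-244475-m03 before the guest is defined. A minimal sketch of defining and booting a domain from such XML with the libvirt Go bindings (a sketch only, not the driver's exact code path; the xml value stands in for the XML printed in the log):

    package main

    import (
        "log"

        libvirt "libvirt.org/go/libvirt"
    )

    func main() {
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        // Placeholder for the full <domain> XML shown in the log above.
        xml := "<domain type='kvm'>...</domain>"

        // Define the persistent domain, then start it.
        dom, err := conn.DomainDefineXML(xml)
        if err != nil {
            log.Fatal(err)
        }
        defer dom.Free()

        if err := dom.Create(); err != nil {
            log.Fatal(err)
        }
        log.Println("domain defined and started")
    }
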
	I0916 10:40:10.491278   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:3c:e8:d0 in network default
	I0916 10:40:10.491751   22121 main.go:141] libmachine: (ha-244475-m03) Ensuring networks are active...
	I0916 10:40:10.491768   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:10.492390   22121 main.go:141] libmachine: (ha-244475-m03) Ensuring network default is active
	I0916 10:40:10.492675   22121 main.go:141] libmachine: (ha-244475-m03) Ensuring network mk-ha-244475 is active
	I0916 10:40:10.493062   22121 main.go:141] libmachine: (ha-244475-m03) Getting domain xml...
	I0916 10:40:10.493756   22121 main.go:141] libmachine: (ha-244475-m03) Creating domain...
	I0916 10:40:11.721484   22121 main.go:141] libmachine: (ha-244475-m03) Waiting to get IP...
	I0916 10:40:11.722386   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:11.722825   22121 main.go:141] libmachine: (ha-244475-m03) DBG | unable to find current IP address of domain ha-244475-m03 in network mk-ha-244475
	I0916 10:40:11.722864   22121 main.go:141] libmachine: (ha-244475-m03) DBG | I0916 10:40:11.722811   22888 retry.go:31] will retry after 192.331481ms: waiting for machine to come up
	I0916 10:40:11.917419   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:11.917971   22121 main.go:141] libmachine: (ha-244475-m03) DBG | unable to find current IP address of domain ha-244475-m03 in network mk-ha-244475
	I0916 10:40:11.918005   22121 main.go:141] libmachine: (ha-244475-m03) DBG | I0916 10:40:11.917942   22888 retry.go:31] will retry after 286.90636ms: waiting for machine to come up
	I0916 10:40:12.206353   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:12.206819   22121 main.go:141] libmachine: (ha-244475-m03) DBG | unable to find current IP address of domain ha-244475-m03 in network mk-ha-244475
	I0916 10:40:12.206842   22121 main.go:141] libmachine: (ha-244475-m03) DBG | I0916 10:40:12.206741   22888 retry.go:31] will retry after 454.064197ms: waiting for machine to come up
	I0916 10:40:12.662050   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:12.662526   22121 main.go:141] libmachine: (ha-244475-m03) DBG | unable to find current IP address of domain ha-244475-m03 in network mk-ha-244475
	I0916 10:40:12.662551   22121 main.go:141] libmachine: (ha-244475-m03) DBG | I0916 10:40:12.662476   22888 retry.go:31] will retry after 438.548468ms: waiting for machine to come up
	I0916 10:40:13.103062   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:13.103558   22121 main.go:141] libmachine: (ha-244475-m03) DBG | unable to find current IP address of domain ha-244475-m03 in network mk-ha-244475
	I0916 10:40:13.103595   22121 main.go:141] libmachine: (ha-244475-m03) DBG | I0916 10:40:13.103500   22888 retry.go:31] will retry after 487.216711ms: waiting for machine to come up
	I0916 10:40:13.592041   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:13.592483   22121 main.go:141] libmachine: (ha-244475-m03) DBG | unable to find current IP address of domain ha-244475-m03 in network mk-ha-244475
	I0916 10:40:13.592504   22121 main.go:141] libmachine: (ha-244475-m03) DBG | I0916 10:40:13.592433   22888 retry.go:31] will retry after 609.860378ms: waiting for machine to come up
	I0916 10:40:14.204217   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:14.204729   22121 main.go:141] libmachine: (ha-244475-m03) DBG | unable to find current IP address of domain ha-244475-m03 in network mk-ha-244475
	I0916 10:40:14.204756   22121 main.go:141] libmachine: (ha-244475-m03) DBG | I0916 10:40:14.204687   22888 retry.go:31] will retry after 1.08416226s: waiting for machine to come up
	I0916 10:40:15.290010   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:15.290367   22121 main.go:141] libmachine: (ha-244475-m03) DBG | unable to find current IP address of domain ha-244475-m03 in network mk-ha-244475
	I0916 10:40:15.290395   22121 main.go:141] libmachine: (ha-244475-m03) DBG | I0916 10:40:15.290306   22888 retry.go:31] will retry after 1.14272633s: waiting for machine to come up
	I0916 10:40:16.434131   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:16.434447   22121 main.go:141] libmachine: (ha-244475-m03) DBG | unable to find current IP address of domain ha-244475-m03 in network mk-ha-244475
	I0916 10:40:16.434482   22121 main.go:141] libmachine: (ha-244475-m03) DBG | I0916 10:40:16.434408   22888 retry.go:31] will retry after 1.591492555s: waiting for machine to come up
	I0916 10:40:18.027328   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:18.027798   22121 main.go:141] libmachine: (ha-244475-m03) DBG | unable to find current IP address of domain ha-244475-m03 in network mk-ha-244475
	I0916 10:40:18.027827   22121 main.go:141] libmachine: (ha-244475-m03) DBG | I0916 10:40:18.027750   22888 retry.go:31] will retry after 1.626003631s: waiting for machine to come up
	I0916 10:40:19.655097   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:19.655517   22121 main.go:141] libmachine: (ha-244475-m03) DBG | unable to find current IP address of domain ha-244475-m03 in network mk-ha-244475
	I0916 10:40:19.655538   22121 main.go:141] libmachine: (ha-244475-m03) DBG | I0916 10:40:19.655472   22888 retry.go:31] will retry after 2.828805673s: waiting for machine to come up
	I0916 10:40:22.487722   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:22.488228   22121 main.go:141] libmachine: (ha-244475-m03) DBG | unable to find current IP address of domain ha-244475-m03 in network mk-ha-244475
	I0916 10:40:22.488249   22121 main.go:141] libmachine: (ha-244475-m03) DBG | I0916 10:40:22.488180   22888 retry.go:31] will retry after 2.947934423s: waiting for machine to come up
	I0916 10:40:25.437771   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:25.438163   22121 main.go:141] libmachine: (ha-244475-m03) DBG | unable to find current IP address of domain ha-244475-m03 in network mk-ha-244475
	I0916 10:40:25.438187   22121 main.go:141] libmachine: (ha-244475-m03) DBG | I0916 10:40:25.438126   22888 retry.go:31] will retry after 4.191813461s: waiting for machine to come up
	I0916 10:40:29.634188   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:29.634591   22121 main.go:141] libmachine: (ha-244475-m03) DBG | unable to find current IP address of domain ha-244475-m03 in network mk-ha-244475
	I0916 10:40:29.634611   22121 main.go:141] libmachine: (ha-244475-m03) DBG | I0916 10:40:29.634550   22888 retry.go:31] will retry after 4.912264836s: waiting for machine to come up
	I0916 10:40:34.550076   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:34.550468   22121 main.go:141] libmachine: (ha-244475-m03) Found IP for machine: 192.168.39.127
	I0916 10:40:34.550500   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has current primary IP address 192.168.39.127 and MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:34.550516   22121 main.go:141] libmachine: (ha-244475-m03) Reserving static IP address...
	I0916 10:40:34.550823   22121 main.go:141] libmachine: (ha-244475-m03) DBG | unable to find host DHCP lease matching {name: "ha-244475-m03", mac: "52:54:00:e0:15:60", ip: "192.168.39.127"} in network mk-ha-244475
	I0916 10:40:34.624068   22121 main.go:141] libmachine: (ha-244475-m03) Reserved static IP address: 192.168.39.127
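
The repeated "will retry after ..." lines above come from a wait loop that polls the DHCP leases of the mk-ha-244475 network for the new MAC address with a growing delay until an IP appears. A minimal sketch of that pattern (waitForIP and its lookup argument are hypothetical stand-ins, not minikube's retry.go):

    package main

    import (
        "fmt"
        "time"
    )

    // waitForIP polls lookup with a growing, capped delay until it reports an
    // address or the timeout expires. lookup stands in for "read the DHCP
    // leases for this MAC address".
    func waitForIP(lookup func() (string, bool), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 200 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, ok := lookup(); ok {
                return ip, nil
            }
            time.Sleep(delay)
            delay *= 2
            if delay > 5*time.Second {
                delay = 5 * time.Second // cap the backoff, roughly like the log above
            }
        }
        return "", fmt.Errorf("timed out waiting for machine IP")
    }

    func main() {
        start := time.Now()
        ip, err := waitForIP(func() (string, bool) {
            // Stub: pretend the lease shows up after about a second.
            return "192.168.39.127", time.Since(start) > time.Second
        }, 30*time.Second)
        fmt.Println(ip, err)
    }
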
	I0916 10:40:34.624092   22121 main.go:141] libmachine: (ha-244475-m03) Waiting for SSH to be available...
	I0916 10:40:34.624101   22121 main.go:141] libmachine: (ha-244475-m03) DBG | Getting to WaitForSSH function...
	I0916 10:40:34.626630   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:34.627078   22121 main.go:141] libmachine: (ha-244475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:15:60", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:40:24 +0000 UTC Type:0 Mac:52:54:00:e0:15:60 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e0:15:60}
	I0916 10:40:34.627178   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:34.627199   22121 main.go:141] libmachine: (ha-244475-m03) DBG | Using SSH client type: external
	I0916 10:40:34.627216   22121 main.go:141] libmachine: (ha-244475-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m03/id_rsa (-rw-------)
	I0916 10:40:34.627249   22121 main.go:141] libmachine: (ha-244475-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.127 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0916 10:40:34.627256   22121 main.go:141] libmachine: (ha-244475-m03) DBG | About to run SSH command:
	I0916 10:40:34.627270   22121 main.go:141] libmachine: (ha-244475-m03) DBG | exit 0
	I0916 10:40:34.749330   22121 main.go:141] libmachine: (ha-244475-m03) DBG | SSH cmd err, output: <nil>: 
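
WaitForSSH above shells out to the system ssh binary with the options logged at 10:40:34.627249 and retries "exit 0" until it succeeds. A minimal probe of the same kind via os/exec (the retry cadence is an illustrative assumption):

    package main

    import (
        "log"
        "os/exec"
        "time"
    )

    func main() {
        // Options mirror the external-SSH command recorded in the log above.
        args := []string{
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "ConnectTimeout=10",
            "-i", "/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m03/id_rsa",
            "docker@192.168.39.127",
            "exit 0",
        }
        for i := 0; i < 30; i++ {
            if err := exec.Command("ssh", args...).Run(); err == nil {
                log.Println("SSH is available")
                return
            }
            time.Sleep(2 * time.Second)
        }
        log.Fatal("gave up waiting for SSH")
    }
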
	I0916 10:40:34.749611   22121 main.go:141] libmachine: (ha-244475-m03) KVM machine creation complete!
	I0916 10:40:34.749933   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetConfigRaw
	I0916 10:40:34.750501   22121 main.go:141] libmachine: (ha-244475-m03) Calling .DriverName
	I0916 10:40:34.750684   22121 main.go:141] libmachine: (ha-244475-m03) Calling .DriverName
	I0916 10:40:34.750811   22121 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0916 10:40:34.750833   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetState
	I0916 10:40:34.752727   22121 main.go:141] libmachine: Detecting operating system of created instance...
	I0916 10:40:34.752744   22121 main.go:141] libmachine: Waiting for SSH to be available...
	I0916 10:40:34.752751   22121 main.go:141] libmachine: Getting to WaitForSSH function...
	I0916 10:40:34.752759   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHHostname
	I0916 10:40:34.755291   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:34.755682   22121 main.go:141] libmachine: (ha-244475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:15:60", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:40:24 +0000 UTC Type:0 Mac:52:54:00:e0:15:60 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-244475-m03 Clientid:01:52:54:00:e0:15:60}
	I0916 10:40:34.755717   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:34.755865   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHPort
	I0916 10:40:34.756023   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHKeyPath
	I0916 10:40:34.756183   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHKeyPath
	I0916 10:40:34.756327   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHUsername
	I0916 10:40:34.756485   22121 main.go:141] libmachine: Using SSH client type: native
	I0916 10:40:34.756665   22121 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0916 10:40:34.756675   22121 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0916 10:40:34.856271   22121 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 10:40:34.856293   22121 main.go:141] libmachine: Detecting the provisioner...
	I0916 10:40:34.856300   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHHostname
	I0916 10:40:34.859855   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:34.860190   22121 main.go:141] libmachine: (ha-244475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:15:60", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:40:24 +0000 UTC Type:0 Mac:52:54:00:e0:15:60 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-244475-m03 Clientid:01:52:54:00:e0:15:60}
	I0916 10:40:34.860221   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:34.860431   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHPort
	I0916 10:40:34.860594   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHKeyPath
	I0916 10:40:34.860766   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHKeyPath
	I0916 10:40:34.860894   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHUsername
	I0916 10:40:34.861049   22121 main.go:141] libmachine: Using SSH client type: native
	I0916 10:40:34.861260   22121 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0916 10:40:34.861271   22121 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0916 10:40:34.970117   22121 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0916 10:40:34.970189   22121 main.go:141] libmachine: found compatible host: buildroot
	I0916 10:40:34.970202   22121 main.go:141] libmachine: Provisioning with buildroot...
	I0916 10:40:34.970213   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetMachineName
	I0916 10:40:34.970470   22121 buildroot.go:166] provisioning hostname "ha-244475-m03"
	I0916 10:40:34.970497   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetMachineName
	I0916 10:40:34.970663   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHHostname
	I0916 10:40:34.973291   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:34.973662   22121 main.go:141] libmachine: (ha-244475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:15:60", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:40:24 +0000 UTC Type:0 Mac:52:54:00:e0:15:60 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-244475-m03 Clientid:01:52:54:00:e0:15:60}
	I0916 10:40:34.973691   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:34.973816   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHPort
	I0916 10:40:34.973997   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHKeyPath
	I0916 10:40:34.974137   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHKeyPath
	I0916 10:40:34.974267   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHUsername
	I0916 10:40:34.974444   22121 main.go:141] libmachine: Using SSH client type: native
	I0916 10:40:34.974644   22121 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0916 10:40:34.974660   22121 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-244475-m03 && echo "ha-244475-m03" | sudo tee /etc/hostname
	I0916 10:40:35.095518   22121 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-244475-m03
	
	I0916 10:40:35.095558   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHHostname
	I0916 10:40:35.098544   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:35.098924   22121 main.go:141] libmachine: (ha-244475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:15:60", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:40:24 +0000 UTC Type:0 Mac:52:54:00:e0:15:60 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-244475-m03 Clientid:01:52:54:00:e0:15:60}
	I0916 10:40:35.098964   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:35.099171   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHPort
	I0916 10:40:35.099391   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHKeyPath
	I0916 10:40:35.099555   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHKeyPath
	I0916 10:40:35.099700   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHUsername
	I0916 10:40:35.099862   22121 main.go:141] libmachine: Using SSH client type: native
	I0916 10:40:35.100037   22121 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0916 10:40:35.100059   22121 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-244475-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-244475-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-244475-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:40:35.210957   22121 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 10:40:35.210985   22121 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3851/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3851/.minikube}
	I0916 10:40:35.211006   22121 buildroot.go:174] setting up certificates
	I0916 10:40:35.211018   22121 provision.go:84] configureAuth start
	I0916 10:40:35.211028   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetMachineName
	I0916 10:40:35.211274   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetIP
	I0916 10:40:35.213869   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:35.214151   22121 main.go:141] libmachine: (ha-244475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:15:60", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:40:24 +0000 UTC Type:0 Mac:52:54:00:e0:15:60 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-244475-m03 Clientid:01:52:54:00:e0:15:60}
	I0916 10:40:35.214179   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:35.214333   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHHostname
	I0916 10:40:35.216656   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:35.217068   22121 main.go:141] libmachine: (ha-244475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:15:60", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:40:24 +0000 UTC Type:0 Mac:52:54:00:e0:15:60 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-244475-m03 Clientid:01:52:54:00:e0:15:60}
	I0916 10:40:35.217094   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:35.217230   22121 provision.go:143] copyHostCerts
	I0916 10:40:35.217262   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem
	I0916 10:40:35.217292   22121 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem, removing ...
	I0916 10:40:35.217301   22121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem
	I0916 10:40:35.217370   22121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem (1082 bytes)
	I0916 10:40:35.217472   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem
	I0916 10:40:35.217491   22121 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem, removing ...
	I0916 10:40:35.217498   22121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem
	I0916 10:40:35.217524   22121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem (1123 bytes)
	I0916 10:40:35.217564   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem
	I0916 10:40:35.217581   22121 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem, removing ...
	I0916 10:40:35.217587   22121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem
	I0916 10:40:35.217606   22121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem (1679 bytes)
	I0916 10:40:35.217660   22121 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem org=jenkins.ha-244475-m03 san=[127.0.0.1 192.168.39.127 ha-244475-m03 localhost minikube]
	I0916 10:40:35.412945   22121 provision.go:177] copyRemoteCerts
	I0916 10:40:35.412999   22121 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:40:35.413023   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHHostname
	I0916 10:40:35.415370   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:35.415731   22121 main.go:141] libmachine: (ha-244475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:15:60", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:40:24 +0000 UTC Type:0 Mac:52:54:00:e0:15:60 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-244475-m03 Clientid:01:52:54:00:e0:15:60}
	I0916 10:40:35.415761   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:35.415904   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHPort
	I0916 10:40:35.416091   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHKeyPath
	I0916 10:40:35.416250   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHUsername
	I0916 10:40:35.416351   22121 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m03/id_rsa Username:docker}
	I0916 10:40:35.501393   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 10:40:35.501489   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 10:40:35.529014   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 10:40:35.529098   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 10:40:35.555006   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 10:40:35.555088   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 10:40:35.580082   22121 provision.go:87] duration metric: took 369.052998ms to configureAuth
	I0916 10:40:35.580114   22121 buildroot.go:189] setting minikube options for container-runtime
	I0916 10:40:35.580375   22121 config.go:182] Loaded profile config "ha-244475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:40:35.580459   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHHostname
	I0916 10:40:35.582981   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:35.583302   22121 main.go:141] libmachine: (ha-244475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:15:60", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:40:24 +0000 UTC Type:0 Mac:52:54:00:e0:15:60 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-244475-m03 Clientid:01:52:54:00:e0:15:60}
	I0916 10:40:35.583338   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:35.583522   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHPort
	I0916 10:40:35.583678   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHKeyPath
	I0916 10:40:35.583829   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHKeyPath
	I0916 10:40:35.583953   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHUsername
	I0916 10:40:35.584080   22121 main.go:141] libmachine: Using SSH client type: native
	I0916 10:40:35.584280   22121 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0916 10:40:35.584295   22121 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 10:40:35.804379   22121 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 10:40:35.804403   22121 main.go:141] libmachine: Checking connection to Docker...
	I0916 10:40:35.804410   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetURL
	I0916 10:40:35.805786   22121 main.go:141] libmachine: (ha-244475-m03) DBG | Using libvirt version 6000000
	I0916 10:40:35.807818   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:35.808192   22121 main.go:141] libmachine: (ha-244475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:15:60", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:40:24 +0000 UTC Type:0 Mac:52:54:00:e0:15:60 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-244475-m03 Clientid:01:52:54:00:e0:15:60}
	I0916 10:40:35.808220   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:35.808371   22121 main.go:141] libmachine: Docker is up and running!
	I0916 10:40:35.808384   22121 main.go:141] libmachine: Reticulating splines...
	I0916 10:40:35.808390   22121 client.go:171] duration metric: took 26.005363468s to LocalClient.Create
	I0916 10:40:35.808410   22121 start.go:167] duration metric: took 26.005420857s to libmachine.API.Create "ha-244475"
	I0916 10:40:35.808417   22121 start.go:293] postStartSetup for "ha-244475-m03" (driver="kvm2")
	I0916 10:40:35.808441   22121 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:40:35.808457   22121 main.go:141] libmachine: (ha-244475-m03) Calling .DriverName
	I0916 10:40:35.808682   22121 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:40:35.808703   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHHostname
	I0916 10:40:35.810634   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:35.810894   22121 main.go:141] libmachine: (ha-244475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:15:60", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:40:24 +0000 UTC Type:0 Mac:52:54:00:e0:15:60 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-244475-m03 Clientid:01:52:54:00:e0:15:60}
	I0916 10:40:35.810919   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:35.811023   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHPort
	I0916 10:40:35.811207   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHKeyPath
	I0916 10:40:35.811350   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHUsername
	I0916 10:40:35.811483   22121 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m03/id_rsa Username:docker}
	I0916 10:40:35.891724   22121 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:40:35.896159   22121 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 10:40:35.896180   22121 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3851/.minikube/addons for local assets ...
	I0916 10:40:35.896236   22121 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3851/.minikube/files for local assets ...
	I0916 10:40:35.896302   22121 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem -> 112032.pem in /etc/ssl/certs
	I0916 10:40:35.896311   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem -> /etc/ssl/certs/112032.pem
	I0916 10:40:35.896394   22121 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 10:40:35.906252   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem --> /etc/ssl/certs/112032.pem (1708 bytes)
	I0916 10:40:35.931184   22121 start.go:296] duration metric: took 122.750991ms for postStartSetup
	I0916 10:40:35.931237   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetConfigRaw
	I0916 10:40:35.931826   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetIP
	I0916 10:40:35.934282   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:35.934635   22121 main.go:141] libmachine: (ha-244475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:15:60", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:40:24 +0000 UTC Type:0 Mac:52:54:00:e0:15:60 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-244475-m03 Clientid:01:52:54:00:e0:15:60}
	I0916 10:40:35.934663   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:35.934920   22121 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/config.json ...
	I0916 10:40:35.935111   22121 start.go:128] duration metric: took 26.150558333s to createHost
	I0916 10:40:35.935133   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHHostname
	I0916 10:40:35.937290   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:35.937626   22121 main.go:141] libmachine: (ha-244475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:15:60", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:40:24 +0000 UTC Type:0 Mac:52:54:00:e0:15:60 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-244475-m03 Clientid:01:52:54:00:e0:15:60}
	I0916 10:40:35.937654   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:35.937784   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHPort
	I0916 10:40:35.937961   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHKeyPath
	I0916 10:40:35.938124   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHKeyPath
	I0916 10:40:35.938226   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHUsername
	I0916 10:40:35.938360   22121 main.go:141] libmachine: Using SSH client type: native
	I0916 10:40:35.938514   22121 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0916 10:40:35.938523   22121 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 10:40:36.038169   22121 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726483236.017253853
	
	I0916 10:40:36.038199   22121 fix.go:216] guest clock: 1726483236.017253853
	I0916 10:40:36.038211   22121 fix.go:229] Guest: 2024-09-16 10:40:36.017253853 +0000 UTC Remote: 2024-09-16 10:40:35.935121788 +0000 UTC m=+143.767887540 (delta=82.132065ms)
	I0916 10:40:36.038234   22121 fix.go:200] guest clock delta is within tolerance: 82.132065ms
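(Annotation: the fix.go lines above compare the guest's `date +%s.%N` output against the host clock and only resync when the delta exceeds a tolerance. A minimal sketch of that check, assuming a 2-second tolerance for illustration; this is not minikube's own code.)

// clockdelta.go — parse the guest's "seconds.nanoseconds" clock reading and
// compare it with the host clock, as logged above. Tolerance is an assumption.
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts "1726483236.017253853" into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		frac := parts[1]
		if len(frac) > 9 {
			frac = frac[:9]
		}
		// Right-pad so the fractional part is always interpreted as nanoseconds.
		frac += strings.Repeat("0", 9-len(frac))
		nsec, err = strconv.ParseInt(frac, 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1726483236.017253853") // value from the log above
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	const tolerance = 2 * time.Second // assumed tolerance for this sketch
	if math.Abs(float64(delta)) > float64(tolerance) {
		fmt.Printf("guest clock off by %v, would resync\n", delta)
	} else {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	}
}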
	I0916 10:40:36.038242   22121 start.go:83] releasing machines lock for "ha-244475-m03", held for 26.253815031s
	I0916 10:40:36.038269   22121 main.go:141] libmachine: (ha-244475-m03) Calling .DriverName
	I0916 10:40:36.038526   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetIP
	I0916 10:40:36.041199   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:36.041528   22121 main.go:141] libmachine: (ha-244475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:15:60", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:40:24 +0000 UTC Type:0 Mac:52:54:00:e0:15:60 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-244475-m03 Clientid:01:52:54:00:e0:15:60}
	I0916 10:40:36.041557   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:36.043873   22121 out.go:177] * Found network options:
	I0916 10:40:36.045262   22121 out.go:177]   - NO_PROXY=192.168.39.19,192.168.39.222
	W0916 10:40:36.046405   22121 proxy.go:119] fail to check proxy env: Error ip not in block
	W0916 10:40:36.046427   22121 proxy.go:119] fail to check proxy env: Error ip not in block
	I0916 10:40:36.046443   22121 main.go:141] libmachine: (ha-244475-m03) Calling .DriverName
	I0916 10:40:36.046990   22121 main.go:141] libmachine: (ha-244475-m03) Calling .DriverName
	I0916 10:40:36.047176   22121 main.go:141] libmachine: (ha-244475-m03) Calling .DriverName
	I0916 10:40:36.047272   22121 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:40:36.047304   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHHostname
	W0916 10:40:36.047328   22121 proxy.go:119] fail to check proxy env: Error ip not in block
	W0916 10:40:36.047347   22121 proxy.go:119] fail to check proxy env: Error ip not in block
	I0916 10:40:36.047416   22121 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 10:40:36.047437   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHHostname
	I0916 10:40:36.049999   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:36.050208   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:36.050428   22121 main.go:141] libmachine: (ha-244475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:15:60", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:40:24 +0000 UTC Type:0 Mac:52:54:00:e0:15:60 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-244475-m03 Clientid:01:52:54:00:e0:15:60}
	I0916 10:40:36.050455   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:36.050554   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHPort
	I0916 10:40:36.050601   22121 main.go:141] libmachine: (ha-244475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:15:60", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:40:24 +0000 UTC Type:0 Mac:52:54:00:e0:15:60 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-244475-m03 Clientid:01:52:54:00:e0:15:60}
	I0916 10:40:36.050626   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:36.050708   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHKeyPath
	I0916 10:40:36.050785   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHPort
	I0916 10:40:36.050860   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHUsername
	I0916 10:40:36.050941   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHKeyPath
	I0916 10:40:36.051014   22121 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m03/id_rsa Username:docker}
	I0916 10:40:36.051036   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHUsername
	I0916 10:40:36.051131   22121 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m03/id_rsa Username:docker}
	I0916 10:40:36.283731   22121 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0916 10:40:36.291646   22121 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 10:40:36.291714   22121 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:40:36.309353   22121 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0916 10:40:36.309377   22121 start.go:495] detecting cgroup driver to use...
	I0916 10:40:36.309434   22121 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 10:40:36.327071   22121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 10:40:36.341542   22121 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:40:36.341601   22121 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:40:36.355583   22121 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:40:36.369888   22121 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:40:36.493273   22121 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:40:36.643904   22121 docker.go:233] disabling docker service ...
	I0916 10:40:36.643965   22121 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:40:36.658738   22121 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:40:36.672641   22121 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:40:36.816431   22121 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:40:36.933082   22121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:40:36.949104   22121 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:40:36.970988   22121 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 10:40:36.971047   22121 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:40:36.982120   22121 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 10:40:36.982182   22121 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:40:36.993929   22121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:40:37.005695   22121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:40:37.018804   22121 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:40:37.031297   22121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:40:37.042548   22121 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:40:37.060622   22121 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:40:37.071900   22121 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:40:37.082293   22121 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0916 10:40:37.082349   22121 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0916 10:40:37.096317   22121 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:40:37.107422   22121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:40:37.228410   22121 ssh_runner.go:195] Run: sudo systemctl restart crio
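(Annotation: the sequence above points crictl at the cri-o socket, sets the pause image to registry.k8s.io/pause:3.10, switches cri-o to the cgroupfs cgroup manager with conmon_cgroup = "pod", opens unprivileged ports via default_sysctls, loads br_netfilter, enables IPv4 forwarding, and restarts crio. A sketch replaying the same edits locally with os/exec; minikube actually runs these over SSH via ssh_runner, and the paths and flags below are copied from the log.)

// crio_config.go — illustrative replay of the CRI-O preparation steps above.
package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		fmt.Printf("%s %v failed: %v\n%s\n", name, args, err, out)
	}
}

func main() {
	// Point crictl at the cri-o socket.
	run("sudo", "sh", "-c",
		`mkdir -p /etc && printf '%s\n' 'runtime-endpoint: unix:///var/run/crio/crio.sock' > /etc/crictl.yaml`)

	// Pause image and cgroup driver, matching the sed edits in the log.
	run("sudo", "sed", "-i",
		`s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|`,
		"/etc/crio/crio.conf.d/02-crio.conf")
	run("sudo", "sed", "-i",
		`s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|`,
		"/etc/crio/crio.conf.d/02-crio.conf")

	// Kernel prerequisites: bridge netfilter and IPv4 forwarding.
	run("sudo", "modprobe", "br_netfilter")
	run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward")

	// Pick up the new configuration.
	run("sudo", "systemctl", "daemon-reload")
	run("sudo", "systemctl", "restart", "crio")
}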
	I0916 10:40:37.320979   22121 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 10:40:37.321071   22121 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 10:40:37.326439   22121 start.go:563] Will wait 60s for crictl version
	I0916 10:40:37.326501   22121 ssh_runner.go:195] Run: which crictl
	I0916 10:40:37.330626   22121 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:40:37.369842   22121 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0916 10:40:37.369916   22121 ssh_runner.go:195] Run: crio --version
	I0916 10:40:37.402403   22121 ssh_runner.go:195] Run: crio --version
	I0916 10:40:37.437976   22121 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0916 10:40:37.439411   22121 out.go:177]   - env NO_PROXY=192.168.39.19
	I0916 10:40:37.440926   22121 out.go:177]   - env NO_PROXY=192.168.39.19,192.168.39.222
	I0916 10:40:37.442203   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetIP
	I0916 10:40:37.444743   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:37.445187   22121 main.go:141] libmachine: (ha-244475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:15:60", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:40:24 +0000 UTC Type:0 Mac:52:54:00:e0:15:60 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-244475-m03 Clientid:01:52:54:00:e0:15:60}
	I0916 10:40:37.445214   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:37.445428   22121 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0916 10:40:37.449788   22121 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:40:37.464525   22121 mustload.go:65] Loading cluster: ha-244475
	I0916 10:40:37.464778   22121 config.go:182] Loaded profile config "ha-244475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:40:37.465171   22121 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:40:37.465220   22121 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:40:37.480904   22121 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39615
	I0916 10:40:37.481370   22121 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:40:37.481925   22121 main.go:141] libmachine: Using API Version  1
	I0916 10:40:37.481949   22121 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:40:37.482292   22121 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:40:37.482464   22121 main.go:141] libmachine: (ha-244475) Calling .GetState
	I0916 10:40:37.484020   22121 host.go:66] Checking if "ha-244475" exists ...
	I0916 10:40:37.484287   22121 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:40:37.484324   22121 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:40:37.498953   22121 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44919
	I0916 10:40:37.499388   22121 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:40:37.499929   22121 main.go:141] libmachine: Using API Version  1
	I0916 10:40:37.499955   22121 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:40:37.500321   22121 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:40:37.500505   22121 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:40:37.500708   22121 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475 for IP: 192.168.39.127
	I0916 10:40:37.500720   22121 certs.go:194] generating shared ca certs ...
	I0916 10:40:37.500740   22121 certs.go:226] acquiring lock for ca certs: {Name:mkc5eb18b90c7d501da61b378999fd65b785fd5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:40:37.500875   22121 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key
	I0916 10:40:37.500929   22121 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key
	I0916 10:40:37.500943   22121 certs.go:256] generating profile certs ...
	I0916 10:40:37.501030   22121 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/client.key
	I0916 10:40:37.501062   22121 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key.ff67242b
	I0916 10:40:37.501082   22121 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt.ff67242b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.19 192.168.39.222 192.168.39.127 192.168.39.254]
	I0916 10:40:37.647069   22121 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt.ff67242b ...
	I0916 10:40:37.647103   22121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt.ff67242b: {Name:mkbb6bf2be5e587ad1e2fe147b3983eed0461a42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:40:37.647322   22121 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key.ff67242b ...
	I0916 10:40:37.647347   22121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key.ff67242b: {Name:mk98dd7442f0dc4e7003471cb55a0345916f7a4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:40:37.647450   22121 certs.go:381] copying /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt.ff67242b -> /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt
	I0916 10:40:37.647652   22121 certs.go:385] copying /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key.ff67242b -> /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key
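(Annotation: the apiserver cert is regenerated above because the new node's IP must appear as a SAN; the log lists the service IP 10.96.0.1, loopback, 10.0.0.1, the three node IPs, and the kube-vip VIP 192.168.39.254. A minimal crypto/x509 sketch producing a serving cert with those SANs; it is self-signed for brevity, whereas minikube signs the profile cert with the shared minikubeCA key.)

// apiserver_cert.go — illustrative cert with the SAN IPs from the log above.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{ // SANs taken from the crypto.go line above
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.19"), net.ParseIP("192.168.39.222"),
			net.ParseIP("192.168.39.127"), net.ParseIP("192.168.39.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}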
	I0916 10:40:37.647850   22121 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.key
	I0916 10:40:37.647872   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 10:40:37.647891   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 10:40:37.647911   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:40:37.647929   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:40:37.647946   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 10:40:37.647963   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 10:40:37.647981   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 10:40:37.647998   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 10:40:37.648062   22121 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem (1338 bytes)
	W0916 10:40:37.648100   22121 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203_empty.pem, impossibly tiny 0 bytes
	I0916 10:40:37.648112   22121 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:40:37.648144   22121 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem (1082 bytes)
	I0916 10:40:37.648175   22121 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:40:37.648204   22121 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem (1679 bytes)
	I0916 10:40:37.648262   22121 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem (1708 bytes)
	I0916 10:40:37.648302   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem -> /usr/share/ca-certificates/112032.pem
	I0916 10:40:37.648320   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:40:37.648380   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem -> /usr/share/ca-certificates/11203.pem
	I0916 10:40:37.648422   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:40:37.651389   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:40:37.651840   22121 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:40:37.651860   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:40:37.652040   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:40:37.652216   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:40:37.652315   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:40:37.652394   22121 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/id_rsa Username:docker}
	I0916 10:40:37.729506   22121 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0916 10:40:37.734982   22121 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0916 10:40:37.746820   22121 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0916 10:40:37.751379   22121 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0916 10:40:37.763059   22121 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0916 10:40:37.767743   22121 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0916 10:40:37.780679   22121 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0916 10:40:37.785070   22121 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0916 10:40:37.796662   22121 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0916 10:40:37.801157   22121 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0916 10:40:37.812496   22121 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0916 10:40:37.817564   22121 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0916 10:40:37.829016   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:40:37.857371   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:40:37.883089   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:40:37.908995   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:40:37.935029   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0916 10:40:37.960446   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 10:40:37.986136   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:40:38.012431   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 10:40:38.047057   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem --> /usr/share/ca-certificates/112032.pem (1708 bytes)
	I0916 10:40:38.075002   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:40:38.101902   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem --> /usr/share/ca-certificates/11203.pem (1338 bytes)
	I0916 10:40:38.129296   22121 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0916 10:40:38.148327   22121 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0916 10:40:38.165421   22121 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0916 10:40:38.182509   22121 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0916 10:40:38.200200   22121 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0916 10:40:38.216843   22121 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0916 10:40:38.233538   22121 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0916 10:40:38.250144   22121 ssh_runner.go:195] Run: openssl version
	I0916 10:40:38.256117   22121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112032.pem && ln -fs /usr/share/ca-certificates/112032.pem /etc/ssl/certs/112032.pem"
	I0916 10:40:38.267112   22121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112032.pem
	I0916 10:40:38.271742   22121 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112032.pem
	I0916 10:40:38.271789   22121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112032.pem
	I0916 10:40:38.277670   22121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112032.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 10:40:38.288768   22121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:40:38.299987   22121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:40:38.304531   22121 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:40:38.304588   22121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:40:38.310343   22121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 10:40:38.321868   22121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11203.pem && ln -fs /usr/share/ca-certificates/11203.pem /etc/ssl/certs/11203.pem"
	I0916 10:40:38.333013   22121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11203.pem
	I0916 10:40:38.337929   22121 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11203.pem
	I0916 10:40:38.337983   22121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11203.pem
	I0916 10:40:38.343812   22121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11203.pem /etc/ssl/certs/51391683.0"
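(Annotation: each CA copied into /usr/share/ca-certificates above also gets a hash-named symlink under /etc/ssl/certs (3ec20f2e.0, b5213941.0, 51391683.0); the name is the output of `openssl x509 -hash -noout`, which is how OpenSSL locates trusted CAs. A sketch of that step, shelling out to openssl; paths mirror the log and it would need root privileges to create the link.)

// ca_hash_link.go — illustrative hash-symlink step for one CA file.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" as seen in the log
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	if _, err := os.Lstat(link); os.IsNotExist(err) {
		if err := os.Symlink(cert, link); err != nil {
			panic(err)
		}
	}
	fmt.Println("linked", link, "->", cert)
}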
	I0916 10:40:38.354695   22121 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:40:38.358776   22121 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 10:40:38.358821   22121 kubeadm.go:934] updating node {m03 192.168.39.127 8443 v1.31.1 crio true true} ...
	I0916 10:40:38.358893   22121 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-244475-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.127
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-244475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
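(Annotation: the kubeadm drop-in above overrides ExecStart so the joining node's kubelet gets its own --hostname-override and --node-ip. A text/template sketch rendering that line; the field names here are placeholders for illustration, not minikube's actual template variables.)

// kubelet_dropin.go — render the ExecStart override shown in the log above.
package main

import (
	"os"
	"text/template"
)

const dropin = `[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}
`

func main() {
	t := template.Must(template.New("dropin").Parse(dropin))
	_ = t.Execute(os.Stdout, struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.31.1", "ha-244475-m03", "192.168.39.127"})
}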
	I0916 10:40:38.358916   22121 kube-vip.go:115] generating kube-vip config ...
	I0916 10:40:38.358947   22121 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0916 10:40:38.376976   22121 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0916 10:40:38.377036   22121 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
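(Annotation: the generated kube-vip static pod above advertises the VIP 192.168.39.254 on eth0 via ARP, runs leader election in kube-system, and load-balances the control plane on port 8443. A small sketch, using gopkg.in/yaml.v3 as an assumed helper rather than anything minikube does, that parses the manifest and prints its env settings as a quick sanity check of the templated values.)

// kubevip_env.go — list the env settings of the kube-vip manifest above.
package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

type manifest struct {
	Spec struct {
		Containers []struct {
			Name string `yaml:"name"`
			Env  []struct {
				Name  string `yaml:"name"`
				Value string `yaml:"value"`
			} `yaml:"env"`
		} `yaml:"containers"`
	} `yaml:"spec"`
}

func main() {
	data, err := os.ReadFile("/etc/kubernetes/manifests/kube-vip.yaml")
	if err != nil {
		panic(err)
	}
	var m manifest
	if err := yaml.Unmarshal(data, &m); err != nil {
		panic(err)
	}
	for _, c := range m.Spec.Containers {
		for _, e := range c.Env {
			fmt.Printf("%s=%s\n", e.Name, e.Value) // e.g. address=192.168.39.254
		}
	}
}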
	I0916 10:40:38.377091   22121 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:40:38.386658   22121 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0916 10:40:38.386709   22121 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0916 10:40:38.397169   22121 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0916 10:40:38.397180   22121 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0916 10:40:38.397205   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0916 10:40:38.397221   22121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:40:38.397225   22121 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0916 10:40:38.397245   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0916 10:40:38.397272   22121 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0916 10:40:38.397322   22121 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0916 10:40:38.414712   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0916 10:40:38.414816   22121 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0916 10:40:38.414828   22121 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0916 10:40:38.414843   22121 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0916 10:40:38.414851   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0916 10:40:38.414867   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0916 10:40:38.425835   22121 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0916 10:40:38.425882   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
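(Annotation: the binaries above are fetched from dl.k8s.io with a `checksum=file:<url>.sha256` companion. A sketch of that pattern: download a release binary, compare its SHA-256 against the published digest, and only then write it to disk. Error handling is minimal and the URL is taken from the log.)

// fetch_verify.go — download-and-verify pattern seen in the binary.go lines above.
package main

import (
	"crypto/sha256"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl"
	bin, err := fetch(base)
	if err != nil {
		panic(err)
	}
	sum, err := fetch(base + ".sha256")
	if err != nil {
		panic(err)
	}
	want := strings.Fields(string(sum))[0]
	got := fmt.Sprintf("%x", sha256.Sum256(bin))
	if got != want {
		fmt.Fprintf(os.Stderr, "checksum mismatch: got %s want %s\n", got, want)
		os.Exit(1)
	}
	if err := os.WriteFile("kubectl", bin, 0o755); err != nil {
		panic(err)
	}
	fmt.Println("kubectl verified and written")
}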
	I0916 10:40:39.292544   22121 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0916 10:40:39.302520   22121 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0916 10:40:39.321739   22121 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:40:39.339714   22121 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0916 10:40:39.356647   22121 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0916 10:40:39.360860   22121 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:40:39.373051   22121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:40:39.503177   22121 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:40:39.521517   22121 host.go:66] Checking if "ha-244475" exists ...
	I0916 10:40:39.521933   22121 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:40:39.521999   22121 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:40:39.539241   22121 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43423
	I0916 10:40:39.539779   22121 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:40:39.540277   22121 main.go:141] libmachine: Using API Version  1
	I0916 10:40:39.540296   22121 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:40:39.540592   22121 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:40:39.540793   22121 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:40:39.540980   22121 start.go:317] joinCluster: &{Name:ha-244475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-244475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}

	I0916 10:40:39.541103   22121 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0916 10:40:39.541140   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:40:39.544084   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:40:39.544467   22121 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:40:39.544489   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:40:39.544609   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:40:39.544797   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:40:39.544947   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:40:39.545069   22121 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/id_rsa Username:docker}
	I0916 10:40:39.712936   22121 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 10:40:39.712986   22121 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 4c794a.yzkn6fbxc862odl2 --discovery-token-ca-cert-hash sha256:18e2e9ba1c06caae6264fad234e64aee68efc10986a3ab0ed9e448768962daf7 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-244475-m03 --control-plane --apiserver-advertise-address=192.168.39.127 --apiserver-bind-port=8443"
	I0916 10:41:02.405074   22121 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 4c794a.yzkn6fbxc862odl2 --discovery-token-ca-cert-hash sha256:18e2e9ba1c06caae6264fad234e64aee68efc10986a3ab0ed9e448768962daf7 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-244475-m03 --control-plane --apiserver-advertise-address=192.168.39.127 --apiserver-bind-port=8443": (22.692059229s)
	I0916 10:41:02.405117   22121 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0916 10:41:02.989273   22121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-244475-m03 minikube.k8s.io/updated_at=2024_09_16T10_41_02_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=ha-244475 minikube.k8s.io/primary=false
	I0916 10:41:03.155780   22121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-244475-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0916 10:41:03.294611   22121 start.go:319] duration metric: took 23.75362709s to joinCluster
	I0916 10:41:03.294689   22121 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 10:41:03.295014   22121 config.go:182] Loaded profile config "ha-244475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:41:03.296058   22121 out.go:177] * Verifying Kubernetes components...
	I0916 10:41:03.297444   22121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:41:03.509480   22121 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:41:03.527697   22121 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3851/kubeconfig
	I0916 10:41:03.527973   22121 kapi.go:59] client config for ha-244475: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0916 10:41:03.528069   22121 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.19:8443
	I0916 10:41:03.528297   22121 node_ready.go:35] waiting up to 6m0s for node "ha-244475-m03" to be "Ready" ...
	I0916 10:41:03.528381   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:03.528392   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:03.528403   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:03.528409   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:03.535009   22121 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0916 10:41:04.028547   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:04.028568   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:04.028577   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:04.028590   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:04.032000   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:04.528593   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:04.528621   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:04.528632   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:04.528639   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:04.531853   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:05.028474   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:05.028495   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:05.028507   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:05.028510   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:05.031970   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:05.529004   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:05.529030   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:05.529040   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:05.529046   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:05.534346   22121 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0916 10:41:05.535149   22121 node_ready.go:53] node "ha-244475-m03" has status "Ready":"False"
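(Annotation: node_ready.go above polls GET /api/v1/nodes/ha-244475-m03 roughly every 500ms for up to 6 minutes, reporting "Ready":"False" until the kubelet and CNI come up; note the client host was overridden from the stale VIP to the first control plane's endpoint earlier in the log. A sketch of the same wait using client-go rather than minikube's round_trippers logging; the kubeconfig path is taken from the log.)

// node_ready.go — poll the node's Ready condition with client-go.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the NodeReady condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19651-3851/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	for {
		n, err := cs.CoreV1().Nodes().Get(ctx, "ha-244475-m03", metav1.GetOptions{})
		if err == nil && nodeReady(n) {
			fmt.Println("node ha-244475-m03 is Ready")
			return
		}
		select {
		case <-ctx.Done():
			panic("timed out waiting for node to become Ready")
		case <-time.After(500 * time.Millisecond):
		}
	}
}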
	I0916 10:41:06.028524   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:06.028552   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:06.028563   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:06.028568   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:06.031926   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:06.529358   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:06.529383   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:06.529396   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:06.529402   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:06.535725   22121 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0916 10:41:07.028522   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:07.028543   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:07.028551   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:07.028557   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:07.032906   22121 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:41:07.529385   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:07.529413   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:07.529425   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:07.529431   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:07.535794   22121 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0916 10:41:07.536408   22121 node_ready.go:53] node "ha-244475-m03" has status "Ready":"False"
	I0916 10:41:08.029514   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:08.029549   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:08.029561   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:08.029567   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:08.032852   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:08.528497   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:08.528520   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:08.528529   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:08.528535   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:08.532921   22121 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:41:09.028942   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:09.028962   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:09.028969   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:09.028972   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:09.032474   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:09.528551   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:09.528576   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:09.528586   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:09.528591   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:09.532995   22121 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:41:10.028544   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:10.028577   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:10.028584   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:10.028588   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:10.032079   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:10.032575   22121 node_ready.go:53] node "ha-244475-m03" has status "Ready":"False"
	I0916 10:41:10.528902   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:10.528926   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:10.528934   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:10.528938   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:10.535638   22121 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0916 10:41:11.028651   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:11.028672   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:11.028679   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:11.028682   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:11.032105   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:11.529486   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:11.529515   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:11.529526   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:11.529531   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:11.535563   22121 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0916 10:41:12.029412   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:12.029432   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:12.029440   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:12.029444   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:12.033149   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:12.033738   22121 node_ready.go:53] node "ha-244475-m03" has status "Ready":"False"
	I0916 10:41:12.528711   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:12.528733   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:12.528742   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:12.528746   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:12.534586   22121 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0916 10:41:13.029512   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:13.029536   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:13.029547   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:13.029553   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:13.033681   22121 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:41:13.529522   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:13.529548   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:13.529559   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:13.529566   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:13.533930   22121 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:41:14.029172   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:14.029194   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:14.029202   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:14.029206   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:14.032272   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:14.529072   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:14.529094   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:14.529102   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:14.529107   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:14.535318   22121 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0916 10:41:14.535890   22121 node_ready.go:53] node "ha-244475-m03" has status "Ready":"False"
	I0916 10:41:15.029077   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:15.029101   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:15.029113   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:15.029122   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:15.032652   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:15.528843   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:15.528869   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:15.528876   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:15.528883   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:15.533117   22121 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:41:16.028968   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:16.028990   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:16.028998   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:16.029002   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:16.032289   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:16.528776   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:16.528800   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:16.528812   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:16.528820   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:16.532317   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:17.029247   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:17.029273   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:17.029283   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:17.029289   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:17.032437   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:17.032978   22121 node_ready.go:53] node "ha-244475-m03" has status "Ready":"False"
	I0916 10:41:17.528914   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:17.528940   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:17.528951   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:17.528957   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:17.535109   22121 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0916 10:41:18.028865   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:18.028886   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:18.028894   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:18.028897   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:18.032181   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:18.529133   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:18.529160   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:18.529172   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:18.529177   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:18.532540   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:19.028551   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:19.028571   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:19.028579   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:19.028584   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:19.031968   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:19.529456   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:19.529479   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:19.529487   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:19.529492   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:19.535044   22121 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0916 10:41:19.535889   22121 node_ready.go:53] node "ha-244475-m03" has status "Ready":"False"
	I0916 10:41:20.029083   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:20.029103   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:20.029111   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:20.029114   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:20.032351   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:20.529324   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:20.529353   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:20.529370   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:20.529376   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:20.532351   22121 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:41:20.532942   22121 node_ready.go:49] node "ha-244475-m03" has status "Ready":"True"
	I0916 10:41:20.532967   22121 node_ready.go:38] duration metric: took 17.004653976s for node "ha-244475-m03" to be "Ready" ...
	I0916 10:41:20.532978   22121 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:41:20.533057   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I0916 10:41:20.533074   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:20.533084   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:20.533092   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:20.541611   22121 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0916 10:41:20.549215   22121 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-lzrg2" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:20.549300   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-lzrg2
	I0916 10:41:20.549309   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:20.549316   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:20.549321   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:20.551990   22121 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:41:20.552792   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475
	I0916 10:41:20.552807   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:20.552814   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:20.552819   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:20.555246   22121 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:41:20.556034   22121 pod_ready.go:93] pod "coredns-7c65d6cfc9-lzrg2" in "kube-system" namespace has status "Ready":"True"
	I0916 10:41:20.556051   22121 pod_ready.go:82] duration metric: took 6.810223ms for pod "coredns-7c65d6cfc9-lzrg2" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:20.556059   22121 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-m8fd7" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:20.556109   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-m8fd7
	I0916 10:41:20.556118   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:20.556124   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:20.556129   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:20.558530   22121 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:41:20.559188   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475
	I0916 10:41:20.559202   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:20.559209   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:20.559212   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:20.561354   22121 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:41:20.561890   22121 pod_ready.go:93] pod "coredns-7c65d6cfc9-m8fd7" in "kube-system" namespace has status "Ready":"True"
	I0916 10:41:20.561910   22121 pod_ready.go:82] duration metric: took 5.84501ms for pod "coredns-7c65d6cfc9-m8fd7" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:20.561921   22121 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-244475" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:20.561982   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/etcd-ha-244475
	I0916 10:41:20.561993   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:20.561999   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:20.562003   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:20.564349   22121 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:41:20.565030   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475
	I0916 10:41:20.565042   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:20.565047   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:20.565051   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:20.567656   22121 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:41:20.568101   22121 pod_ready.go:93] pod "etcd-ha-244475" in "kube-system" namespace has status "Ready":"True"
	I0916 10:41:20.568115   22121 pod_ready.go:82] duration metric: took 6.18818ms for pod "etcd-ha-244475" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:20.568126   22121 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-244475-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:20.568174   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/etcd-ha-244475-m02
	I0916 10:41:20.568183   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:20.568191   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:20.568196   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:20.571051   22121 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:41:20.572108   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:41:20.572122   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:20.572131   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:20.572136   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:20.574514   22121 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:41:20.574938   22121 pod_ready.go:93] pod "etcd-ha-244475-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:41:20.574958   22121 pod_ready.go:82] duration metric: took 6.825238ms for pod "etcd-ha-244475-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:20.574968   22121 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-244475-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:20.730339   22121 request.go:632] Waited for 155.28324ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/etcd-ha-244475-m03
	I0916 10:41:20.730409   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/etcd-ha-244475-m03
	I0916 10:41:20.730416   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:20.730426   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:20.730434   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:20.733792   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:20.929868   22121 request.go:632] Waited for 195.353662ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:20.929934   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:20.929941   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:20.929951   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:20.929956   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:20.933157   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:20.933861   22121 pod_ready.go:93] pod "etcd-ha-244475-m03" in "kube-system" namespace has status "Ready":"True"
	I0916 10:41:20.933879   22121 pod_ready.go:82] duration metric: took 358.903224ms for pod "etcd-ha-244475-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:20.933899   22121 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-244475" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:21.130218   22121 request.go:632] Waited for 196.250965ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-244475
	I0916 10:41:21.130279   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-244475
	I0916 10:41:21.130287   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:21.130297   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:21.130307   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:21.133197   22121 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:41:21.330203   22121 request.go:632] Waited for 196.304187ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-244475
	I0916 10:41:21.330250   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475
	I0916 10:41:21.330254   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:21.330262   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:21.330265   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:21.333309   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:21.333928   22121 pod_ready.go:93] pod "kube-apiserver-ha-244475" in "kube-system" namespace has status "Ready":"True"
	I0916 10:41:21.333946   22121 pod_ready.go:82] duration metric: took 400.041237ms for pod "kube-apiserver-ha-244475" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:21.333957   22121 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-244475-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:21.530002   22121 request.go:632] Waited for 195.934393ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-244475-m02
	I0916 10:41:21.530071   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-244475-m02
	I0916 10:41:21.530079   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:21.530089   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:21.530097   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:21.540600   22121 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0916 10:41:21.729634   22121 request.go:632] Waited for 188.35156ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:41:21.729700   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:41:21.729712   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:21.729727   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:21.729736   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:21.733214   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:21.733789   22121 pod_ready.go:93] pod "kube-apiserver-ha-244475-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:41:21.733804   22121 pod_ready.go:82] duration metric: took 399.837781ms for pod "kube-apiserver-ha-244475-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:21.733813   22121 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-244475-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:21.930001   22121 request.go:632] Waited for 196.125954ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-244475-m03
	I0916 10:41:21.930071   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-244475-m03
	I0916 10:41:21.930080   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:21.930088   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:21.930093   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:21.933477   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:22.129642   22121 request.go:632] Waited for 195.348961ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:22.129729   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:22.129740   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:22.129750   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:22.129758   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:22.137037   22121 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0916 10:41:22.137643   22121 pod_ready.go:93] pod "kube-apiserver-ha-244475-m03" in "kube-system" namespace has status "Ready":"True"
	I0916 10:41:22.137664   22121 pod_ready.go:82] duration metric: took 403.843897ms for pod "kube-apiserver-ha-244475-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:22.137678   22121 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-244475" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:22.329532   22121 request.go:632] Waited for 191.776666ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-244475
	I0916 10:41:22.329621   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-244475
	I0916 10:41:22.329633   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:22.329640   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:22.329645   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:22.333345   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:22.530006   22121 request.go:632] Waited for 195.956457ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-244475
	I0916 10:41:22.530079   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475
	I0916 10:41:22.530085   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:22.530093   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:22.530101   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:22.533113   22121 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:41:22.533700   22121 pod_ready.go:93] pod "kube-controller-manager-ha-244475" in "kube-system" namespace has status "Ready":"True"
	I0916 10:41:22.533718   22121 pod_ready.go:82] duration metric: took 396.032752ms for pod "kube-controller-manager-ha-244475" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:22.533728   22121 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-244475-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:22.729791   22121 request.go:632] Waited for 195.998005ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-244475-m02
	I0916 10:41:22.729857   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-244475-m02
	I0916 10:41:22.729864   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:22.729874   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:22.729910   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:22.734399   22121 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:41:22.929502   22121 request.go:632] Waited for 194.264694ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:41:22.929574   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:41:22.929582   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:22.929591   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:22.929595   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:22.932871   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:22.934055   22121 pod_ready.go:93] pod "kube-controller-manager-ha-244475-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:41:22.934073   22121 pod_ready.go:82] duration metric: took 400.337784ms for pod "kube-controller-manager-ha-244475-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:22.934082   22121 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-244475-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:23.130261   22121 request.go:632] Waited for 196.120217ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-244475-m03
	I0916 10:41:23.130357   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-244475-m03
	I0916 10:41:23.130367   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:23.130375   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:23.130380   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:23.134472   22121 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:41:23.329661   22121 request.go:632] Waited for 194.357343ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:23.329723   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:23.329733   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:23.329747   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:23.329754   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:23.333236   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:23.333984   22121 pod_ready.go:93] pod "kube-controller-manager-ha-244475-m03" in "kube-system" namespace has status "Ready":"True"
	I0916 10:41:23.334009   22121 pod_ready.go:82] duration metric: took 399.919835ms for pod "kube-controller-manager-ha-244475-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:23.334026   22121 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-crttt" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:23.530101   22121 request.go:632] Waited for 195.996765ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-crttt
	I0916 10:41:23.530191   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-crttt
	I0916 10:41:23.530198   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:23.530208   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:23.530219   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:23.535501   22121 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0916 10:41:23.729541   22121 request.go:632] Waited for 193.385559ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-244475
	I0916 10:41:23.729601   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475
	I0916 10:41:23.729606   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:23.729613   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:23.729627   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:23.733179   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:23.733969   22121 pod_ready.go:93] pod "kube-proxy-crttt" in "kube-system" namespace has status "Ready":"True"
	I0916 10:41:23.733986   22121 pod_ready.go:82] duration metric: took 399.951283ms for pod "kube-proxy-crttt" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:23.733995   22121 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-g5v5l" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:23.929754   22121 request.go:632] Waited for 195.67228ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g5v5l
	I0916 10:41:23.929814   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g5v5l
	I0916 10:41:23.929819   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:23.929826   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:23.929831   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:23.933527   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:24.129706   22121 request.go:632] Waited for 195.381059ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:24.129770   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:24.129776   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:24.129786   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:24.129794   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:24.133530   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:24.134153   22121 pod_ready.go:93] pod "kube-proxy-g5v5l" in "kube-system" namespace has status "Ready":"True"
	I0916 10:41:24.134171   22121 pod_ready.go:82] duration metric: took 400.17004ms for pod "kube-proxy-g5v5l" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:24.134180   22121 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-t454b" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:24.330300   22121 request.go:632] Waited for 196.037638ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-t454b
	I0916 10:41:24.330367   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-t454b
	I0916 10:41:24.330373   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:24.330384   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:24.330391   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:24.334038   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:24.530069   22121 request.go:632] Waited for 195.337849ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:41:24.530145   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:41:24.530153   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:24.530160   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:24.530165   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:24.536414   22121 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0916 10:41:24.536846   22121 pod_ready.go:93] pod "kube-proxy-t454b" in "kube-system" namespace has status "Ready":"True"
	I0916 10:41:24.536864   22121 pod_ready.go:82] duration metric: took 402.676992ms for pod "kube-proxy-t454b" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:24.536876   22121 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-244475" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:24.730273   22121 request.go:632] Waited for 193.335182ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-244475
	I0916 10:41:24.730344   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-244475
	I0916 10:41:24.730349   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:24.730357   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:24.730365   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:24.733832   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:24.930161   22121 request.go:632] Waited for 195.330427ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-244475
	I0916 10:41:24.930225   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475
	I0916 10:41:24.930241   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:24.930250   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:24.930259   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:24.933553   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:24.934318   22121 pod_ready.go:93] pod "kube-scheduler-ha-244475" in "kube-system" namespace has status "Ready":"True"
	I0916 10:41:24.934335   22121 pod_ready.go:82] duration metric: took 397.451613ms for pod "kube-scheduler-ha-244475" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:24.934344   22121 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-244475-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:25.129510   22121 request.go:632] Waited for 195.10302ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-244475-m02
	I0916 10:41:25.129579   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-244475-m02
	I0916 10:41:25.129587   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:25.129595   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:25.129600   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:25.133734   22121 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:41:25.329835   22121 request.go:632] Waited for 195.396951ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:41:25.329904   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:41:25.329912   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:25.329922   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:25.329928   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:25.333482   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:25.334323   22121 pod_ready.go:93] pod "kube-scheduler-ha-244475-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:41:25.334342   22121 pod_ready.go:82] duration metric: took 399.990647ms for pod "kube-scheduler-ha-244475-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:25.334355   22121 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-244475-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:25.529377   22121 request.go:632] Waited for 194.946933ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-244475-m03
	I0916 10:41:25.529470   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-244475-m03
	I0916 10:41:25.529482   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:25.529493   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:25.529501   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:25.534845   22121 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0916 10:41:25.729925   22121 request.go:632] Waited for 194.359506ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:25.729987   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:25.729993   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:25.730000   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:25.730005   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:25.733288   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:25.734036   22121 pod_ready.go:93] pod "kube-scheduler-ha-244475-m03" in "kube-system" namespace has status "Ready":"True"
	I0916 10:41:25.734056   22121 pod_ready.go:82] duration metric: took 399.693479ms for pod "kube-scheduler-ha-244475-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:25.734069   22121 pod_ready.go:39] duration metric: took 5.201079342s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:41:25.734086   22121 api_server.go:52] waiting for apiserver process to appear ...
	I0916 10:41:25.734140   22121 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:41:25.749396   22121 api_server.go:72] duration metric: took 22.454672004s to wait for apiserver process to appear ...
	I0916 10:41:25.749425   22121 api_server.go:88] waiting for apiserver healthz status ...
	I0916 10:41:25.749447   22121 api_server.go:253] Checking apiserver healthz at https://192.168.39.19:8443/healthz ...
	I0916 10:41:25.753676   22121 api_server.go:279] https://192.168.39.19:8443/healthz returned 200:
	ok
	I0916 10:41:25.753738   22121 round_trippers.go:463] GET https://192.168.39.19:8443/version
	I0916 10:41:25.753749   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:25.753760   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:25.753769   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:25.755474   22121 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:41:25.755537   22121 api_server.go:141] control plane version: v1.31.1
	I0916 10:41:25.755552   22121 api_server.go:131] duration metric: took 6.119804ms to wait for apiserver health ...
	I0916 10:41:25.755561   22121 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 10:41:25.929957   22121 request.go:632] Waited for 174.326859ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I0916 10:41:25.930008   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I0916 10:41:25.930013   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:25.930020   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:25.930029   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:25.936785   22121 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0916 10:41:25.943643   22121 system_pods.go:59] 24 kube-system pods found
	I0916 10:41:25.943669   22121 system_pods.go:61] "coredns-7c65d6cfc9-lzrg2" [51962d07-f38a-4db3-86ee-af3d954dbec6] Running
	I0916 10:41:25.943674   22121 system_pods.go:61] "coredns-7c65d6cfc9-m8fd7" [fc549709-ddc0-4684-b377-46d33ef8f03d] Running
	I0916 10:41:25.943678   22121 system_pods.go:61] "etcd-ha-244475" [08595572-facf-419a-93e3-9b0ea1938f08] Running
	I0916 10:41:25.943682   22121 system_pods.go:61] "etcd-ha-244475-m02" [d58c0d1e-ef12-4e50-b4d8-86f60754b93d] Running
	I0916 10:41:25.943685   22121 system_pods.go:61] "etcd-ha-244475-m03" [e741d8c7-f12c-4fa1-b3cc-582043ca312d] Running
	I0916 10:41:25.943688   22121 system_pods.go:61] "kindnet-7v2cl" [764ade4d-cbcd-42b8-9d68-b4ed502de9eb] Running
	I0916 10:41:25.943691   22121 system_pods.go:61] "kindnet-rzwwj" [ffe109a7-d477-4b8a-ab62-4e4ceec1b4ed] Running
	I0916 10:41:25.943695   22121 system_pods.go:61] "kindnet-xvp82" [3140a3e7-ac3b-4882-b150-20a313e2f20c] Running
	I0916 10:41:25.943698   22121 system_pods.go:61] "kube-apiserver-ha-244475" [b0ea2226-42de-4488-b8fb-73a6828320fc] Running
	I0916 10:41:25.943701   22121 system_pods.go:61] "kube-apiserver-ha-244475-m02" [1e384f04-33c2-49f1-afc0-48807202a04c] Running
	I0916 10:41:25.943704   22121 system_pods.go:61] "kube-apiserver-ha-244475-m03" [469c5743-509f-4c1c-b46e-fa3e6e79a673] Running
	I0916 10:41:25.943707   22121 system_pods.go:61] "kube-controller-manager-ha-244475" [98883403-0a22-486c-aa3a-a3720a5cbfb7] Running
	I0916 10:41:25.943710   22121 system_pods.go:61] "kube-controller-manager-ha-244475-m02" [9e148533-4562-426b-9e8b-3aead772739b] Running
	I0916 10:41:25.943713   22121 system_pods.go:61] "kube-controller-manager-ha-244475-m03" [1054e7df-9598-41de-a412-f18d3ffff1cb] Running
	I0916 10:41:25.943716   22121 system_pods.go:61] "kube-proxy-crttt" [0c8cad04-2c64-42f9-85e2-5e4fbfe7961d] Running
	I0916 10:41:25.943719   22121 system_pods.go:61] "kube-proxy-g5v5l" [102f8d6f-4cb4-4c59-ae99-acccabb9fb8e] Running
	I0916 10:41:25.943723   22121 system_pods.go:61] "kube-proxy-t454b" [49b7dda6-9a09-4b7d-8adc-568f2fa10ad6] Running
	I0916 10:41:25.943726   22121 system_pods.go:61] "kube-scheduler-ha-244475" [c9527c08-f10b-4d85-9f72-0d0893297b14] Running
	I0916 10:41:25.943729   22121 system_pods.go:61] "kube-scheduler-ha-244475-m02" [bf332de1-6793-4485-9d93-38368d86c6a5] Running
	I0916 10:41:25.943731   22121 system_pods.go:61] "kube-scheduler-ha-244475-m03" [90b5bffb-165c-4620-b90a-e9f1d3f4c323] Running
	I0916 10:41:25.943734   22121 system_pods.go:61] "kube-vip-ha-244475" [94b4d383-a0e8-4686-b108-923c0235f371] Running
	I0916 10:41:25.943737   22121 system_pods.go:61] "kube-vip-ha-244475-m02" [6f0a6023-be76-458b-9344-ff51083a217e] Running
	I0916 10:41:25.943740   22121 system_pods.go:61] "kube-vip-ha-244475-m03" [b507cf83-f056-4ab3-b276-4f477ee77747] Running
	I0916 10:41:25.943743   22121 system_pods.go:61] "storage-provisioner" [2e1264f7-2197-4821-8238-82fac849b145] Running
	I0916 10:41:25.943748   22121 system_pods.go:74] duration metric: took 188.180661ms to wait for pod list to return data ...
	I0916 10:41:25.943758   22121 default_sa.go:34] waiting for default service account to be created ...
	I0916 10:41:26.130184   22121 request.go:632] Waited for 186.361022ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/default/serviceaccounts
	I0916 10:41:26.130240   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/default/serviceaccounts
	I0916 10:41:26.130247   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:26.130256   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:26.130263   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:26.136218   22121 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0916 10:41:26.136355   22121 default_sa.go:45] found service account: "default"
	I0916 10:41:26.136373   22121 default_sa.go:55] duration metric: took 192.608031ms for default service account to be created ...
	I0916 10:41:26.136384   22121 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 10:41:26.329960   22121 request.go:632] Waited for 193.503475ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I0916 10:41:26.330035   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I0916 10:41:26.330046   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:26.330056   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:26.330062   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:26.336265   22121 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0916 10:41:26.343431   22121 system_pods.go:86] 24 kube-system pods found
	I0916 10:41:26.343459   22121 system_pods.go:89] "coredns-7c65d6cfc9-lzrg2" [51962d07-f38a-4db3-86ee-af3d954dbec6] Running
	I0916 10:41:26.343464   22121 system_pods.go:89] "coredns-7c65d6cfc9-m8fd7" [fc549709-ddc0-4684-b377-46d33ef8f03d] Running
	I0916 10:41:26.343468   22121 system_pods.go:89] "etcd-ha-244475" [08595572-facf-419a-93e3-9b0ea1938f08] Running
	I0916 10:41:26.343471   22121 system_pods.go:89] "etcd-ha-244475-m02" [d58c0d1e-ef12-4e50-b4d8-86f60754b93d] Running
	I0916 10:41:26.343474   22121 system_pods.go:89] "etcd-ha-244475-m03" [e741d8c7-f12c-4fa1-b3cc-582043ca312d] Running
	I0916 10:41:26.343477   22121 system_pods.go:89] "kindnet-7v2cl" [764ade4d-cbcd-42b8-9d68-b4ed502de9eb] Running
	I0916 10:41:26.343481   22121 system_pods.go:89] "kindnet-rzwwj" [ffe109a7-d477-4b8a-ab62-4e4ceec1b4ed] Running
	I0916 10:41:26.343485   22121 system_pods.go:89] "kindnet-xvp82" [3140a3e7-ac3b-4882-b150-20a313e2f20c] Running
	I0916 10:41:26.343490   22121 system_pods.go:89] "kube-apiserver-ha-244475" [b0ea2226-42de-4488-b8fb-73a6828320fc] Running
	I0916 10:41:26.343495   22121 system_pods.go:89] "kube-apiserver-ha-244475-m02" [1e384f04-33c2-49f1-afc0-48807202a04c] Running
	I0916 10:41:26.343501   22121 system_pods.go:89] "kube-apiserver-ha-244475-m03" [469c5743-509f-4c1c-b46e-fa3e6e79a673] Running
	I0916 10:41:26.343509   22121 system_pods.go:89] "kube-controller-manager-ha-244475" [98883403-0a22-486c-aa3a-a3720a5cbfb7] Running
	I0916 10:41:26.343515   22121 system_pods.go:89] "kube-controller-manager-ha-244475-m02" [9e148533-4562-426b-9e8b-3aead772739b] Running
	I0916 10:41:26.343524   22121 system_pods.go:89] "kube-controller-manager-ha-244475-m03" [1054e7df-9598-41de-a412-f18d3ffff1cb] Running
	I0916 10:41:26.343530   22121 system_pods.go:89] "kube-proxy-crttt" [0c8cad04-2c64-42f9-85e2-5e4fbfe7961d] Running
	I0916 10:41:26.343536   22121 system_pods.go:89] "kube-proxy-g5v5l" [102f8d6f-4cb4-4c59-ae99-acccabb9fb8e] Running
	I0916 10:41:26.343548   22121 system_pods.go:89] "kube-proxy-t454b" [49b7dda6-9a09-4b7d-8adc-568f2fa10ad6] Running
	I0916 10:41:26.343554   22121 system_pods.go:89] "kube-scheduler-ha-244475" [c9527c08-f10b-4d85-9f72-0d0893297b14] Running
	I0916 10:41:26.343558   22121 system_pods.go:89] "kube-scheduler-ha-244475-m02" [bf332de1-6793-4485-9d93-38368d86c6a5] Running
	I0916 10:41:26.343563   22121 system_pods.go:89] "kube-scheduler-ha-244475-m03" [90b5bffb-165c-4620-b90a-e9f1d3f4c323] Running
	I0916 10:41:26.343567   22121 system_pods.go:89] "kube-vip-ha-244475" [94b4d383-a0e8-4686-b108-923c0235f371] Running
	I0916 10:41:26.343570   22121 system_pods.go:89] "kube-vip-ha-244475-m02" [6f0a6023-be76-458b-9344-ff51083a217e] Running
	I0916 10:41:26.343573   22121 system_pods.go:89] "kube-vip-ha-244475-m03" [b507cf83-f056-4ab3-b276-4f477ee77747] Running
	I0916 10:41:26.343578   22121 system_pods.go:89] "storage-provisioner" [2e1264f7-2197-4821-8238-82fac849b145] Running
	I0916 10:41:26.343589   22121 system_pods.go:126] duration metric: took 207.195971ms to wait for k8s-apps to be running ...
	I0916 10:41:26.343599   22121 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 10:41:26.343650   22121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:41:26.359495   22121 system_svc.go:56] duration metric: took 15.88709ms WaitForService to wait for kubelet
	I0916 10:41:26.359526   22121 kubeadm.go:582] duration metric: took 23.064804714s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:41:26.359547   22121 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:41:26.529951   22121 request.go:632] Waited for 170.330403ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes
	I0916 10:41:26.530026   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes
	I0916 10:41:26.530033   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:26.530043   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:26.530050   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:26.536030   22121 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0916 10:41:26.537495   22121 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0916 10:41:26.537520   22121 node_conditions.go:123] node cpu capacity is 2
	I0916 10:41:26.537534   22121 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0916 10:41:26.537539   22121 node_conditions.go:123] node cpu capacity is 2
	I0916 10:41:26.537545   22121 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0916 10:41:26.537549   22121 node_conditions.go:123] node cpu capacity is 2
	I0916 10:41:26.537554   22121 node_conditions.go:105] duration metric: took 178.001679ms to run NodePressure ...
	I0916 10:41:26.537572   22121 start.go:241] waiting for startup goroutines ...
	I0916 10:41:26.537599   22121 start.go:255] writing updated cluster config ...
	I0916 10:41:26.538305   22121 ssh_runner.go:195] Run: rm -f paused
	I0916 10:41:26.548959   22121 out.go:177] * Done! kubectl is now configured to use "ha-244475" cluster and "default" namespace by default
	E0916 10:41:26.550066   22121 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
	
	
	==> CRI-O <==
	Sep 16 10:45:01 ha-244475 crio[667]: time="2024-09-16 10:45:01.844705994Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483501844469947,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8aaa4779-8f2f-416d-97c5-b79df03345d2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:45:01 ha-244475 crio[667]: time="2024-09-16 10:45:01.845714504Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=67c6f167-819a-4cc5-b481-cf496ae6fb7f name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:45:01 ha-244475 crio[667]: time="2024-09-16 10:45:01.845994344Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=67c6f167-819a-4cc5-b481-cf496ae6fb7f name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:45:01 ha-244475 crio[667]: time="2024-09-16 10:45:01.846451721Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5c701fcd74abafe9b3ac301067c34d7098459eff6a0dfbe35f14fb698beab51b,PodSandboxId:ed1838f7506b4591d4ce8db6e7b3e3f56562a08f00651212eea0bb2549e4392c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726483289055277109,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-d4m5s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c479ead-4e77-41ca-9e2e-5cd7dc781761,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:034030626ec0248410f35cc75f1a050ff672de87669017788a692eecfb974eb3,PodSandboxId:159730a21bea66d0374bcb65726757074bf5cb44a554b65743cbe055ce68f57d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726483151504105266,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m8fd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc549709-ddc0-4684-b377-46d33ef8f03d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f78c5e4a3a25c53eb3051665aed0aed847ccf5bc052d51eb080f44bbb956465,PodSandboxId:4d8c4f0a29bb7d9b84f11c3329413ddb9100b0afcf2bfdafe7d492249e8d29aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726483151498442305,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lzrg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
51962d07-f38a-4db3-86ee-af3d954dbec6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b16f64da09faea0d2ff3154541845e96ee1e6da2b018e77e618dcf0a3d246a99,PodSandboxId:66086953ec65ff443b277a25da98697cdab5664f13ce0f035b2961dd540a8f99,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726483149914383595,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e1264f7-2197-4821-8238-82fac849b145,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac63170bf5bb3e138ee2902ee9d3245bf86aad0f7cd971de1989c1c8d4b08913,PodSandboxId:9c8ab7a98f7497eec011967ba3c30c63e7f332a19b817633e32195e3f1c7958b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17264831
38080656744,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7v2cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 764ade4d-cbcd-42b8-9d68-b4ed502de9eb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e6d69b26d5c1088c047248a212bd53f5e13828d4e0a7c2156e0ddc88ac76ccf,PodSandboxId:3fbb7c8e9af7172c733fa210b0085f7297ed8632dadffa7ad36fcf99012b9a72,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726483137842379282,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-crttt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c8cad04-2c64-42f9-85e2-5e4fbfe7961d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62c031e0ed0a9dd545dae65ca505f6ee4aa741ac7ab305ffd396ddb0a1faa045,PodSandboxId:f76913fe7302a4fa8d7619af601b5246c7ab7fd3482731bf5f2128c885274602,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726483128784978351,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dcb42d1621bd2afde7f39a79dd541d5,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0223669288e248ff9be2473e59f09af3156f136f2627463151e0bc52191ddfb,PodSandboxId:42a76bc40dc3e2b105fd3eb70534ffb1703abed4e811e57f667c791cb8af94ce,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726483126505887348,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caad457f3675fcf5fa9c2e121ebd3a2a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13162d4bf94f734f5e68305ac96200c2f81bebc9b5ffbbfb3b0862980bc16fa1,PodSandboxId:ec0d4cf0dd9b785181c7ac24b3174a788202f97398df008bd80c06f6e612c16f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726483126417390372,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubern
etes.pod.name: kube-apiserver-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcc439ebdfb1c8eb0ac4d211479d24ca,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:308650af833f609d658eacf1b2b1c8baa29f17b904e878b9f27e62fe4fb823d3,PodSandboxId:693cfec22177da2ee98748363dd5cc765290555de3c0bcc51d452517bd92abaf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726483126350971239,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-244475,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 520edd0e46592c17928a302783a221a2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f16e87fb57b2b25fed520499a45d5fd8f1e01c96a747662b637d0b1cc2e56113,PodSandboxId:fad8ac85cdf54bd87da40cadbda9fd41ab84e1550361b91b5242a7ba9f4ba28b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726483126307755222,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-244475,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0485b752bb66b84c639fb8d5b648be4a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=67c6f167-819a-4cc5-b481-cf496ae6fb7f name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:45:01 ha-244475 crio[667]: time="2024-09-16 10:45:01.885756935Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=897f1ccc-51d8-4b43-a8dd-9002c0e35cbb name=/runtime.v1.RuntimeService/Version
	Sep 16 10:45:01 ha-244475 crio[667]: time="2024-09-16 10:45:01.885863143Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=897f1ccc-51d8-4b43-a8dd-9002c0e35cbb name=/runtime.v1.RuntimeService/Version
	Sep 16 10:45:01 ha-244475 crio[667]: time="2024-09-16 10:45:01.886789702Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8aa4bdd5-38c1-46de-8fb0-9c129bdbe8b0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:45:01 ha-244475 crio[667]: time="2024-09-16 10:45:01.887254566Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483501887232009,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8aa4bdd5-38c1-46de-8fb0-9c129bdbe8b0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:45:01 ha-244475 crio[667]: time="2024-09-16 10:45:01.887785965Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e5025a0b-e9f0-4d39-a203-79a16589b4f0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:45:01 ha-244475 crio[667]: time="2024-09-16 10:45:01.887863301Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e5025a0b-e9f0-4d39-a203-79a16589b4f0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:45:01 ha-244475 crio[667]: time="2024-09-16 10:45:01.888136556Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5c701fcd74abafe9b3ac301067c34d7098459eff6a0dfbe35f14fb698beab51b,PodSandboxId:ed1838f7506b4591d4ce8db6e7b3e3f56562a08f00651212eea0bb2549e4392c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726483289055277109,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-d4m5s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c479ead-4e77-41ca-9e2e-5cd7dc781761,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:034030626ec0248410f35cc75f1a050ff672de87669017788a692eecfb974eb3,PodSandboxId:159730a21bea66d0374bcb65726757074bf5cb44a554b65743cbe055ce68f57d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726483151504105266,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m8fd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc549709-ddc0-4684-b377-46d33ef8f03d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f78c5e4a3a25c53eb3051665aed0aed847ccf5bc052d51eb080f44bbb956465,PodSandboxId:4d8c4f0a29bb7d9b84f11c3329413ddb9100b0afcf2bfdafe7d492249e8d29aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726483151498442305,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lzrg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
51962d07-f38a-4db3-86ee-af3d954dbec6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b16f64da09faea0d2ff3154541845e96ee1e6da2b018e77e618dcf0a3d246a99,PodSandboxId:66086953ec65ff443b277a25da98697cdab5664f13ce0f035b2961dd540a8f99,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726483149914383595,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e1264f7-2197-4821-8238-82fac849b145,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac63170bf5bb3e138ee2902ee9d3245bf86aad0f7cd971de1989c1c8d4b08913,PodSandboxId:9c8ab7a98f7497eec011967ba3c30c63e7f332a19b817633e32195e3f1c7958b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17264831
38080656744,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7v2cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 764ade4d-cbcd-42b8-9d68-b4ed502de9eb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e6d69b26d5c1088c047248a212bd53f5e13828d4e0a7c2156e0ddc88ac76ccf,PodSandboxId:3fbb7c8e9af7172c733fa210b0085f7297ed8632dadffa7ad36fcf99012b9a72,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726483137842379282,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-crttt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c8cad04-2c64-42f9-85e2-5e4fbfe7961d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62c031e0ed0a9dd545dae65ca505f6ee4aa741ac7ab305ffd396ddb0a1faa045,PodSandboxId:f76913fe7302a4fa8d7619af601b5246c7ab7fd3482731bf5f2128c885274602,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726483128784978351,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dcb42d1621bd2afde7f39a79dd541d5,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0223669288e248ff9be2473e59f09af3156f136f2627463151e0bc52191ddfb,PodSandboxId:42a76bc40dc3e2b105fd3eb70534ffb1703abed4e811e57f667c791cb8af94ce,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726483126505887348,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caad457f3675fcf5fa9c2e121ebd3a2a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13162d4bf94f734f5e68305ac96200c2f81bebc9b5ffbbfb3b0862980bc16fa1,PodSandboxId:ec0d4cf0dd9b785181c7ac24b3174a788202f97398df008bd80c06f6e612c16f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726483126417390372,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubern
etes.pod.name: kube-apiserver-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcc439ebdfb1c8eb0ac4d211479d24ca,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:308650af833f609d658eacf1b2b1c8baa29f17b904e878b9f27e62fe4fb823d3,PodSandboxId:693cfec22177da2ee98748363dd5cc765290555de3c0bcc51d452517bd92abaf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726483126350971239,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-244475,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 520edd0e46592c17928a302783a221a2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f16e87fb57b2b25fed520499a45d5fd8f1e01c96a747662b637d0b1cc2e56113,PodSandboxId:fad8ac85cdf54bd87da40cadbda9fd41ab84e1550361b91b5242a7ba9f4ba28b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726483126307755222,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-244475,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0485b752bb66b84c639fb8d5b648be4a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e5025a0b-e9f0-4d39-a203-79a16589b4f0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:45:01 ha-244475 crio[667]: time="2024-09-16 10:45:01.926445977Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=22e83e37-d14b-4285-b083-dc094676c081 name=/runtime.v1.RuntimeService/Version
	Sep 16 10:45:01 ha-244475 crio[667]: time="2024-09-16 10:45:01.926598805Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=22e83e37-d14b-4285-b083-dc094676c081 name=/runtime.v1.RuntimeService/Version
	Sep 16 10:45:01 ha-244475 crio[667]: time="2024-09-16 10:45:01.927970553Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5f0932be-d028-4023-91e8-842db571b761 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:45:01 ha-244475 crio[667]: time="2024-09-16 10:45:01.928388435Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483501928366178,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5f0932be-d028-4023-91e8-842db571b761 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:45:01 ha-244475 crio[667]: time="2024-09-16 10:45:01.928894525Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=27f65c18-2203-4c62-8b3f-5e233000aae3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:45:01 ha-244475 crio[667]: time="2024-09-16 10:45:01.928973301Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=27f65c18-2203-4c62-8b3f-5e233000aae3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:45:01 ha-244475 crio[667]: time="2024-09-16 10:45:01.929236927Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5c701fcd74abafe9b3ac301067c34d7098459eff6a0dfbe35f14fb698beab51b,PodSandboxId:ed1838f7506b4591d4ce8db6e7b3e3f56562a08f00651212eea0bb2549e4392c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726483289055277109,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-d4m5s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c479ead-4e77-41ca-9e2e-5cd7dc781761,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:034030626ec0248410f35cc75f1a050ff672de87669017788a692eecfb974eb3,PodSandboxId:159730a21bea66d0374bcb65726757074bf5cb44a554b65743cbe055ce68f57d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726483151504105266,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m8fd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc549709-ddc0-4684-b377-46d33ef8f03d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f78c5e4a3a25c53eb3051665aed0aed847ccf5bc052d51eb080f44bbb956465,PodSandboxId:4d8c4f0a29bb7d9b84f11c3329413ddb9100b0afcf2bfdafe7d492249e8d29aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726483151498442305,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lzrg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
51962d07-f38a-4db3-86ee-af3d954dbec6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b16f64da09faea0d2ff3154541845e96ee1e6da2b018e77e618dcf0a3d246a99,PodSandboxId:66086953ec65ff443b277a25da98697cdab5664f13ce0f035b2961dd540a8f99,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726483149914383595,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e1264f7-2197-4821-8238-82fac849b145,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac63170bf5bb3e138ee2902ee9d3245bf86aad0f7cd971de1989c1c8d4b08913,PodSandboxId:9c8ab7a98f7497eec011967ba3c30c63e7f332a19b817633e32195e3f1c7958b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17264831
38080656744,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7v2cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 764ade4d-cbcd-42b8-9d68-b4ed502de9eb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e6d69b26d5c1088c047248a212bd53f5e13828d4e0a7c2156e0ddc88ac76ccf,PodSandboxId:3fbb7c8e9af7172c733fa210b0085f7297ed8632dadffa7ad36fcf99012b9a72,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726483137842379282,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-crttt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c8cad04-2c64-42f9-85e2-5e4fbfe7961d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62c031e0ed0a9dd545dae65ca505f6ee4aa741ac7ab305ffd396ddb0a1faa045,PodSandboxId:f76913fe7302a4fa8d7619af601b5246c7ab7fd3482731bf5f2128c885274602,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726483128784978351,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dcb42d1621bd2afde7f39a79dd541d5,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0223669288e248ff9be2473e59f09af3156f136f2627463151e0bc52191ddfb,PodSandboxId:42a76bc40dc3e2b105fd3eb70534ffb1703abed4e811e57f667c791cb8af94ce,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726483126505887348,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caad457f3675fcf5fa9c2e121ebd3a2a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13162d4bf94f734f5e68305ac96200c2f81bebc9b5ffbbfb3b0862980bc16fa1,PodSandboxId:ec0d4cf0dd9b785181c7ac24b3174a788202f97398df008bd80c06f6e612c16f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726483126417390372,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubern
etes.pod.name: kube-apiserver-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcc439ebdfb1c8eb0ac4d211479d24ca,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:308650af833f609d658eacf1b2b1c8baa29f17b904e878b9f27e62fe4fb823d3,PodSandboxId:693cfec22177da2ee98748363dd5cc765290555de3c0bcc51d452517bd92abaf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726483126350971239,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-244475,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 520edd0e46592c17928a302783a221a2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f16e87fb57b2b25fed520499a45d5fd8f1e01c96a747662b637d0b1cc2e56113,PodSandboxId:fad8ac85cdf54bd87da40cadbda9fd41ab84e1550361b91b5242a7ba9f4ba28b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726483126307755222,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-244475,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0485b752bb66b84c639fb8d5b648be4a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=27f65c18-2203-4c62-8b3f-5e233000aae3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:45:01 ha-244475 crio[667]: time="2024-09-16 10:45:01.968201055Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7b1ec52d-92d5-4af8-a17a-7c9837dfebf9 name=/runtime.v1.RuntimeService/Version
	Sep 16 10:45:01 ha-244475 crio[667]: time="2024-09-16 10:45:01.968291327Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7b1ec52d-92d5-4af8-a17a-7c9837dfebf9 name=/runtime.v1.RuntimeService/Version
	Sep 16 10:45:01 ha-244475 crio[667]: time="2024-09-16 10:45:01.969192891Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2be70eb7-9904-4d93-ab39-966fdf606c6f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:45:01 ha-244475 crio[667]: time="2024-09-16 10:45:01.969695600Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483501969672734,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2be70eb7-9904-4d93-ab39-966fdf606c6f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:45:01 ha-244475 crio[667]: time="2024-09-16 10:45:01.970262951Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=adbb8473-f239-4430-80ff-4a191e3de881 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:45:01 ha-244475 crio[667]: time="2024-09-16 10:45:01.970333400Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=adbb8473-f239-4430-80ff-4a191e3de881 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:45:01 ha-244475 crio[667]: time="2024-09-16 10:45:01.970650618Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5c701fcd74abafe9b3ac301067c34d7098459eff6a0dfbe35f14fb698beab51b,PodSandboxId:ed1838f7506b4591d4ce8db6e7b3e3f56562a08f00651212eea0bb2549e4392c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726483289055277109,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-d4m5s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c479ead-4e77-41ca-9e2e-5cd7dc781761,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:034030626ec0248410f35cc75f1a050ff672de87669017788a692eecfb974eb3,PodSandboxId:159730a21bea66d0374bcb65726757074bf5cb44a554b65743cbe055ce68f57d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726483151504105266,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m8fd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc549709-ddc0-4684-b377-46d33ef8f03d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f78c5e4a3a25c53eb3051665aed0aed847ccf5bc052d51eb080f44bbb956465,PodSandboxId:4d8c4f0a29bb7d9b84f11c3329413ddb9100b0afcf2bfdafe7d492249e8d29aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726483151498442305,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lzrg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
51962d07-f38a-4db3-86ee-af3d954dbec6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b16f64da09faea0d2ff3154541845e96ee1e6da2b018e77e618dcf0a3d246a99,PodSandboxId:66086953ec65ff443b277a25da98697cdab5664f13ce0f035b2961dd540a8f99,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726483149914383595,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e1264f7-2197-4821-8238-82fac849b145,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac63170bf5bb3e138ee2902ee9d3245bf86aad0f7cd971de1989c1c8d4b08913,PodSandboxId:9c8ab7a98f7497eec011967ba3c30c63e7f332a19b817633e32195e3f1c7958b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17264831
38080656744,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7v2cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 764ade4d-cbcd-42b8-9d68-b4ed502de9eb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e6d69b26d5c1088c047248a212bd53f5e13828d4e0a7c2156e0ddc88ac76ccf,PodSandboxId:3fbb7c8e9af7172c733fa210b0085f7297ed8632dadffa7ad36fcf99012b9a72,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726483137842379282,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-crttt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c8cad04-2c64-42f9-85e2-5e4fbfe7961d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62c031e0ed0a9dd545dae65ca505f6ee4aa741ac7ab305ffd396ddb0a1faa045,PodSandboxId:f76913fe7302a4fa8d7619af601b5246c7ab7fd3482731bf5f2128c885274602,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726483128784978351,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dcb42d1621bd2afde7f39a79dd541d5,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0223669288e248ff9be2473e59f09af3156f136f2627463151e0bc52191ddfb,PodSandboxId:42a76bc40dc3e2b105fd3eb70534ffb1703abed4e811e57f667c791cb8af94ce,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726483126505887348,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caad457f3675fcf5fa9c2e121ebd3a2a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13162d4bf94f734f5e68305ac96200c2f81bebc9b5ffbbfb3b0862980bc16fa1,PodSandboxId:ec0d4cf0dd9b785181c7ac24b3174a788202f97398df008bd80c06f6e612c16f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726483126417390372,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubern
etes.pod.name: kube-apiserver-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcc439ebdfb1c8eb0ac4d211479d24ca,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:308650af833f609d658eacf1b2b1c8baa29f17b904e878b9f27e62fe4fb823d3,PodSandboxId:693cfec22177da2ee98748363dd5cc765290555de3c0bcc51d452517bd92abaf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726483126350971239,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-244475,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 520edd0e46592c17928a302783a221a2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f16e87fb57b2b25fed520499a45d5fd8f1e01c96a747662b637d0b1cc2e56113,PodSandboxId:fad8ac85cdf54bd87da40cadbda9fd41ab84e1550361b91b5242a7ba9f4ba28b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726483126307755222,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-244475,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0485b752bb66b84c639fb8d5b648be4a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=adbb8473-f239-4430-80ff-4a191e3de881 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5c701fcd74aba       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   ed1838f7506b4       busybox-7dff88458-d4m5s
	034030626ec02       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   159730a21bea6       coredns-7c65d6cfc9-m8fd7
	7f78c5e4a3a25       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   4d8c4f0a29bb7       coredns-7c65d6cfc9-lzrg2
	b16f64da09fae       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   66086953ec65f       storage-provisioner
	ac63170bf5bb3       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   9c8ab7a98f749       kindnet-7v2cl
	6e6d69b26d5c1       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   3fbb7c8e9af71       kube-proxy-crttt
	62c031e0ed0a9       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   f76913fe7302a       kube-vip-ha-244475
	a0223669288e2       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   42a76bc40dc3e       kube-scheduler-ha-244475
	13162d4bf94f7       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   ec0d4cf0dd9b7       kube-apiserver-ha-244475
	308650af833f6       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   693cfec22177d       etcd-ha-244475
	f16e87fb57b2b       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   fad8ac85cdf54       kube-controller-manager-ha-244475
	
	
	==> coredns [034030626ec0248410f35cc75f1a050ff672de87669017788a692eecfb974eb3] <==
	[INFO] 10.244.2.2:43047 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.055244509s
	[INFO] 10.244.2.2:43779 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000285925s
	[INFO] 10.244.2.2:49571 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000283044s
	[INFO] 10.244.2.2:57761 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004222785s
	[INFO] 10.244.2.2:42931 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000200783s
	[INFO] 10.244.0.4:33694 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014309s
	[INFO] 10.244.0.4:35532 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000107639s
	[INFO] 10.244.0.4:53168 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00009525s
	[INFO] 10.244.0.4:50253 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001250965s
	[INFO] 10.244.0.4:40357 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000089492s
	[INFO] 10.244.1.2:49152 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001985919s
	[INFO] 10.244.1.2:50396 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000132748s
	[INFO] 10.244.2.2:38313 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000951s
	[INFO] 10.244.0.4:43336 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000168268s
	[INFO] 10.244.0.4:44949 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000123895s
	[INFO] 10.244.0.4:52348 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000107748s
	[INFO] 10.244.1.2:36649 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000286063s
	[INFO] 10.244.1.2:42747 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000082265s
	[INFO] 10.244.2.2:45891 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00018425s
	[INFO] 10.244.2.2:53625 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000126302s
	[INFO] 10.244.2.2:44397 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000109098s
	[INFO] 10.244.0.4:39956 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013935s
	[INFO] 10.244.0.4:39139 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00008694s
	[INFO] 10.244.0.4:38933 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000060589s
	[INFO] 10.244.1.2:36849 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000146451s
	
	
	==> coredns [7f78c5e4a3a25c53eb3051665aed0aed847ccf5bc052d51eb080f44bbb956465] <==
	[INFO] 10.244.0.4:51676 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000096142s
	[INFO] 10.244.1.2:33245 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001877876s
	[INFO] 10.244.2.2:52615 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000191836s
	[INFO] 10.244.2.2:49834 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000166519s
	[INFO] 10.244.2.2:39495 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000127494s
	[INFO] 10.244.0.4:37394 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001694487s
	[INFO] 10.244.0.4:36178 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000091958s
	[INFO] 10.244.0.4:33247 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000160731s
	[INFO] 10.244.1.2:52512 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150889s
	[INFO] 10.244.1.2:43450 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000182534s
	[INFO] 10.244.1.2:56403 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000150359s
	[INFO] 10.244.1.2:51246 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001230547s
	[INFO] 10.244.1.2:39220 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090721s
	[INFO] 10.244.1.2:41766 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000155057s
	[INFO] 10.244.2.2:38017 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000153103s
	[INFO] 10.244.2.2:44469 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000099361s
	[INFO] 10.244.2.2:52465 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000086382s
	[INFO] 10.244.0.4:36474 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000117775s
	[INFO] 10.244.1.2:32790 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142151s
	[INFO] 10.244.1.2:39272 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000113629s
	[INFO] 10.244.2.2:43223 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000141566s
	[INFO] 10.244.0.4:36502 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000282073s
	[INFO] 10.244.1.2:60302 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000207499s
	[INFO] 10.244.1.2:49950 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000184993s
	[INFO] 10.244.1.2:54052 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000094371s
	
	
	==> describe nodes <==
	Name:               ha-244475
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-244475
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=ha-244475
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_38_53_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:38:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-244475
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:45:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:41:56 +0000   Mon, 16 Sep 2024 10:38:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:41:56 +0000   Mon, 16 Sep 2024 10:38:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:41:56 +0000   Mon, 16 Sep 2024 10:38:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:41:56 +0000   Mon, 16 Sep 2024 10:39:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.19
	  Hostname:    ha-244475
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8707c2bcd2ba47818dfac2382d400cf1
	  System UUID:                8707c2bc-d2ba-4781-8dfa-c2382d400cf1
	  Boot ID:                    174ade31-14cd-4b32-9050-92f81ba6b3e1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-d4m5s              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                 coredns-7c65d6cfc9-lzrg2             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m5s
	  kube-system                 coredns-7c65d6cfc9-m8fd7             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m5s
	  kube-system                 etcd-ha-244475                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m10s
	  kube-system                 kindnet-7v2cl                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m5s
	  kube-system                 kube-apiserver-ha-244475             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m12s
	  kube-system                 kube-controller-manager-ha-244475    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m10s
	  kube-system                 kube-proxy-crttt                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m5s
	  kube-system                 kube-scheduler-ha-244475             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m10s
	  kube-system                 kube-vip-ha-244475                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m12s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m3s                   kube-proxy       
	  Normal  NodeHasSufficientPID     6m17s (x7 over 6m17s)  kubelet          Node ha-244475 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m17s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m17s (x8 over 6m17s)  kubelet          Node ha-244475 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m17s (x8 over 6m17s)  kubelet          Node ha-244475 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 6m10s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m10s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m10s                  kubelet          Node ha-244475 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m10s                  kubelet          Node ha-244475 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m10s                  kubelet          Node ha-244475 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m6s                   node-controller  Node ha-244475 event: Registered Node ha-244475 in Controller
	  Normal  NodeReady                5m53s                  kubelet          Node ha-244475 status is now: NodeReady
	  Normal  RegisteredNode           5m9s                   node-controller  Node ha-244475 event: Registered Node ha-244475 in Controller
	  Normal  RegisteredNode           3m55s                  node-controller  Node ha-244475 event: Registered Node ha-244475 in Controller
	
	
	Name:               ha-244475-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-244475-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=ha-244475
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T10_39_47_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:39:44 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-244475-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:42:39 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 16 Sep 2024 10:41:47 +0000   Mon, 16 Sep 2024 10:43:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 16 Sep 2024 10:41:47 +0000   Mon, 16 Sep 2024 10:43:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 16 Sep 2024 10:41:47 +0000   Mon, 16 Sep 2024 10:43:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 16 Sep 2024 10:41:47 +0000   Mon, 16 Sep 2024 10:43:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.222
	  Hostname:    ha-244475-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 bfb45c96351d4aafade2443c380b5343
	  System UUID:                bfb45c96-351d-4aaf-ade2-443c380b5343
	  Boot ID:                    d827e65a-7fd8-4399-b348-231b704c25ea
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-t6fmb                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                 etcd-ha-244475-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m16s
	  kube-system                 kindnet-xvp82                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m18s
	  kube-system                 kube-apiserver-ha-244475-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m16s
	  kube-system                 kube-controller-manager-ha-244475-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m16s
	  kube-system                 kube-proxy-t454b                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m18s
	  kube-system                 kube-scheduler-ha-244475-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m10s
	  kube-system                 kube-vip-ha-244475-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m13s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m18s (x8 over 5m18s)  kubelet          Node ha-244475-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m18s (x8 over 5m18s)  kubelet          Node ha-244475-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m18s (x7 over 5m18s)  kubelet          Node ha-244475-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m16s                  node-controller  Node ha-244475-m02 event: Registered Node ha-244475-m02 in Controller
	  Normal  RegisteredNode           5m9s                   node-controller  Node ha-244475-m02 event: Registered Node ha-244475-m02 in Controller
	  Normal  RegisteredNode           3m55s                  node-controller  Node ha-244475-m02 event: Registered Node ha-244475-m02 in Controller
	  Normal  NodeNotReady             101s                   node-controller  Node ha-244475-m02 status is now: NodeNotReady
	
	
	Name:               ha-244475-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-244475-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=ha-244475
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T10_41_02_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:40:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-244475-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:44:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:41:29 +0000   Mon, 16 Sep 2024 10:40:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:41:29 +0000   Mon, 16 Sep 2024 10:40:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:41:29 +0000   Mon, 16 Sep 2024 10:40:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:41:29 +0000   Mon, 16 Sep 2024 10:41:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.127
	  Hostname:    ha-244475-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d01912e060494092a8b6a2df64a0a30c
	  System UUID:                d01912e0-6049-4092-a8b6-a2df64a0a30c
	  Boot ID:                    1fb9da41-3fb9-4db3-bca0-b0c15d7a9875
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-7bhqg                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                 etcd-ha-244475-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m1s
	  kube-system                 kindnet-rzwwj                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m3s
	  kube-system                 kube-apiserver-ha-244475-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 kube-controller-manager-ha-244475-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 kube-proxy-g5v5l                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 kube-scheduler-ha-244475-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 kube-vip-ha-244475-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m59s                kube-proxy       
	  Normal  NodeHasSufficientMemory  4m3s (x8 over 4m3s)  kubelet          Node ha-244475-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m3s (x8 over 4m3s)  kubelet          Node ha-244475-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m3s (x7 over 4m3s)  kubelet          Node ha-244475-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m3s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m1s                 node-controller  Node ha-244475-m03 event: Registered Node ha-244475-m03 in Controller
	  Normal  RegisteredNode           3m59s                node-controller  Node ha-244475-m03 event: Registered Node ha-244475-m03 in Controller
	  Normal  RegisteredNode           3m55s                node-controller  Node ha-244475-m03 event: Registered Node ha-244475-m03 in Controller
	
	
	Name:               ha-244475-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-244475-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=ha-244475
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T10_42_00_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:41:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-244475-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:44:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:42:30 +0000   Mon, 16 Sep 2024 10:41:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:42:30 +0000   Mon, 16 Sep 2024 10:41:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:42:30 +0000   Mon, 16 Sep 2024 10:41:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:42:30 +0000   Mon, 16 Sep 2024 10:42:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.110
	  Hostname:    ha-244475-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 42083a2d4bb24e16b292c8834cbe5824
	  System UUID:                42083a2d-4bb2-4e16-b292-c8834cbe5824
	  Boot ID:                    4513a05d-6164-4c3b-91e3-07f7c103c2f9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-dflt4       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m3s
	  kube-system                 kube-proxy-kp7hv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m57s                kube-proxy       
	  Normal  NodeHasSufficientMemory  3m3s (x2 over 3m3s)  kubelet          Node ha-244475-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m3s (x2 over 3m3s)  kubelet          Node ha-244475-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m3s (x2 over 3m3s)  kubelet          Node ha-244475-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m3s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m1s                 node-controller  Node ha-244475-m04 event: Registered Node ha-244475-m04 in Controller
	  Normal  RegisteredNode           2m59s                node-controller  Node ha-244475-m04 event: Registered Node ha-244475-m04 in Controller
	  Normal  RegisteredNode           2m59s                node-controller  Node ha-244475-m04 event: Registered Node ha-244475-m04 in Controller
	  Normal  NodeReady                2m43s                kubelet          Node ha-244475-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep16 10:38] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050568] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040051] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.803306] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.430603] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.601752] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.139824] systemd-fstab-generator[589]: Ignoring "noauto" option for root device
	[  +0.054792] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058211] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.173707] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +0.144769] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +0.277555] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +3.915448] systemd-fstab-generator[751]: Ignoring "noauto" option for root device
	[  +3.568561] systemd-fstab-generator[882]: Ignoring "noauto" option for root device
	[  +0.067639] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.970048] systemd-fstab-generator[1302]: Ignoring "noauto" option for root device
	[  +0.087420] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.371465] kauditd_printk_skb: 21 callbacks suppressed
	[Sep16 10:39] kauditd_printk_skb: 38 callbacks suppressed
	[ +41.620280] kauditd_printk_skb: 28 callbacks suppressed
	
	
	==> etcd [308650af833f609d658eacf1b2b1c8baa29f17b904e878b9f27e62fe4fb823d3] <==
	{"level":"warn","ts":"2024-09-16T10:45:02.275131Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"f0e3021c7d1d789a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T10:45:02.279552Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"f0e3021c7d1d789a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T10:45:02.283046Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"f0e3021c7d1d789a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T10:45:02.290921Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"f0e3021c7d1d789a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T10:45:02.297364Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"f0e3021c7d1d789a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T10:45:02.306092Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"f0e3021c7d1d789a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T10:45:02.313997Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"f0e3021c7d1d789a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T10:45:02.318096Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"f0e3021c7d1d789a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T10:45:02.321959Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"f0e3021c7d1d789a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T10:45:02.330409Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"f0e3021c7d1d789a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T10:45:02.336241Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"f0e3021c7d1d789a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T10:45:02.343015Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"f0e3021c7d1d789a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T10:45:02.347208Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"f0e3021c7d1d789a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T10:45:02.350862Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"f0e3021c7d1d789a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T10:45:02.356681Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"f0e3021c7d1d789a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T10:45:02.363083Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"f0e3021c7d1d789a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T10:45:02.370356Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"f0e3021c7d1d789a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T10:45:02.374830Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"f0e3021c7d1d789a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T10:45:02.378348Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"f0e3021c7d1d789a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T10:45:02.383235Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"f0e3021c7d1d789a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T10:45:02.387174Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"f0e3021c7d1d789a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T10:45:02.389972Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"f0e3021c7d1d789a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T10:45:02.390050Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"f0e3021c7d1d789a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T10:45:02.390660Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"f0e3021c7d1d789a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T10:45:02.397158Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"f0e3021c7d1d789a","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 10:45:02 up 6 min,  0 users,  load average: 0.15, 0.20, 0.10
	Linux ha-244475 5.10.207 #1 SMP Sun Sep 15 20:39:46 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [ac63170bf5bb3e138ee2902ee9d3245bf86aad0f7cd971de1989c1c8d4b08913] <==
	I0916 10:44:29.310141       1 main.go:322] Node ha-244475-m03 has CIDR [10.244.2.0/24] 
	I0916 10:44:39.308897       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0916 10:44:39.308947       1 main.go:322] Node ha-244475-m02 has CIDR [10.244.1.0/24] 
	I0916 10:44:39.309120       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0916 10:44:39.309146       1 main.go:322] Node ha-244475-m03 has CIDR [10.244.2.0/24] 
	I0916 10:44:39.309208       1 main.go:295] Handling node with IPs: map[192.168.39.110:{}]
	I0916 10:44:39.309229       1 main.go:322] Node ha-244475-m04 has CIDR [10.244.3.0/24] 
	I0916 10:44:39.309302       1 main.go:295] Handling node with IPs: map[192.168.39.19:{}]
	I0916 10:44:39.309323       1 main.go:299] handling current node
	I0916 10:44:49.306289       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0916 10:44:49.306429       1 main.go:322] Node ha-244475-m03 has CIDR [10.244.2.0/24] 
	I0916 10:44:49.306700       1 main.go:295] Handling node with IPs: map[192.168.39.110:{}]
	I0916 10:44:49.306759       1 main.go:322] Node ha-244475-m04 has CIDR [10.244.3.0/24] 
	I0916 10:44:49.306854       1 main.go:295] Handling node with IPs: map[192.168.39.19:{}]
	I0916 10:44:49.306887       1 main.go:299] handling current node
	I0916 10:44:49.306920       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0916 10:44:49.306930       1 main.go:322] Node ha-244475-m02 has CIDR [10.244.1.0/24] 
	I0916 10:44:59.300680       1 main.go:295] Handling node with IPs: map[192.168.39.19:{}]
	I0916 10:44:59.300795       1 main.go:299] handling current node
	I0916 10:44:59.300827       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0916 10:44:59.300846       1 main.go:322] Node ha-244475-m02 has CIDR [10.244.1.0/24] 
	I0916 10:44:59.301014       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0916 10:44:59.301045       1 main.go:322] Node ha-244475-m03 has CIDR [10.244.2.0/24] 
	I0916 10:44:59.301108       1 main.go:295] Handling node with IPs: map[192.168.39.110:{}]
	I0916 10:44:59.301127       1 main.go:322] Node ha-244475-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [13162d4bf94f734f5e68305ac96200c2f81bebc9b5ffbbfb3b0862980bc16fa1] <==
	W0916 10:38:51.442192       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.19]
	I0916 10:38:51.443345       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 10:38:51.448673       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 10:38:51.657156       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 10:38:52.610073       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 10:38:52.629898       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0916 10:38:52.640941       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 10:38:57.207096       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0916 10:38:57.359795       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	W0916 10:39:51.439268       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.19 192.168.39.222]
	E0916 10:41:30.050430       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60486: use of closed network connection
	E0916 10:41:30.242968       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60496: use of closed network connection
	E0916 10:41:30.422776       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60516: use of closed network connection
	E0916 10:41:30.667331       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60540: use of closed network connection
	E0916 10:41:30.849977       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60570: use of closed network connection
	E0916 10:41:31.026403       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60598: use of closed network connection
	E0916 10:41:31.216159       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60626: use of closed network connection
	E0916 10:41:31.408973       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60648: use of closed network connection
	E0916 10:41:31.595323       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60664: use of closed network connection
	E0916 10:41:31.892210       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33810: use of closed network connection
	E0916 10:41:32.120845       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33824: use of closed network connection
	E0916 10:41:32.318310       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33836: use of closed network connection
	E0916 10:41:32.517544       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33856: use of closed network connection
	E0916 10:41:32.715949       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33878: use of closed network connection
	E0916 10:41:32.888744       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33890: use of closed network connection
	
	
	==> kube-controller-manager [f16e87fb57b2b25fed520499a45d5fd8f1e01c96a747662b637d0b1cc2e56113] <==
	I0916 10:41:59.913033       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-244475-m04" podCIDRs=["10.244.3.0/24"]
	I0916 10:41:59.913138       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m04"
	I0916 10:41:59.913216       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m04"
	I0916 10:41:59.930942       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m04"
	I0916 10:42:00.175642       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m04"
	I0916 10:42:00.590484       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m04"
	I0916 10:42:01.490254       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-244475-m04"
	I0916 10:42:01.528827       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m04"
	I0916 10:42:03.011238       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m04"
	I0916 10:42:03.079872       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m04"
	I0916 10:42:03.261410       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m04"
	I0916 10:42:03.376315       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m04"
	I0916 10:42:10.010776       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m04"
	I0916 10:42:19.018320       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m04"
	I0916 10:42:19.018457       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-244475-m04"
	I0916 10:42:19.032789       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m04"
	I0916 10:42:21.506056       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m04"
	I0916 10:42:30.158122       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m04"
	I0916 10:43:21.535925       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m02"
	I0916 10:43:21.536431       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-244475-m04"
	I0916 10:43:21.581714       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m02"
	I0916 10:43:21.707782       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="75.322573ms"
	I0916 10:43:21.708000       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="116.003µs"
	I0916 10:43:23.093063       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m02"
	I0916 10:43:26.726408       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m02"
	
	
	==> kube-proxy [6e6d69b26d5c1088c047248a212bd53f5e13828d4e0a7c2156e0ddc88ac76ccf] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0916 10:38:58.381104       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0916 10:38:58.405774       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.19"]
	E0916 10:38:58.405958       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:38:58.486128       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0916 10:38:58.486191       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0916 10:38:58.486214       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:38:58.488718       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:38:58.489862       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:38:58.489894       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:38:58.500489       1 config.go:199] "Starting service config controller"
	I0916 10:38:58.500804       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:38:58.501030       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:38:58.501051       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:38:58.502033       1 config.go:328] "Starting node config controller"
	I0916 10:38:58.502063       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:38:58.601173       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:38:58.601274       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:38:58.602581       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [a0223669288e248ff9be2473e59f09af3156f136f2627463151e0bc52191ddfb] <==
	E0916 10:38:50.527717       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:38:50.585028       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 10:38:50.585078       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:38:50.611653       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 10:38:50.611726       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0916 10:38:50.650971       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 10:38:50.651023       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:38:50.696031       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 10:38:50.696092       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:38:50.761221       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 10:38:50.761274       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:38:50.985092       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 10:38:50.985144       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:38:50.991955       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 10:38:50.992011       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:38:51.039856       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 10:38:51.039907       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:38:51.293677       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 10:38:51.293783       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0916 10:38:53.269920       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 10:41:27.446213       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="8e6b78c3-ae2c-4cff-b2cf-fd0f08d53fa5" pod="default/busybox-7dff88458-7bhqg" assumedNode="ha-244475-m03" currentNode="ha-244475-m02"
	E0916 10:41:27.456948       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-7bhqg\": pod busybox-7dff88458-7bhqg is already assigned to node \"ha-244475-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-7bhqg" node="ha-244475-m02"
	E0916 10:41:27.457071       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 8e6b78c3-ae2c-4cff-b2cf-fd0f08d53fa5(default/busybox-7dff88458-7bhqg) was assumed on ha-244475-m02 but assigned to ha-244475-m03" pod="default/busybox-7dff88458-7bhqg"
	E0916 10:41:27.457108       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-7bhqg\": pod busybox-7dff88458-7bhqg is already assigned to node \"ha-244475-m03\"" pod="default/busybox-7dff88458-7bhqg"
	I0916 10:41:27.457173       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-7bhqg" node="ha-244475-m03"
	
	
	==> kubelet <==
	Sep 16 10:43:52 ha-244475 kubelet[1309]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 16 10:43:52 ha-244475 kubelet[1309]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 16 10:43:52 ha-244475 kubelet[1309]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 16 10:43:52 ha-244475 kubelet[1309]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 16 10:43:52 ha-244475 kubelet[1309]: E0916 10:43:52.678246    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483432677899949,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:43:52 ha-244475 kubelet[1309]: E0916 10:43:52.678272    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483432677899949,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:44:02 ha-244475 kubelet[1309]: E0916 10:44:02.679461    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483442679122721,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:44:02 ha-244475 kubelet[1309]: E0916 10:44:02.680062    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483442679122721,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:44:12 ha-244475 kubelet[1309]: E0916 10:44:12.682999    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483452681690473,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:44:12 ha-244475 kubelet[1309]: E0916 10:44:12.683057    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483452681690473,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:44:22 ha-244475 kubelet[1309]: E0916 10:44:22.684640    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483462684321757,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:44:22 ha-244475 kubelet[1309]: E0916 10:44:22.684691    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483462684321757,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:44:32 ha-244475 kubelet[1309]: E0916 10:44:32.687623    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483472687126995,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:44:32 ha-244475 kubelet[1309]: E0916 10:44:32.687693    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483472687126995,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:44:42 ha-244475 kubelet[1309]: E0916 10:44:42.689226    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483482688660947,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:44:42 ha-244475 kubelet[1309]: E0916 10:44:42.689565    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483482688660947,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:44:52 ha-244475 kubelet[1309]: E0916 10:44:52.621462    1309 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 16 10:44:52 ha-244475 kubelet[1309]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 16 10:44:52 ha-244475 kubelet[1309]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 16 10:44:52 ha-244475 kubelet[1309]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 16 10:44:52 ha-244475 kubelet[1309]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 16 10:44:52 ha-244475 kubelet[1309]: E0916 10:44:52.692628    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483492691856334,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:44:52 ha-244475 kubelet[1309]: E0916 10:44:52.692654    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483492691856334,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:45:02 ha-244475 kubelet[1309]: E0916 10:45:02.694249    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483502693420181,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:45:02 ha-244475 kubelet[1309]: E0916 10:45:02.694279    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483502693420181,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
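Note: the recurring eviction-manager errors in the kubelet log above come from an ImageFsInfoResponse whose ContainerFilesystems list is empty; without container filesystem stats the kubelet cannot decide whether the node uses a dedicated image filesystem and logs "missing image stats" every sync. The snippet below is a rough, hypothetical illustration of that decision only (not kubelet source); the type and field names are stand-ins.

// Hypothetical illustration of why an empty ContainerFilesystems list in the
// CRI ImageFsInfo response leads to the "missing image stats" errors above.
package main

import "fmt"

// imageFsInfo is a trimmed-down stand-in for the response quoted in the log:
// image filesystems are reported, container filesystems are not.
type imageFsInfo struct {
	ImageFilesystems     []string
	ContainerFilesystems []string
}

func hasDedicatedImageFs(resp imageFsInfo) (bool, error) {
	if len(resp.ImageFilesystems) == 0 || len(resp.ContainerFilesystems) == 0 {
		return false, fmt.Errorf("missing image stats: %+v", resp)
	}
	// A dedicated image filesystem would mean images and containers live on
	// different mountpoints; with no container stats the question is unanswerable.
	return resp.ImageFilesystems[0] != resp.ContainerFilesystems[0], nil
}

func main() {
	resp := imageFsInfo{ImageFilesystems: []string{"/var/lib/containers/storage/overlay-images"}}
	if _, err := hasDedicatedImageFs(resp); err != nil {
		fmt.Println("eviction manager would report:", err)
	}
}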
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-244475 -n ha-244475
helpers_test.go:261: (dbg) Run:  kubectl --context ha-244475 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context ha-244475 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (469.935µs)
helpers_test.go:263: kubectl --context ha-244475 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.78s)
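Note: the kubectl invocations above fail with "fork/exec /usr/local/bin/kubectl: exec format error", which indicates the binary at that path was built for a different CPU architecture than the host running the tests. A minimal diagnostic sketch (not part of the test suite) that compares the binary's ELF architecture with the host's, using the path from the log:

// Minimal sketch: confirm whether the kubectl binary's ELF architecture
// matches the host, the usual cause of "exec format error".
package main

import (
	"debug/elf"
	"fmt"
	"runtime"
)

func main() {
	f, err := elf.Open("/usr/local/bin/kubectl") // path taken from the failure message above
	if err != nil {
		fmt.Println("open:", err)
		return
	}
	defer f.Close()
	// f.Machine reports the architecture the binary targets (e.g. EM_X86_64 or EM_AARCH64);
	// runtime.GOARCH reports the architecture this program runs on.
	fmt.Printf("binary machine: %v, host GOARCH: %s\n", f.Machine, runtime.GOARCH)
}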

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (55.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 status -v=7 --alsologtostderr
E0916 10:45:08.821153   11203 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-244475 status -v=7 --alsologtostderr: exit status 3 (3.21371742s)

                                                
                                                
-- stdout --
	ha-244475
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-244475-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-244475-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-244475-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 10:45:06.904684   26936 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:45:06.904785   26936 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:45:06.904795   26936 out.go:358] Setting ErrFile to fd 2...
	I0916 10:45:06.904800   26936 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:45:06.904990   26936 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3851/.minikube/bin
	I0916 10:45:06.905204   26936 out.go:352] Setting JSON to false
	I0916 10:45:06.905233   26936 mustload.go:65] Loading cluster: ha-244475
	I0916 10:45:06.905335   26936 notify.go:220] Checking for updates...
	I0916 10:45:06.905755   26936 config.go:182] Loaded profile config "ha-244475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:45:06.905771   26936 status.go:255] checking status of ha-244475 ...
	I0916 10:45:06.906234   26936 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:06.906274   26936 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:06.924096   26936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38679
	I0916 10:45:06.924572   26936 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:06.925159   26936 main.go:141] libmachine: Using API Version  1
	I0916 10:45:06.925201   26936 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:06.925503   26936 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:06.925652   26936 main.go:141] libmachine: (ha-244475) Calling .GetState
	I0916 10:45:06.927352   26936 status.go:330] ha-244475 host status = "Running" (err=<nil>)
	I0916 10:45:06.927371   26936 host.go:66] Checking if "ha-244475" exists ...
	I0916 10:45:06.927690   26936 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:06.927729   26936 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:06.942920   26936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39539
	I0916 10:45:06.943267   26936 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:06.943667   26936 main.go:141] libmachine: Using API Version  1
	I0916 10:45:06.943688   26936 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:06.943996   26936 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:06.944139   26936 main.go:141] libmachine: (ha-244475) Calling .GetIP
	I0916 10:45:06.946863   26936 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:45:06.947350   26936 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:45:06.947376   26936 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:45:06.947495   26936 host.go:66] Checking if "ha-244475" exists ...
	I0916 10:45:06.947792   26936 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:06.947831   26936 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:06.962494   26936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41301
	I0916 10:45:06.962920   26936 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:06.963382   26936 main.go:141] libmachine: Using API Version  1
	I0916 10:45:06.963400   26936 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:06.963708   26936 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:06.963889   26936 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:45:06.964088   26936 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:45:06.964126   26936 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:45:06.966564   26936 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:45:06.967024   26936 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:45:06.967061   26936 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:45:06.967197   26936 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:45:06.967380   26936 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:45:06.967513   26936 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:45:06.967617   26936 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/id_rsa Username:docker}
	I0916 10:45:07.058512   26936 ssh_runner.go:195] Run: systemctl --version
	I0916 10:45:07.067365   26936 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:45:07.082880   26936 kubeconfig.go:125] found "ha-244475" server: "https://192.168.39.254:8443"
	I0916 10:45:07.082916   26936 api_server.go:166] Checking apiserver status ...
	I0916 10:45:07.082957   26936 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:45:07.103177   26936 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1130/cgroup
	W0916 10:45:07.118866   26936 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1130/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0916 10:45:07.118923   26936 ssh_runner.go:195] Run: ls
	I0916 10:45:07.126135   26936 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0916 10:45:07.133528   26936 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0916 10:45:07.133552   26936 status.go:422] ha-244475 apiserver status = Running (err=<nil>)
	I0916 10:45:07.133562   26936 status.go:257] ha-244475 status: &{Name:ha-244475 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 10:45:07.133578   26936 status.go:255] checking status of ha-244475-m02 ...
	I0916 10:45:07.133904   26936 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:07.133939   26936 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:07.149203   26936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36911
	I0916 10:45:07.149617   26936 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:07.150061   26936 main.go:141] libmachine: Using API Version  1
	I0916 10:45:07.150085   26936 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:07.150447   26936 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:07.150630   26936 main.go:141] libmachine: (ha-244475-m02) Calling .GetState
	I0916 10:45:07.152198   26936 status.go:330] ha-244475-m02 host status = "Running" (err=<nil>)
	I0916 10:45:07.152215   26936 host.go:66] Checking if "ha-244475-m02" exists ...
	I0916 10:45:07.152489   26936 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:07.152521   26936 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:07.167050   26936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33559
	I0916 10:45:07.167539   26936 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:07.168012   26936 main.go:141] libmachine: Using API Version  1
	I0916 10:45:07.168032   26936 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:07.168397   26936 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:07.168577   26936 main.go:141] libmachine: (ha-244475-m02) Calling .GetIP
	I0916 10:45:07.171929   26936 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:45:07.172488   26936 main.go:141] libmachine: (ha-244475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:fc:95", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:39:13 +0000 UTC Type:0 Mac:52:54:00:ed:fc:95 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-244475-m02 Clientid:01:52:54:00:ed:fc:95}
	I0916 10:45:07.172526   26936 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:45:07.172665   26936 host.go:66] Checking if "ha-244475-m02" exists ...
	I0916 10:45:07.173068   26936 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:07.173142   26936 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:07.188285   26936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36203
	I0916 10:45:07.188808   26936 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:07.189337   26936 main.go:141] libmachine: Using API Version  1
	I0916 10:45:07.189361   26936 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:07.189697   26936 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:07.189852   26936 main.go:141] libmachine: (ha-244475-m02) Calling .DriverName
	I0916 10:45:07.190031   26936 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:45:07.190052   26936 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHHostname
	I0916 10:45:07.192594   26936 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:45:07.193121   26936 main.go:141] libmachine: (ha-244475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:fc:95", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:39:13 +0000 UTC Type:0 Mac:52:54:00:ed:fc:95 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-244475-m02 Clientid:01:52:54:00:ed:fc:95}
	I0916 10:45:07.193156   26936 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:45:07.193252   26936 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHPort
	I0916 10:45:07.193415   26936 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHKeyPath
	I0916 10:45:07.193545   26936 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHUsername
	I0916 10:45:07.193688   26936 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m02/id_rsa Username:docker}
	W0916 10:45:09.721404   26936 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.222:22: connect: no route to host
	W0916 10:45:09.721508   26936 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.222:22: connect: no route to host
	E0916 10:45:09.721523   26936 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.222:22: connect: no route to host
	I0916 10:45:09.721530   26936 status.go:257] ha-244475-m02 status: &{Name:ha-244475-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0916 10:45:09.721546   26936 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.222:22: connect: no route to host
	I0916 10:45:09.721554   26936 status.go:255] checking status of ha-244475-m03 ...
	I0916 10:45:09.721888   26936 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:09.721936   26936 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:09.737302   26936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37051
	I0916 10:45:09.737781   26936 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:09.738305   26936 main.go:141] libmachine: Using API Version  1
	I0916 10:45:09.738337   26936 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:09.738641   26936 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:09.738803   26936 main.go:141] libmachine: (ha-244475-m03) Calling .GetState
	I0916 10:45:09.740461   26936 status.go:330] ha-244475-m03 host status = "Running" (err=<nil>)
	I0916 10:45:09.740476   26936 host.go:66] Checking if "ha-244475-m03" exists ...
	I0916 10:45:09.740780   26936 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:09.740822   26936 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:09.755259   26936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38105
	I0916 10:45:09.755701   26936 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:09.756194   26936 main.go:141] libmachine: Using API Version  1
	I0916 10:45:09.756224   26936 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:09.756515   26936 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:09.756680   26936 main.go:141] libmachine: (ha-244475-m03) Calling .GetIP
	I0916 10:45:09.759352   26936 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:45:09.759709   26936 main.go:141] libmachine: (ha-244475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:15:60", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:40:24 +0000 UTC Type:0 Mac:52:54:00:e0:15:60 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-244475-m03 Clientid:01:52:54:00:e0:15:60}
	I0916 10:45:09.759741   26936 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:45:09.759881   26936 host.go:66] Checking if "ha-244475-m03" exists ...
	I0916 10:45:09.760235   26936 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:09.760273   26936 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:09.775866   26936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43085
	I0916 10:45:09.776259   26936 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:09.776777   26936 main.go:141] libmachine: Using API Version  1
	I0916 10:45:09.776799   26936 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:09.777118   26936 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:09.777324   26936 main.go:141] libmachine: (ha-244475-m03) Calling .DriverName
	I0916 10:45:09.777503   26936 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:45:09.777527   26936 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHHostname
	I0916 10:45:09.780134   26936 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:45:09.780519   26936 main.go:141] libmachine: (ha-244475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:15:60", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:40:24 +0000 UTC Type:0 Mac:52:54:00:e0:15:60 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-244475-m03 Clientid:01:52:54:00:e0:15:60}
	I0916 10:45:09.780545   26936 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:45:09.780681   26936 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHPort
	I0916 10:45:09.780839   26936 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHKeyPath
	I0916 10:45:09.780976   26936 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHUsername
	I0916 10:45:09.781089   26936 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m03/id_rsa Username:docker}
	I0916 10:45:09.864521   26936 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:45:09.880754   26936 kubeconfig.go:125] found "ha-244475" server: "https://192.168.39.254:8443"
	I0916 10:45:09.880789   26936 api_server.go:166] Checking apiserver status ...
	I0916 10:45:09.880833   26936 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:45:09.895767   26936 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1388/cgroup
	W0916 10:45:09.906654   26936 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1388/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0916 10:45:09.906712   26936 ssh_runner.go:195] Run: ls
	I0916 10:45:09.911976   26936 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0916 10:45:09.916306   26936 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0916 10:45:09.916331   26936 status.go:422] ha-244475-m03 apiserver status = Running (err=<nil>)
	I0916 10:45:09.916341   26936 status.go:257] ha-244475-m03 status: &{Name:ha-244475-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 10:45:09.916361   26936 status.go:255] checking status of ha-244475-m04 ...
	I0916 10:45:09.916648   26936 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:09.916694   26936 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:09.932213   26936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45095
	I0916 10:45:09.932647   26936 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:09.933156   26936 main.go:141] libmachine: Using API Version  1
	I0916 10:45:09.933178   26936 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:09.933458   26936 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:09.933618   26936 main.go:141] libmachine: (ha-244475-m04) Calling .GetState
	I0916 10:45:09.934983   26936 status.go:330] ha-244475-m04 host status = "Running" (err=<nil>)
	I0916 10:45:09.934997   26936 host.go:66] Checking if "ha-244475-m04" exists ...
	I0916 10:45:09.935279   26936 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:09.935315   26936 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:09.950666   26936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36999
	I0916 10:45:09.951060   26936 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:09.951509   26936 main.go:141] libmachine: Using API Version  1
	I0916 10:45:09.951529   26936 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:09.951855   26936 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:09.952043   26936 main.go:141] libmachine: (ha-244475-m04) Calling .GetIP
	I0916 10:45:09.954859   26936 main.go:141] libmachine: (ha-244475-m04) DBG | domain ha-244475-m04 has defined MAC address 52:54:00:d1:b8:75 in network mk-ha-244475
	I0916 10:45:09.955259   26936 main.go:141] libmachine: (ha-244475-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:b8:75", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:41:48 +0000 UTC Type:0 Mac:52:54:00:d1:b8:75 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-244475-m04 Clientid:01:52:54:00:d1:b8:75}
	I0916 10:45:09.955291   26936 main.go:141] libmachine: (ha-244475-m04) DBG | domain ha-244475-m04 has defined IP address 192.168.39.110 and MAC address 52:54:00:d1:b8:75 in network mk-ha-244475
	I0916 10:45:09.955401   26936 host.go:66] Checking if "ha-244475-m04" exists ...
	I0916 10:45:09.955695   26936 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:09.955728   26936 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:09.971743   26936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45591
	I0916 10:45:09.972127   26936 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:09.972505   26936 main.go:141] libmachine: Using API Version  1
	I0916 10:45:09.972531   26936 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:09.972882   26936 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:09.973072   26936 main.go:141] libmachine: (ha-244475-m04) Calling .DriverName
	I0916 10:45:09.973418   26936 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:45:09.973438   26936 main.go:141] libmachine: (ha-244475-m04) Calling .GetSSHHostname
	I0916 10:45:09.975950   26936 main.go:141] libmachine: (ha-244475-m04) DBG | domain ha-244475-m04 has defined MAC address 52:54:00:d1:b8:75 in network mk-ha-244475
	I0916 10:45:09.976454   26936 main.go:141] libmachine: (ha-244475-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:b8:75", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:41:48 +0000 UTC Type:0 Mac:52:54:00:d1:b8:75 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-244475-m04 Clientid:01:52:54:00:d1:b8:75}
	I0916 10:45:09.976483   26936 main.go:141] libmachine: (ha-244475-m04) DBG | domain ha-244475-m04 has defined IP address 192.168.39.110 and MAC address 52:54:00:d1:b8:75 in network mk-ha-244475
	I0916 10:45:09.976579   26936 main.go:141] libmachine: (ha-244475-m04) Calling .GetSSHPort
	I0916 10:45:09.976735   26936 main.go:141] libmachine: (ha-244475-m04) Calling .GetSSHKeyPath
	I0916 10:45:09.976874   26936 main.go:141] libmachine: (ha-244475-m04) Calling .GetSSHUsername
	I0916 10:45:09.977004   26936 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m04/id_rsa Username:docker}
	I0916 10:45:10.061193   26936 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:45:10.075655   26936 status.go:257] ha-244475-m04 status: &{Name:ha-244475-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
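Note: ha-244475-m02 is reported as "host: Error" / "kubelet: Nonexistent" because the SSH dial to 192.168.39.222:22 fails with "connect: no route to host" while the node is still coming back up. A minimal reachability probe under the same assumption (address taken from the log above; this is not the minikube status implementation):

// Minimal sketch: probe TCP reachability of the m02 SSH port the same way
// the failing dial in the log does.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "192.168.39.222:22", 5*time.Second)
	if err != nil {
		// A node that is stopped or still booting typically yields
		// "connect: no route to host" here, matching the stderr above.
		fmt.Println("dial failed:", err)
		return
	}
	conn.Close()
	fmt.Println("port 22 reachable")
}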
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-244475 status -v=7 --alsologtostderr: exit status 3 (5.355912039s)

                                                
                                                
-- stdout --
	ha-244475
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-244475-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-244475-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-244475-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 10:45:10.903585   27036 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:45:10.903720   27036 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:45:10.903730   27036 out.go:358] Setting ErrFile to fd 2...
	I0916 10:45:10.903735   27036 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:45:10.903921   27036 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3851/.minikube/bin
	I0916 10:45:10.904128   27036 out.go:352] Setting JSON to false
	I0916 10:45:10.904164   27036 mustload.go:65] Loading cluster: ha-244475
	I0916 10:45:10.904269   27036 notify.go:220] Checking for updates...
	I0916 10:45:10.904664   27036 config.go:182] Loaded profile config "ha-244475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:45:10.904680   27036 status.go:255] checking status of ha-244475 ...
	I0916 10:45:10.905099   27036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:10.905179   27036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:10.925160   27036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39155
	I0916 10:45:10.925719   27036 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:10.926265   27036 main.go:141] libmachine: Using API Version  1
	I0916 10:45:10.926292   27036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:10.926729   27036 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:10.926925   27036 main.go:141] libmachine: (ha-244475) Calling .GetState
	I0916 10:45:10.928603   27036 status.go:330] ha-244475 host status = "Running" (err=<nil>)
	I0916 10:45:10.928619   27036 host.go:66] Checking if "ha-244475" exists ...
	I0916 10:45:10.928904   27036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:10.928947   27036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:10.943901   27036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33399
	I0916 10:45:10.944355   27036 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:10.944797   27036 main.go:141] libmachine: Using API Version  1
	I0916 10:45:10.944818   27036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:10.945111   27036 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:10.945291   27036 main.go:141] libmachine: (ha-244475) Calling .GetIP
	I0916 10:45:10.948002   27036 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:45:10.948425   27036 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:45:10.948449   27036 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:45:10.948589   27036 host.go:66] Checking if "ha-244475" exists ...
	I0916 10:45:10.948906   27036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:10.948965   27036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:10.964710   27036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43035
	I0916 10:45:10.965114   27036 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:10.965585   27036 main.go:141] libmachine: Using API Version  1
	I0916 10:45:10.965609   27036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:10.965905   27036 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:10.966070   27036 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:45:10.966242   27036 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:45:10.966273   27036 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:45:10.968736   27036 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:45:10.969164   27036 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:45:10.969190   27036 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:45:10.969307   27036 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:45:10.969469   27036 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:45:10.969604   27036 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:45:10.969719   27036 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/id_rsa Username:docker}
	I0916 10:45:11.054745   27036 ssh_runner.go:195] Run: systemctl --version
	I0916 10:45:11.061800   27036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:45:11.076900   27036 kubeconfig.go:125] found "ha-244475" server: "https://192.168.39.254:8443"
	I0916 10:45:11.076933   27036 api_server.go:166] Checking apiserver status ...
	I0916 10:45:11.076971   27036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:45:11.094322   27036 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1130/cgroup
	W0916 10:45:11.104422   27036 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1130/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0916 10:45:11.104477   27036 ssh_runner.go:195] Run: ls
	I0916 10:45:11.109093   27036 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0916 10:45:11.115185   27036 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0916 10:45:11.115207   27036 status.go:422] ha-244475 apiserver status = Running (err=<nil>)
	I0916 10:45:11.115216   27036 status.go:257] ha-244475 status: &{Name:ha-244475 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 10:45:11.115232   27036 status.go:255] checking status of ha-244475-m02 ...
	I0916 10:45:11.115509   27036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:11.115552   27036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:11.132597   27036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44751
	I0916 10:45:11.132990   27036 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:11.133524   27036 main.go:141] libmachine: Using API Version  1
	I0916 10:45:11.133550   27036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:11.133867   27036 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:11.134059   27036 main.go:141] libmachine: (ha-244475-m02) Calling .GetState
	I0916 10:45:11.135659   27036 status.go:330] ha-244475-m02 host status = "Running" (err=<nil>)
	I0916 10:45:11.135674   27036 host.go:66] Checking if "ha-244475-m02" exists ...
	I0916 10:45:11.135989   27036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:11.136035   27036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:11.150820   27036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36073
	I0916 10:45:11.151254   27036 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:11.151801   27036 main.go:141] libmachine: Using API Version  1
	I0916 10:45:11.151823   27036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:11.152113   27036 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:11.152305   27036 main.go:141] libmachine: (ha-244475-m02) Calling .GetIP
	I0916 10:45:11.155038   27036 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:45:11.155488   27036 main.go:141] libmachine: (ha-244475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:fc:95", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:39:13 +0000 UTC Type:0 Mac:52:54:00:ed:fc:95 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-244475-m02 Clientid:01:52:54:00:ed:fc:95}
	I0916 10:45:11.155510   27036 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:45:11.155645   27036 host.go:66] Checking if "ha-244475-m02" exists ...
	I0916 10:45:11.155935   27036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:11.155968   27036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:11.173488   27036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45303
	I0916 10:45:11.173913   27036 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:11.174361   27036 main.go:141] libmachine: Using API Version  1
	I0916 10:45:11.174387   27036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:11.174746   27036 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:11.174941   27036 main.go:141] libmachine: (ha-244475-m02) Calling .DriverName
	I0916 10:45:11.175125   27036 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:45:11.175147   27036 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHHostname
	I0916 10:45:11.177964   27036 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:45:11.178422   27036 main.go:141] libmachine: (ha-244475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:fc:95", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:39:13 +0000 UTC Type:0 Mac:52:54:00:ed:fc:95 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-244475-m02 Clientid:01:52:54:00:ed:fc:95}
	I0916 10:45:11.178446   27036 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:45:11.178566   27036 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHPort
	I0916 10:45:11.178733   27036 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHKeyPath
	I0916 10:45:11.178877   27036 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHUsername
	I0916 10:45:11.178996   27036 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m02/id_rsa Username:docker}
	W0916 10:45:12.793391   27036 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.222:22: connect: no route to host
	I0916 10:45:12.793450   27036 retry.go:31] will retry after 310.64554ms: dial tcp 192.168.39.222:22: connect: no route to host
	W0916 10:45:15.865471   27036 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.222:22: connect: no route to host
	W0916 10:45:15.865581   27036 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.222:22: connect: no route to host
	E0916 10:45:15.865609   27036 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.222:22: connect: no route to host
	I0916 10:45:15.865620   27036 status.go:257] ha-244475-m02 status: &{Name:ha-244475-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0916 10:45:15.865650   27036 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.222:22: connect: no route to host
	I0916 10:45:15.865661   27036 status.go:255] checking status of ha-244475-m03 ...
	I0916 10:45:15.866003   27036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:15.866058   27036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:15.882000   27036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41293
	I0916 10:45:15.882471   27036 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:15.882992   27036 main.go:141] libmachine: Using API Version  1
	I0916 10:45:15.883016   27036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:15.883303   27036 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:15.883488   27036 main.go:141] libmachine: (ha-244475-m03) Calling .GetState
	I0916 10:45:15.885155   27036 status.go:330] ha-244475-m03 host status = "Running" (err=<nil>)
	I0916 10:45:15.885173   27036 host.go:66] Checking if "ha-244475-m03" exists ...
	I0916 10:45:15.885456   27036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:15.885515   27036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:15.901488   27036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34345
	I0916 10:45:15.901838   27036 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:15.902291   27036 main.go:141] libmachine: Using API Version  1
	I0916 10:45:15.902312   27036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:15.902637   27036 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:15.902819   27036 main.go:141] libmachine: (ha-244475-m03) Calling .GetIP
	I0916 10:45:15.905695   27036 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:45:15.906154   27036 main.go:141] libmachine: (ha-244475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:15:60", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:40:24 +0000 UTC Type:0 Mac:52:54:00:e0:15:60 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-244475-m03 Clientid:01:52:54:00:e0:15:60}
	I0916 10:45:15.906195   27036 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:45:15.906308   27036 host.go:66] Checking if "ha-244475-m03" exists ...
	I0916 10:45:15.906737   27036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:15.906802   27036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:15.922340   27036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41505
	I0916 10:45:15.922691   27036 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:15.923160   27036 main.go:141] libmachine: Using API Version  1
	I0916 10:45:15.923178   27036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:15.923453   27036 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:15.923618   27036 main.go:141] libmachine: (ha-244475-m03) Calling .DriverName
	I0916 10:45:15.923813   27036 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:45:15.923854   27036 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHHostname
	I0916 10:45:15.926578   27036 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:45:15.927021   27036 main.go:141] libmachine: (ha-244475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:15:60", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:40:24 +0000 UTC Type:0 Mac:52:54:00:e0:15:60 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-244475-m03 Clientid:01:52:54:00:e0:15:60}
	I0916 10:45:15.927048   27036 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:45:15.927177   27036 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHPort
	I0916 10:45:15.927326   27036 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHKeyPath
	I0916 10:45:15.927449   27036 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHUsername
	I0916 10:45:15.927729   27036 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m03/id_rsa Username:docker}
	I0916 10:45:16.004530   27036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:45:16.020409   27036 kubeconfig.go:125] found "ha-244475" server: "https://192.168.39.254:8443"
	I0916 10:45:16.020437   27036 api_server.go:166] Checking apiserver status ...
	I0916 10:45:16.020484   27036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:45:16.034750   27036 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1388/cgroup
	W0916 10:45:16.045218   27036 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1388/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0916 10:45:16.045285   27036 ssh_runner.go:195] Run: ls
	I0916 10:45:16.054014   27036 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0916 10:45:16.060084   27036 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0916 10:45:16.060112   27036 status.go:422] ha-244475-m03 apiserver status = Running (err=<nil>)
	I0916 10:45:16.060122   27036 status.go:257] ha-244475-m03 status: &{Name:ha-244475-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 10:45:16.060148   27036 status.go:255] checking status of ha-244475-m04 ...
	I0916 10:45:16.060439   27036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:16.060471   27036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:16.076319   27036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36365
	I0916 10:45:16.076813   27036 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:16.077410   27036 main.go:141] libmachine: Using API Version  1
	I0916 10:45:16.077436   27036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:16.077804   27036 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:16.077998   27036 main.go:141] libmachine: (ha-244475-m04) Calling .GetState
	I0916 10:45:16.079712   27036 status.go:330] ha-244475-m04 host status = "Running" (err=<nil>)
	I0916 10:45:16.079725   27036 host.go:66] Checking if "ha-244475-m04" exists ...
	I0916 10:45:16.080009   27036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:16.080093   27036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:16.095003   27036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34303
	I0916 10:45:16.095418   27036 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:16.095986   27036 main.go:141] libmachine: Using API Version  1
	I0916 10:45:16.096013   27036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:16.096349   27036 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:16.096546   27036 main.go:141] libmachine: (ha-244475-m04) Calling .GetIP
	I0916 10:45:16.099481   27036 main.go:141] libmachine: (ha-244475-m04) DBG | domain ha-244475-m04 has defined MAC address 52:54:00:d1:b8:75 in network mk-ha-244475
	I0916 10:45:16.099938   27036 main.go:141] libmachine: (ha-244475-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:b8:75", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:41:48 +0000 UTC Type:0 Mac:52:54:00:d1:b8:75 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-244475-m04 Clientid:01:52:54:00:d1:b8:75}
	I0916 10:45:16.099966   27036 main.go:141] libmachine: (ha-244475-m04) DBG | domain ha-244475-m04 has defined IP address 192.168.39.110 and MAC address 52:54:00:d1:b8:75 in network mk-ha-244475
	I0916 10:45:16.100136   27036 host.go:66] Checking if "ha-244475-m04" exists ...
	I0916 10:45:16.100497   27036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:16.100545   27036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:16.115899   27036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39259
	I0916 10:45:16.116276   27036 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:16.116718   27036 main.go:141] libmachine: Using API Version  1
	I0916 10:45:16.116741   27036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:16.117018   27036 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:16.117223   27036 main.go:141] libmachine: (ha-244475-m04) Calling .DriverName
	I0916 10:45:16.117418   27036 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:45:16.117439   27036 main.go:141] libmachine: (ha-244475-m04) Calling .GetSSHHostname
	I0916 10:45:16.120088   27036 main.go:141] libmachine: (ha-244475-m04) DBG | domain ha-244475-m04 has defined MAC address 52:54:00:d1:b8:75 in network mk-ha-244475
	I0916 10:45:16.120543   27036 main.go:141] libmachine: (ha-244475-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:b8:75", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:41:48 +0000 UTC Type:0 Mac:52:54:00:d1:b8:75 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-244475-m04 Clientid:01:52:54:00:d1:b8:75}
	I0916 10:45:16.120566   27036 main.go:141] libmachine: (ha-244475-m04) DBG | domain ha-244475-m04 has defined IP address 192.168.39.110 and MAC address 52:54:00:d1:b8:75 in network mk-ha-244475
	I0916 10:45:16.120675   27036 main.go:141] libmachine: (ha-244475-m04) Calling .GetSSHPort
	I0916 10:45:16.120810   27036 main.go:141] libmachine: (ha-244475-m04) Calling .GetSSHKeyPath
	I0916 10:45:16.120951   27036 main.go:141] libmachine: (ha-244475-m04) Calling .GetSSHUsername
	I0916 10:45:16.121077   27036 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m04/id_rsa Username:docker}
	I0916 10:45:16.204249   27036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:45:16.218915   27036 status.go:257] ha-244475-m04 status: &{Name:ha-244475-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-244475 status -v=7 --alsologtostderr: exit status 3 (4.878559117s)

-- stdout --
	ha-244475
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-244475-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-244475-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-244475-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0916 10:45:17.524595   27135 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:45:17.524710   27135 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:45:17.524720   27135 out.go:358] Setting ErrFile to fd 2...
	I0916 10:45:17.524724   27135 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:45:17.524896   27135 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3851/.minikube/bin
	I0916 10:45:17.525056   27135 out.go:352] Setting JSON to false
	I0916 10:45:17.525085   27135 mustload.go:65] Loading cluster: ha-244475
	I0916 10:45:17.525180   27135 notify.go:220] Checking for updates...
	I0916 10:45:17.525502   27135 config.go:182] Loaded profile config "ha-244475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:45:17.525517   27135 status.go:255] checking status of ha-244475 ...
	I0916 10:45:17.526136   27135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:17.526171   27135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:17.544289   27135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35331
	I0916 10:45:17.544904   27135 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:17.545585   27135 main.go:141] libmachine: Using API Version  1
	I0916 10:45:17.545613   27135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:17.545954   27135 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:17.546156   27135 main.go:141] libmachine: (ha-244475) Calling .GetState
	I0916 10:45:17.548301   27135 status.go:330] ha-244475 host status = "Running" (err=<nil>)
	I0916 10:45:17.548318   27135 host.go:66] Checking if "ha-244475" exists ...
	I0916 10:45:17.548655   27135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:17.548696   27135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:17.564167   27135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35107
	I0916 10:45:17.564636   27135 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:17.565205   27135 main.go:141] libmachine: Using API Version  1
	I0916 10:45:17.565232   27135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:17.565610   27135 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:17.565855   27135 main.go:141] libmachine: (ha-244475) Calling .GetIP
	I0916 10:45:17.568846   27135 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:45:17.569313   27135 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:45:17.569341   27135 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:45:17.569482   27135 host.go:66] Checking if "ha-244475" exists ...
	I0916 10:45:17.569770   27135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:17.569807   27135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:17.584836   27135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41035
	I0916 10:45:17.585274   27135 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:17.585764   27135 main.go:141] libmachine: Using API Version  1
	I0916 10:45:17.585780   27135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:17.586054   27135 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:17.586228   27135 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:45:17.586396   27135 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:45:17.586427   27135 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:45:17.589059   27135 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:45:17.589526   27135 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:45:17.589546   27135 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:45:17.589680   27135 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:45:17.589822   27135 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:45:17.589996   27135 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:45:17.590177   27135 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/id_rsa Username:docker}
	I0916 10:45:17.673300   27135 ssh_runner.go:195] Run: systemctl --version
	I0916 10:45:17.679497   27135 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:45:17.696078   27135 kubeconfig.go:125] found "ha-244475" server: "https://192.168.39.254:8443"
	I0916 10:45:17.696110   27135 api_server.go:166] Checking apiserver status ...
	I0916 10:45:17.696141   27135 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:45:17.711642   27135 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1130/cgroup
	W0916 10:45:17.722743   27135 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1130/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0916 10:45:17.722804   27135 ssh_runner.go:195] Run: ls
	I0916 10:45:17.727507   27135 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0916 10:45:17.731941   27135 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0916 10:45:17.731961   27135 status.go:422] ha-244475 apiserver status = Running (err=<nil>)
	I0916 10:45:17.731980   27135 status.go:257] ha-244475 status: &{Name:ha-244475 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 10:45:17.731998   27135 status.go:255] checking status of ha-244475-m02 ...
	I0916 10:45:17.732279   27135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:17.732310   27135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:17.747742   27135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36727
	I0916 10:45:17.748223   27135 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:17.748743   27135 main.go:141] libmachine: Using API Version  1
	I0916 10:45:17.748763   27135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:17.749026   27135 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:17.749198   27135 main.go:141] libmachine: (ha-244475-m02) Calling .GetState
	I0916 10:45:17.750703   27135 status.go:330] ha-244475-m02 host status = "Running" (err=<nil>)
	I0916 10:45:17.750720   27135 host.go:66] Checking if "ha-244475-m02" exists ...
	I0916 10:45:17.751002   27135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:17.751032   27135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:17.765750   27135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34879
	I0916 10:45:17.766175   27135 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:17.766718   27135 main.go:141] libmachine: Using API Version  1
	I0916 10:45:17.766749   27135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:17.767076   27135 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:17.767264   27135 main.go:141] libmachine: (ha-244475-m02) Calling .GetIP
	I0916 10:45:17.770088   27135 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:45:17.770542   27135 main.go:141] libmachine: (ha-244475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:fc:95", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:39:13 +0000 UTC Type:0 Mac:52:54:00:ed:fc:95 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-244475-m02 Clientid:01:52:54:00:ed:fc:95}
	I0916 10:45:17.770570   27135 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:45:17.770734   27135 host.go:66] Checking if "ha-244475-m02" exists ...
	I0916 10:45:17.771046   27135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:17.771085   27135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:17.785906   27135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40739
	I0916 10:45:17.786341   27135 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:17.786776   27135 main.go:141] libmachine: Using API Version  1
	I0916 10:45:17.786795   27135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:17.787138   27135 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:17.787327   27135 main.go:141] libmachine: (ha-244475-m02) Calling .DriverName
	I0916 10:45:17.787508   27135 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:45:17.787527   27135 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHHostname
	I0916 10:45:17.790303   27135 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:45:17.790709   27135 main.go:141] libmachine: (ha-244475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:fc:95", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:39:13 +0000 UTC Type:0 Mac:52:54:00:ed:fc:95 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-244475-m02 Clientid:01:52:54:00:ed:fc:95}
	I0916 10:45:17.790737   27135 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:45:17.790891   27135 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHPort
	I0916 10:45:17.791063   27135 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHKeyPath
	I0916 10:45:17.791199   27135 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHUsername
	I0916 10:45:17.791327   27135 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m02/id_rsa Username:docker}
	W0916 10:45:18.937412   27135 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.222:22: connect: no route to host
	I0916 10:45:18.937457   27135 retry.go:31] will retry after 249.571664ms: dial tcp 192.168.39.222:22: connect: no route to host
	W0916 10:45:22.009400   27135 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.222:22: connect: no route to host
	W0916 10:45:22.009498   27135 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.222:22: connect: no route to host
	E0916 10:45:22.009516   27135 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.222:22: connect: no route to host
	I0916 10:45:22.009524   27135 status.go:257] ha-244475-m02 status: &{Name:ha-244475-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0916 10:45:22.009551   27135 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.222:22: connect: no route to host
	I0916 10:45:22.009561   27135 status.go:255] checking status of ha-244475-m03 ...
	I0916 10:45:22.009903   27135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:22.009953   27135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:22.025462   27135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39105
	I0916 10:45:22.025901   27135 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:22.026348   27135 main.go:141] libmachine: Using API Version  1
	I0916 10:45:22.026368   27135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:22.026765   27135 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:22.026930   27135 main.go:141] libmachine: (ha-244475-m03) Calling .GetState
	I0916 10:45:22.028400   27135 status.go:330] ha-244475-m03 host status = "Running" (err=<nil>)
	I0916 10:45:22.028413   27135 host.go:66] Checking if "ha-244475-m03" exists ...
	I0916 10:45:22.028771   27135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:22.028811   27135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:22.043813   27135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46595
	I0916 10:45:22.044293   27135 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:22.044806   27135 main.go:141] libmachine: Using API Version  1
	I0916 10:45:22.044838   27135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:22.045201   27135 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:22.045399   27135 main.go:141] libmachine: (ha-244475-m03) Calling .GetIP
	I0916 10:45:22.048136   27135 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:45:22.048577   27135 main.go:141] libmachine: (ha-244475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:15:60", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:40:24 +0000 UTC Type:0 Mac:52:54:00:e0:15:60 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-244475-m03 Clientid:01:52:54:00:e0:15:60}
	I0916 10:45:22.048620   27135 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:45:22.048709   27135 host.go:66] Checking if "ha-244475-m03" exists ...
	I0916 10:45:22.049096   27135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:22.049174   27135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:22.064214   27135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46191
	I0916 10:45:22.064683   27135 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:22.065151   27135 main.go:141] libmachine: Using API Version  1
	I0916 10:45:22.065184   27135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:22.065523   27135 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:22.065697   27135 main.go:141] libmachine: (ha-244475-m03) Calling .DriverName
	I0916 10:45:22.065870   27135 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:45:22.065897   27135 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHHostname
	I0916 10:45:22.068400   27135 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:45:22.068801   27135 main.go:141] libmachine: (ha-244475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:15:60", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:40:24 +0000 UTC Type:0 Mac:52:54:00:e0:15:60 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-244475-m03 Clientid:01:52:54:00:e0:15:60}
	I0916 10:45:22.068834   27135 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:45:22.069018   27135 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHPort
	I0916 10:45:22.069192   27135 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHKeyPath
	I0916 10:45:22.069345   27135 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHUsername
	I0916 10:45:22.069453   27135 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m03/id_rsa Username:docker}
	I0916 10:45:22.148938   27135 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:45:22.165364   27135 kubeconfig.go:125] found "ha-244475" server: "https://192.168.39.254:8443"
	I0916 10:45:22.165396   27135 api_server.go:166] Checking apiserver status ...
	I0916 10:45:22.165449   27135 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:45:22.183136   27135 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1388/cgroup
	W0916 10:45:22.192884   27135 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1388/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0916 10:45:22.192931   27135 ssh_runner.go:195] Run: ls
	I0916 10:45:22.197651   27135 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0916 10:45:22.201974   27135 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0916 10:45:22.201996   27135 status.go:422] ha-244475-m03 apiserver status = Running (err=<nil>)
	I0916 10:45:22.202004   27135 status.go:257] ha-244475-m03 status: &{Name:ha-244475-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 10:45:22.202020   27135 status.go:255] checking status of ha-244475-m04 ...
	I0916 10:45:22.202317   27135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:22.202351   27135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:22.218444   27135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35019
	I0916 10:45:22.218933   27135 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:22.219425   27135 main.go:141] libmachine: Using API Version  1
	I0916 10:45:22.219450   27135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:22.219711   27135 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:22.219888   27135 main.go:141] libmachine: (ha-244475-m04) Calling .GetState
	I0916 10:45:22.221482   27135 status.go:330] ha-244475-m04 host status = "Running" (err=<nil>)
	I0916 10:45:22.221497   27135 host.go:66] Checking if "ha-244475-m04" exists ...
	I0916 10:45:22.221820   27135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:22.221863   27135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:22.236858   27135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33157
	I0916 10:45:22.237320   27135 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:22.237806   27135 main.go:141] libmachine: Using API Version  1
	I0916 10:45:22.237833   27135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:22.238135   27135 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:22.238299   27135 main.go:141] libmachine: (ha-244475-m04) Calling .GetIP
	I0916 10:45:22.240934   27135 main.go:141] libmachine: (ha-244475-m04) DBG | domain ha-244475-m04 has defined MAC address 52:54:00:d1:b8:75 in network mk-ha-244475
	I0916 10:45:22.241311   27135 main.go:141] libmachine: (ha-244475-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:b8:75", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:41:48 +0000 UTC Type:0 Mac:52:54:00:d1:b8:75 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-244475-m04 Clientid:01:52:54:00:d1:b8:75}
	I0916 10:45:22.241344   27135 main.go:141] libmachine: (ha-244475-m04) DBG | domain ha-244475-m04 has defined IP address 192.168.39.110 and MAC address 52:54:00:d1:b8:75 in network mk-ha-244475
	I0916 10:45:22.241497   27135 host.go:66] Checking if "ha-244475-m04" exists ...
	I0916 10:45:22.241825   27135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:22.241869   27135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:22.256994   27135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38471
	I0916 10:45:22.257482   27135 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:22.258018   27135 main.go:141] libmachine: Using API Version  1
	I0916 10:45:22.258042   27135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:22.258358   27135 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:22.258528   27135 main.go:141] libmachine: (ha-244475-m04) Calling .DriverName
	I0916 10:45:22.258701   27135 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:45:22.258719   27135 main.go:141] libmachine: (ha-244475-m04) Calling .GetSSHHostname
	I0916 10:45:22.261501   27135 main.go:141] libmachine: (ha-244475-m04) DBG | domain ha-244475-m04 has defined MAC address 52:54:00:d1:b8:75 in network mk-ha-244475
	I0916 10:45:22.261969   27135 main.go:141] libmachine: (ha-244475-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:b8:75", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:41:48 +0000 UTC Type:0 Mac:52:54:00:d1:b8:75 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-244475-m04 Clientid:01:52:54:00:d1:b8:75}
	I0916 10:45:22.261994   27135 main.go:141] libmachine: (ha-244475-m04) DBG | domain ha-244475-m04 has defined IP address 192.168.39.110 and MAC address 52:54:00:d1:b8:75 in network mk-ha-244475
	I0916 10:45:22.262136   27135 main.go:141] libmachine: (ha-244475-m04) Calling .GetSSHPort
	I0916 10:45:22.262272   27135 main.go:141] libmachine: (ha-244475-m04) Calling .GetSSHKeyPath
	I0916 10:45:22.262406   27135 main.go:141] libmachine: (ha-244475-m04) Calling .GetSSHUsername
	I0916 10:45:22.262496   27135 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m04/id_rsa Username:docker}
	I0916 10:45:22.344754   27135 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:45:22.361208   27135 status.go:257] ha-244475-m04 status: &{Name:ha-244475-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-244475 status -v=7 --alsologtostderr: exit status 3 (3.710083053s)

-- stdout --
	ha-244475
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-244475-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-244475-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-244475-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0916 10:45:25.634654   27250 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:45:25.634892   27250 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:45:25.634901   27250 out.go:358] Setting ErrFile to fd 2...
	I0916 10:45:25.634905   27250 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:45:25.635109   27250 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3851/.minikube/bin
	I0916 10:45:25.635320   27250 out.go:352] Setting JSON to false
	I0916 10:45:25.635355   27250 mustload.go:65] Loading cluster: ha-244475
	I0916 10:45:25.635452   27250 notify.go:220] Checking for updates...
	I0916 10:45:25.635833   27250 config.go:182] Loaded profile config "ha-244475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:45:25.635852   27250 status.go:255] checking status of ha-244475 ...
	I0916 10:45:25.636281   27250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:25.636340   27250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:25.655433   27250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34975
	I0916 10:45:25.655842   27250 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:25.656360   27250 main.go:141] libmachine: Using API Version  1
	I0916 10:45:25.656375   27250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:25.656739   27250 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:25.656946   27250 main.go:141] libmachine: (ha-244475) Calling .GetState
	I0916 10:45:25.658662   27250 status.go:330] ha-244475 host status = "Running" (err=<nil>)
	I0916 10:45:25.658681   27250 host.go:66] Checking if "ha-244475" exists ...
	I0916 10:45:25.658987   27250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:25.659025   27250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:25.674824   27250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44347
	I0916 10:45:25.675249   27250 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:25.675698   27250 main.go:141] libmachine: Using API Version  1
	I0916 10:45:25.675719   27250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:25.676019   27250 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:25.676202   27250 main.go:141] libmachine: (ha-244475) Calling .GetIP
	I0916 10:45:25.678628   27250 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:45:25.679039   27250 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:45:25.679074   27250 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:45:25.679194   27250 host.go:66] Checking if "ha-244475" exists ...
	I0916 10:45:25.679461   27250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:25.679506   27250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:25.694857   27250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44141
	I0916 10:45:25.695284   27250 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:25.695671   27250 main.go:141] libmachine: Using API Version  1
	I0916 10:45:25.695691   27250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:25.696022   27250 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:25.696157   27250 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:45:25.696330   27250 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:45:25.696356   27250 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:45:25.699081   27250 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:45:25.699494   27250 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:45:25.699521   27250 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:45:25.699631   27250 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:45:25.699800   27250 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:45:25.699922   27250 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:45:25.700033   27250 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/id_rsa Username:docker}
	I0916 10:45:25.788629   27250 ssh_runner.go:195] Run: systemctl --version
	I0916 10:45:25.794629   27250 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:45:25.809098   27250 kubeconfig.go:125] found "ha-244475" server: "https://192.168.39.254:8443"
	I0916 10:45:25.809155   27250 api_server.go:166] Checking apiserver status ...
	I0916 10:45:25.809200   27250 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:45:25.824020   27250 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1130/cgroup
	W0916 10:45:25.833457   27250 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1130/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0916 10:45:25.833514   27250 ssh_runner.go:195] Run: ls
	I0916 10:45:25.837765   27250 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0916 10:45:25.844045   27250 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0916 10:45:25.844074   27250 status.go:422] ha-244475 apiserver status = Running (err=<nil>)
	I0916 10:45:25.844087   27250 status.go:257] ha-244475 status: &{Name:ha-244475 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 10:45:25.844107   27250 status.go:255] checking status of ha-244475-m02 ...
	I0916 10:45:25.844402   27250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:25.844436   27250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:25.859863   27250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36081
	I0916 10:45:25.860293   27250 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:25.860775   27250 main.go:141] libmachine: Using API Version  1
	I0916 10:45:25.860794   27250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:25.861119   27250 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:25.861322   27250 main.go:141] libmachine: (ha-244475-m02) Calling .GetState
	I0916 10:45:25.862933   27250 status.go:330] ha-244475-m02 host status = "Running" (err=<nil>)
	I0916 10:45:25.862950   27250 host.go:66] Checking if "ha-244475-m02" exists ...
	I0916 10:45:25.863229   27250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:25.863260   27250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:25.878768   27250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36147
	I0916 10:45:25.879151   27250 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:25.879664   27250 main.go:141] libmachine: Using API Version  1
	I0916 10:45:25.879683   27250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:25.879969   27250 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:25.880123   27250 main.go:141] libmachine: (ha-244475-m02) Calling .GetIP
	I0916 10:45:25.882584   27250 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:45:25.883019   27250 main.go:141] libmachine: (ha-244475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:fc:95", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:39:13 +0000 UTC Type:0 Mac:52:54:00:ed:fc:95 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-244475-m02 Clientid:01:52:54:00:ed:fc:95}
	I0916 10:45:25.883046   27250 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:45:25.883183   27250 host.go:66] Checking if "ha-244475-m02" exists ...
	I0916 10:45:25.883474   27250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:25.883506   27250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:25.899079   27250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33265
	I0916 10:45:25.899473   27250 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:25.899944   27250 main.go:141] libmachine: Using API Version  1
	I0916 10:45:25.899967   27250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:25.900227   27250 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:25.900380   27250 main.go:141] libmachine: (ha-244475-m02) Calling .DriverName
	I0916 10:45:25.900563   27250 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:45:25.900588   27250 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHHostname
	I0916 10:45:25.903597   27250 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:45:25.904044   27250 main.go:141] libmachine: (ha-244475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:fc:95", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:39:13 +0000 UTC Type:0 Mac:52:54:00:ed:fc:95 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-244475-m02 Clientid:01:52:54:00:ed:fc:95}
	I0916 10:45:25.904080   27250 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:45:25.904209   27250 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHPort
	I0916 10:45:25.904383   27250 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHKeyPath
	I0916 10:45:25.904541   27250 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHUsername
	I0916 10:45:25.904661   27250 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m02/id_rsa Username:docker}
	W0916 10:45:28.953419   27250 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.222:22: connect: no route to host
	W0916 10:45:28.953538   27250 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.222:22: connect: no route to host
	E0916 10:45:28.953560   27250 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.222:22: connect: no route to host
	I0916 10:45:28.953572   27250 status.go:257] ha-244475-m02 status: &{Name:ha-244475-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0916 10:45:28.953603   27250 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.222:22: connect: no route to host
	I0916 10:45:28.953617   27250 status.go:255] checking status of ha-244475-m03 ...
	I0916 10:45:28.954068   27250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:28.954128   27250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:28.969940   27250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34573
	I0916 10:45:28.970324   27250 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:28.970804   27250 main.go:141] libmachine: Using API Version  1
	I0916 10:45:28.970824   27250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:28.971150   27250 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:28.971323   27250 main.go:141] libmachine: (ha-244475-m03) Calling .GetState
	I0916 10:45:28.972710   27250 status.go:330] ha-244475-m03 host status = "Running" (err=<nil>)
	I0916 10:45:28.972726   27250 host.go:66] Checking if "ha-244475-m03" exists ...
	I0916 10:45:28.973067   27250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:28.973113   27250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:28.987673   27250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41627
	I0916 10:45:28.988090   27250 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:28.988597   27250 main.go:141] libmachine: Using API Version  1
	I0916 10:45:28.988623   27250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:28.988979   27250 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:28.989177   27250 main.go:141] libmachine: (ha-244475-m03) Calling .GetIP
	I0916 10:45:28.991824   27250 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:45:28.992256   27250 main.go:141] libmachine: (ha-244475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:15:60", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:40:24 +0000 UTC Type:0 Mac:52:54:00:e0:15:60 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-244475-m03 Clientid:01:52:54:00:e0:15:60}
	I0916 10:45:28.992281   27250 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:45:28.992416   27250 host.go:66] Checking if "ha-244475-m03" exists ...
	I0916 10:45:28.992762   27250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:28.992800   27250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:29.007651   27250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35833
	I0916 10:45:29.008213   27250 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:29.008777   27250 main.go:141] libmachine: Using API Version  1
	I0916 10:45:29.008800   27250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:29.009200   27250 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:29.009386   27250 main.go:141] libmachine: (ha-244475-m03) Calling .DriverName
	I0916 10:45:29.009555   27250 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:45:29.009582   27250 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHHostname
	I0916 10:45:29.012615   27250 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:45:29.013040   27250 main.go:141] libmachine: (ha-244475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:15:60", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:40:24 +0000 UTC Type:0 Mac:52:54:00:e0:15:60 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-244475-m03 Clientid:01:52:54:00:e0:15:60}
	I0916 10:45:29.013062   27250 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:45:29.013207   27250 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHPort
	I0916 10:45:29.013393   27250 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHKeyPath
	I0916 10:45:29.013560   27250 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHUsername
	I0916 10:45:29.013711   27250 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m03/id_rsa Username:docker}
	I0916 10:45:29.093382   27250 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:45:29.108376   27250 kubeconfig.go:125] found "ha-244475" server: "https://192.168.39.254:8443"
	I0916 10:45:29.108416   27250 api_server.go:166] Checking apiserver status ...
	I0916 10:45:29.108458   27250 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:45:29.124817   27250 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1388/cgroup
	W0916 10:45:29.136116   27250 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1388/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0916 10:45:29.136172   27250 ssh_runner.go:195] Run: ls
	I0916 10:45:29.140689   27250 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0916 10:45:29.144986   27250 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0916 10:45:29.145004   27250 status.go:422] ha-244475-m03 apiserver status = Running (err=<nil>)
	I0916 10:45:29.145012   27250 status.go:257] ha-244475-m03 status: &{Name:ha-244475-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 10:45:29.145026   27250 status.go:255] checking status of ha-244475-m04 ...
	I0916 10:45:29.145377   27250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:29.145418   27250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:29.160196   27250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34323
	I0916 10:45:29.160659   27250 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:29.161194   27250 main.go:141] libmachine: Using API Version  1
	I0916 10:45:29.161215   27250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:29.161491   27250 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:29.161663   27250 main.go:141] libmachine: (ha-244475-m04) Calling .GetState
	I0916 10:45:29.163271   27250 status.go:330] ha-244475-m04 host status = "Running" (err=<nil>)
	I0916 10:45:29.163297   27250 host.go:66] Checking if "ha-244475-m04" exists ...
	I0916 10:45:29.163567   27250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:29.163603   27250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:29.178134   27250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45671
	I0916 10:45:29.178582   27250 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:29.179039   27250 main.go:141] libmachine: Using API Version  1
	I0916 10:45:29.179061   27250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:29.179343   27250 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:29.179538   27250 main.go:141] libmachine: (ha-244475-m04) Calling .GetIP
	I0916 10:45:29.182353   27250 main.go:141] libmachine: (ha-244475-m04) DBG | domain ha-244475-m04 has defined MAC address 52:54:00:d1:b8:75 in network mk-ha-244475
	I0916 10:45:29.182778   27250 main.go:141] libmachine: (ha-244475-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:b8:75", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:41:48 +0000 UTC Type:0 Mac:52:54:00:d1:b8:75 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-244475-m04 Clientid:01:52:54:00:d1:b8:75}
	I0916 10:45:29.182807   27250 main.go:141] libmachine: (ha-244475-m04) DBG | domain ha-244475-m04 has defined IP address 192.168.39.110 and MAC address 52:54:00:d1:b8:75 in network mk-ha-244475
	I0916 10:45:29.182944   27250 host.go:66] Checking if "ha-244475-m04" exists ...
	I0916 10:45:29.183237   27250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:29.183282   27250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:29.200177   27250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34141
	I0916 10:45:29.200604   27250 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:29.201043   27250 main.go:141] libmachine: Using API Version  1
	I0916 10:45:29.201068   27250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:29.201390   27250 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:29.201611   27250 main.go:141] libmachine: (ha-244475-m04) Calling .DriverName
	I0916 10:45:29.201780   27250 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:45:29.201803   27250 main.go:141] libmachine: (ha-244475-m04) Calling .GetSSHHostname
	I0916 10:45:29.204537   27250 main.go:141] libmachine: (ha-244475-m04) DBG | domain ha-244475-m04 has defined MAC address 52:54:00:d1:b8:75 in network mk-ha-244475
	I0916 10:45:29.204995   27250 main.go:141] libmachine: (ha-244475-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:b8:75", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:41:48 +0000 UTC Type:0 Mac:52:54:00:d1:b8:75 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-244475-m04 Clientid:01:52:54:00:d1:b8:75}
	I0916 10:45:29.205013   27250 main.go:141] libmachine: (ha-244475-m04) DBG | domain ha-244475-m04 has defined IP address 192.168.39.110 and MAC address 52:54:00:d1:b8:75 in network mk-ha-244475
	I0916 10:45:29.205195   27250 main.go:141] libmachine: (ha-244475-m04) Calling .GetSSHPort
	I0916 10:45:29.205333   27250 main.go:141] libmachine: (ha-244475-m04) Calling .GetSSHKeyPath
	I0916 10:45:29.205454   27250 main.go:141] libmachine: (ha-244475-m04) Calling .GetSSHUsername
	I0916 10:45:29.205568   27250 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m04/id_rsa Username:docker}
	I0916 10:45:29.288634   27250 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:45:29.303336   27250 status.go:257] ha-244475-m04 status: &{Name:ha-244475-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-244475 status -v=7 --alsologtostderr: exit status 3 (3.751141576s)

-- stdout --
	ha-244475
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-244475-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-244475-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-244475-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0916 10:45:34.054031   27350 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:45:34.054150   27350 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:45:34.054159   27350 out.go:358] Setting ErrFile to fd 2...
	I0916 10:45:34.054163   27350 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:45:34.054320   27350 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3851/.minikube/bin
	I0916 10:45:34.054473   27350 out.go:352] Setting JSON to false
	I0916 10:45:34.054512   27350 mustload.go:65] Loading cluster: ha-244475
	I0916 10:45:34.054558   27350 notify.go:220] Checking for updates...
	I0916 10:45:34.054998   27350 config.go:182] Loaded profile config "ha-244475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:45:34.055021   27350 status.go:255] checking status of ha-244475 ...
	I0916 10:45:34.055592   27350 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:34.055627   27350 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:34.073787   27350 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37299
	I0916 10:45:34.074227   27350 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:34.074887   27350 main.go:141] libmachine: Using API Version  1
	I0916 10:45:34.074921   27350 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:34.075278   27350 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:34.075444   27350 main.go:141] libmachine: (ha-244475) Calling .GetState
	I0916 10:45:34.077016   27350 status.go:330] ha-244475 host status = "Running" (err=<nil>)
	I0916 10:45:34.077030   27350 host.go:66] Checking if "ha-244475" exists ...
	I0916 10:45:34.077358   27350 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:34.077395   27350 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:34.092277   27350 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45761
	I0916 10:45:34.092750   27350 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:34.093263   27350 main.go:141] libmachine: Using API Version  1
	I0916 10:45:34.093300   27350 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:34.093581   27350 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:34.093748   27350 main.go:141] libmachine: (ha-244475) Calling .GetIP
	I0916 10:45:34.096706   27350 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:45:34.097227   27350 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:45:34.097261   27350 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:45:34.097370   27350 host.go:66] Checking if "ha-244475" exists ...
	I0916 10:45:34.097670   27350 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:34.097704   27350 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:34.112457   27350 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37187
	I0916 10:45:34.112936   27350 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:34.113477   27350 main.go:141] libmachine: Using API Version  1
	I0916 10:45:34.113512   27350 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:34.113879   27350 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:34.114061   27350 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:45:34.114265   27350 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:45:34.114291   27350 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:45:34.117434   27350 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:45:34.117891   27350 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:45:34.117932   27350 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:45:34.118183   27350 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:45:34.118355   27350 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:45:34.118501   27350 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:45:34.118640   27350 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/id_rsa Username:docker}
	I0916 10:45:34.205717   27350 ssh_runner.go:195] Run: systemctl --version
	I0916 10:45:34.212053   27350 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:45:34.229786   27350 kubeconfig.go:125] found "ha-244475" server: "https://192.168.39.254:8443"
	I0916 10:45:34.229832   27350 api_server.go:166] Checking apiserver status ...
	I0916 10:45:34.229877   27350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:45:34.244287   27350 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1130/cgroup
	W0916 10:45:34.254681   27350 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1130/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0916 10:45:34.254728   27350 ssh_runner.go:195] Run: ls
	I0916 10:45:34.259546   27350 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0916 10:45:34.263744   27350 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0916 10:45:34.263763   27350 status.go:422] ha-244475 apiserver status = Running (err=<nil>)
	I0916 10:45:34.263773   27350 status.go:257] ha-244475 status: &{Name:ha-244475 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 10:45:34.263788   27350 status.go:255] checking status of ha-244475-m02 ...
	I0916 10:45:34.264071   27350 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:34.264101   27350 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:34.279058   27350 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42347
	I0916 10:45:34.279412   27350 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:34.279825   27350 main.go:141] libmachine: Using API Version  1
	I0916 10:45:34.279848   27350 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:34.280184   27350 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:34.280351   27350 main.go:141] libmachine: (ha-244475-m02) Calling .GetState
	I0916 10:45:34.281788   27350 status.go:330] ha-244475-m02 host status = "Running" (err=<nil>)
	I0916 10:45:34.281801   27350 host.go:66] Checking if "ha-244475-m02" exists ...
	I0916 10:45:34.282068   27350 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:34.282114   27350 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:34.297542   27350 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40963
	I0916 10:45:34.297914   27350 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:34.298351   27350 main.go:141] libmachine: Using API Version  1
	I0916 10:45:34.298375   27350 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:34.298679   27350 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:34.298854   27350 main.go:141] libmachine: (ha-244475-m02) Calling .GetIP
	I0916 10:45:34.301820   27350 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:45:34.302219   27350 main.go:141] libmachine: (ha-244475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:fc:95", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:39:13 +0000 UTC Type:0 Mac:52:54:00:ed:fc:95 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-244475-m02 Clientid:01:52:54:00:ed:fc:95}
	I0916 10:45:34.302246   27350 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:45:34.302311   27350 host.go:66] Checking if "ha-244475-m02" exists ...
	I0916 10:45:34.302591   27350 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:34.302638   27350 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:34.317782   27350 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36717
	I0916 10:45:34.318232   27350 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:34.318662   27350 main.go:141] libmachine: Using API Version  1
	I0916 10:45:34.318682   27350 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:34.318987   27350 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:34.319141   27350 main.go:141] libmachine: (ha-244475-m02) Calling .DriverName
	I0916 10:45:34.319282   27350 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:45:34.319303   27350 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHHostname
	I0916 10:45:34.322011   27350 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:45:34.322406   27350 main.go:141] libmachine: (ha-244475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:fc:95", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:39:13 +0000 UTC Type:0 Mac:52:54:00:ed:fc:95 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-244475-m02 Clientid:01:52:54:00:ed:fc:95}
	I0916 10:45:34.322424   27350 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:45:34.322556   27350 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHPort
	I0916 10:45:34.322700   27350 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHKeyPath
	I0916 10:45:34.322806   27350 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHUsername
	I0916 10:45:34.322936   27350 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m02/id_rsa Username:docker}
	W0916 10:45:37.405403   27350 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.222:22: connect: no route to host
	W0916 10:45:37.405495   27350 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.222:22: connect: no route to host
	E0916 10:45:37.405508   27350 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.222:22: connect: no route to host
	I0916 10:45:37.405514   27350 status.go:257] ha-244475-m02 status: &{Name:ha-244475-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0916 10:45:37.405546   27350 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.222:22: connect: no route to host
	I0916 10:45:37.405559   27350 status.go:255] checking status of ha-244475-m03 ...
	I0916 10:45:37.405986   27350 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:37.406042   27350 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:37.421047   27350 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45915
	I0916 10:45:37.421443   27350 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:37.421932   27350 main.go:141] libmachine: Using API Version  1
	I0916 10:45:37.421953   27350 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:37.422254   27350 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:37.422445   27350 main.go:141] libmachine: (ha-244475-m03) Calling .GetState
	I0916 10:45:37.424074   27350 status.go:330] ha-244475-m03 host status = "Running" (err=<nil>)
	I0916 10:45:37.424092   27350 host.go:66] Checking if "ha-244475-m03" exists ...
	I0916 10:45:37.424384   27350 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:37.424417   27350 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:37.439426   27350 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44353
	I0916 10:45:37.439934   27350 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:37.440509   27350 main.go:141] libmachine: Using API Version  1
	I0916 10:45:37.440535   27350 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:37.440878   27350 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:37.441064   27350 main.go:141] libmachine: (ha-244475-m03) Calling .GetIP
	I0916 10:45:37.443924   27350 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:45:37.444294   27350 main.go:141] libmachine: (ha-244475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:15:60", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:40:24 +0000 UTC Type:0 Mac:52:54:00:e0:15:60 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-244475-m03 Clientid:01:52:54:00:e0:15:60}
	I0916 10:45:37.444315   27350 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:45:37.444485   27350 host.go:66] Checking if "ha-244475-m03" exists ...
	I0916 10:45:37.444904   27350 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:37.444947   27350 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:37.459595   27350 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42131
	I0916 10:45:37.459992   27350 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:37.460418   27350 main.go:141] libmachine: Using API Version  1
	I0916 10:45:37.460437   27350 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:37.460722   27350 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:37.460911   27350 main.go:141] libmachine: (ha-244475-m03) Calling .DriverName
	I0916 10:45:37.461154   27350 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:45:37.461177   27350 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHHostname
	I0916 10:45:37.463859   27350 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:45:37.464262   27350 main.go:141] libmachine: (ha-244475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:15:60", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:40:24 +0000 UTC Type:0 Mac:52:54:00:e0:15:60 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-244475-m03 Clientid:01:52:54:00:e0:15:60}
	I0916 10:45:37.464295   27350 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:45:37.464474   27350 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHPort
	I0916 10:45:37.464681   27350 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHKeyPath
	I0916 10:45:37.464836   27350 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHUsername
	I0916 10:45:37.464992   27350 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m03/id_rsa Username:docker}
	I0916 10:45:37.547506   27350 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:45:37.565978   27350 kubeconfig.go:125] found "ha-244475" server: "https://192.168.39.254:8443"
	I0916 10:45:37.566008   27350 api_server.go:166] Checking apiserver status ...
	I0916 10:45:37.566059   27350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:45:37.582253   27350 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1388/cgroup
	W0916 10:45:37.596895   27350 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1388/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0916 10:45:37.596944   27350 ssh_runner.go:195] Run: ls
	I0916 10:45:37.601785   27350 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0916 10:45:37.606914   27350 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0916 10:45:37.606935   27350 status.go:422] ha-244475-m03 apiserver status = Running (err=<nil>)
	I0916 10:45:37.606943   27350 status.go:257] ha-244475-m03 status: &{Name:ha-244475-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 10:45:37.606957   27350 status.go:255] checking status of ha-244475-m04 ...
	I0916 10:45:37.607243   27350 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:37.607280   27350 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:37.621893   27350 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37989
	I0916 10:45:37.622352   27350 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:37.622840   27350 main.go:141] libmachine: Using API Version  1
	I0916 10:45:37.622860   27350 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:37.623145   27350 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:37.623320   27350 main.go:141] libmachine: (ha-244475-m04) Calling .GetState
	I0916 10:45:37.624719   27350 status.go:330] ha-244475-m04 host status = "Running" (err=<nil>)
	I0916 10:45:37.624738   27350 host.go:66] Checking if "ha-244475-m04" exists ...
	I0916 10:45:37.625026   27350 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:37.625058   27350 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:37.639919   27350 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43217
	I0916 10:45:37.640401   27350 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:37.640873   27350 main.go:141] libmachine: Using API Version  1
	I0916 10:45:37.640894   27350 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:37.641214   27350 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:37.641405   27350 main.go:141] libmachine: (ha-244475-m04) Calling .GetIP
	I0916 10:45:37.644298   27350 main.go:141] libmachine: (ha-244475-m04) DBG | domain ha-244475-m04 has defined MAC address 52:54:00:d1:b8:75 in network mk-ha-244475
	I0916 10:45:37.644715   27350 main.go:141] libmachine: (ha-244475-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:b8:75", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:41:48 +0000 UTC Type:0 Mac:52:54:00:d1:b8:75 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-244475-m04 Clientid:01:52:54:00:d1:b8:75}
	I0916 10:45:37.644745   27350 main.go:141] libmachine: (ha-244475-m04) DBG | domain ha-244475-m04 has defined IP address 192.168.39.110 and MAC address 52:54:00:d1:b8:75 in network mk-ha-244475
	I0916 10:45:37.644906   27350 host.go:66] Checking if "ha-244475-m04" exists ...
	I0916 10:45:37.645223   27350 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:37.645260   27350 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:37.660691   27350 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38843
	I0916 10:45:37.661108   27350 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:37.661582   27350 main.go:141] libmachine: Using API Version  1
	I0916 10:45:37.661611   27350 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:37.661949   27350 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:37.662112   27350 main.go:141] libmachine: (ha-244475-m04) Calling .DriverName
	I0916 10:45:37.662289   27350 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:45:37.662314   27350 main.go:141] libmachine: (ha-244475-m04) Calling .GetSSHHostname
	I0916 10:45:37.664915   27350 main.go:141] libmachine: (ha-244475-m04) DBG | domain ha-244475-m04 has defined MAC address 52:54:00:d1:b8:75 in network mk-ha-244475
	I0916 10:45:37.665391   27350 main.go:141] libmachine: (ha-244475-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:b8:75", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:41:48 +0000 UTC Type:0 Mac:52:54:00:d1:b8:75 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-244475-m04 Clientid:01:52:54:00:d1:b8:75}
	I0916 10:45:37.665416   27350 main.go:141] libmachine: (ha-244475-m04) DBG | domain ha-244475-m04 has defined IP address 192.168.39.110 and MAC address 52:54:00:d1:b8:75 in network mk-ha-244475
	I0916 10:45:37.665597   27350 main.go:141] libmachine: (ha-244475-m04) Calling .GetSSHPort
	I0916 10:45:37.665768   27350 main.go:141] libmachine: (ha-244475-m04) Calling .GetSSHKeyPath
	I0916 10:45:37.665891   27350 main.go:141] libmachine: (ha-244475-m04) Calling .GetSSHUsername
	I0916 10:45:37.665990   27350 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m04/id_rsa Username:docker}
	I0916 10:45:37.748289   27350 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:45:37.762763   27350 status.go:257] ha-244475-m04 status: &{Name:ha-244475-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-244475 status -v=7 --alsologtostderr: exit status 7 (623.134883ms)

                                                
                                                
-- stdout --
	ha-244475
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-244475-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-244475-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-244475-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 10:45:43.923779   27487 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:45:43.924156   27487 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:45:43.924170   27487 out.go:358] Setting ErrFile to fd 2...
	I0916 10:45:43.924177   27487 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:45:43.924697   27487 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3851/.minikube/bin
	I0916 10:45:43.925002   27487 out.go:352] Setting JSON to false
	I0916 10:45:43.925180   27487 mustload.go:65] Loading cluster: ha-244475
	I0916 10:45:43.925262   27487 notify.go:220] Checking for updates...
	I0916 10:45:43.925749   27487 config.go:182] Loaded profile config "ha-244475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:45:43.925771   27487 status.go:255] checking status of ha-244475 ...
	I0916 10:45:43.926174   27487 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:43.926211   27487 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:43.941374   27487 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34831
	I0916 10:45:43.941763   27487 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:43.942242   27487 main.go:141] libmachine: Using API Version  1
	I0916 10:45:43.942263   27487 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:43.942658   27487 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:43.942858   27487 main.go:141] libmachine: (ha-244475) Calling .GetState
	I0916 10:45:43.944530   27487 status.go:330] ha-244475 host status = "Running" (err=<nil>)
	I0916 10:45:43.944546   27487 host.go:66] Checking if "ha-244475" exists ...
	I0916 10:45:43.944856   27487 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:43.944905   27487 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:43.959950   27487 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43239
	I0916 10:45:43.960338   27487 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:43.960784   27487 main.go:141] libmachine: Using API Version  1
	I0916 10:45:43.960802   27487 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:43.961074   27487 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:43.961249   27487 main.go:141] libmachine: (ha-244475) Calling .GetIP
	I0916 10:45:43.963808   27487 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:45:43.964234   27487 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:45:43.964260   27487 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:45:43.964399   27487 host.go:66] Checking if "ha-244475" exists ...
	I0916 10:45:43.964716   27487 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:43.964758   27487 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:43.979536   27487 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41967
	I0916 10:45:43.979914   27487 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:43.980329   27487 main.go:141] libmachine: Using API Version  1
	I0916 10:45:43.980347   27487 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:43.980618   27487 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:43.980802   27487 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:45:43.980975   27487 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:45:43.980999   27487 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:45:43.983551   27487 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:45:43.984021   27487 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:45:43.984044   27487 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:45:43.984194   27487 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:45:43.984377   27487 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:45:43.984528   27487 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:45:43.984657   27487 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/id_rsa Username:docker}
	I0916 10:45:44.074383   27487 ssh_runner.go:195] Run: systemctl --version
	I0916 10:45:44.081527   27487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:45:44.097704   27487 kubeconfig.go:125] found "ha-244475" server: "https://192.168.39.254:8443"
	I0916 10:45:44.097745   27487 api_server.go:166] Checking apiserver status ...
	I0916 10:45:44.097787   27487 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:45:44.114677   27487 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1130/cgroup
	W0916 10:45:44.126231   27487 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1130/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0916 10:45:44.126296   27487 ssh_runner.go:195] Run: ls
	I0916 10:45:44.132697   27487 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0916 10:45:44.139917   27487 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0916 10:45:44.139941   27487 status.go:422] ha-244475 apiserver status = Running (err=<nil>)
	I0916 10:45:44.139953   27487 status.go:257] ha-244475 status: &{Name:ha-244475 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 10:45:44.139972   27487 status.go:255] checking status of ha-244475-m02 ...
	I0916 10:45:44.140250   27487 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:44.140292   27487 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:44.155092   27487 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40487
	I0916 10:45:44.155456   27487 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:44.155930   27487 main.go:141] libmachine: Using API Version  1
	I0916 10:45:44.155951   27487 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:44.156323   27487 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:44.156527   27487 main.go:141] libmachine: (ha-244475-m02) Calling .GetState
	I0916 10:45:44.158021   27487 status.go:330] ha-244475-m02 host status = "Stopped" (err=<nil>)
	I0916 10:45:44.158036   27487 status.go:343] host is not running, skipping remaining checks
	I0916 10:45:44.158043   27487 status.go:257] ha-244475-m02 status: &{Name:ha-244475-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 10:45:44.158074   27487 status.go:255] checking status of ha-244475-m03 ...
	I0916 10:45:44.158462   27487 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:44.158513   27487 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:44.173272   27487 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33027
	I0916 10:45:44.173781   27487 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:44.174267   27487 main.go:141] libmachine: Using API Version  1
	I0916 10:45:44.174293   27487 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:44.174644   27487 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:44.174822   27487 main.go:141] libmachine: (ha-244475-m03) Calling .GetState
	I0916 10:45:44.176638   27487 status.go:330] ha-244475-m03 host status = "Running" (err=<nil>)
	I0916 10:45:44.176655   27487 host.go:66] Checking if "ha-244475-m03" exists ...
	I0916 10:45:44.177070   27487 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:44.177105   27487 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:44.191703   27487 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45067
	I0916 10:45:44.192078   27487 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:44.192581   27487 main.go:141] libmachine: Using API Version  1
	I0916 10:45:44.192605   27487 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:44.192909   27487 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:44.193139   27487 main.go:141] libmachine: (ha-244475-m03) Calling .GetIP
	I0916 10:45:44.196054   27487 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:45:44.196459   27487 main.go:141] libmachine: (ha-244475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:15:60", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:40:24 +0000 UTC Type:0 Mac:52:54:00:e0:15:60 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-244475-m03 Clientid:01:52:54:00:e0:15:60}
	I0916 10:45:44.196485   27487 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:45:44.196618   27487 host.go:66] Checking if "ha-244475-m03" exists ...
	I0916 10:45:44.196928   27487 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:44.196971   27487 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:44.211707   27487 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35523
	I0916 10:45:44.212203   27487 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:44.212754   27487 main.go:141] libmachine: Using API Version  1
	I0916 10:45:44.212786   27487 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:44.213148   27487 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:44.213374   27487 main.go:141] libmachine: (ha-244475-m03) Calling .DriverName
	I0916 10:45:44.213593   27487 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:45:44.213623   27487 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHHostname
	I0916 10:45:44.216329   27487 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:45:44.216904   27487 main.go:141] libmachine: (ha-244475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:15:60", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:40:24 +0000 UTC Type:0 Mac:52:54:00:e0:15:60 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-244475-m03 Clientid:01:52:54:00:e0:15:60}
	I0916 10:45:44.216929   27487 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:45:44.217091   27487 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHPort
	I0916 10:45:44.217268   27487 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHKeyPath
	I0916 10:45:44.217415   27487 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHUsername
	I0916 10:45:44.217543   27487 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m03/id_rsa Username:docker}
	I0916 10:45:44.300960   27487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:45:44.317297   27487 kubeconfig.go:125] found "ha-244475" server: "https://192.168.39.254:8443"
	I0916 10:45:44.317331   27487 api_server.go:166] Checking apiserver status ...
	I0916 10:45:44.317385   27487 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:45:44.330771   27487 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1388/cgroup
	W0916 10:45:44.340071   27487 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1388/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0916 10:45:44.340123   27487 ssh_runner.go:195] Run: ls
	I0916 10:45:44.344184   27487 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0916 10:45:44.349316   27487 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0916 10:45:44.349334   27487 status.go:422] ha-244475-m03 apiserver status = Running (err=<nil>)
	I0916 10:45:44.349342   27487 status.go:257] ha-244475-m03 status: &{Name:ha-244475-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 10:45:44.349357   27487 status.go:255] checking status of ha-244475-m04 ...
	I0916 10:45:44.349616   27487 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:44.349662   27487 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:44.364292   27487 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41969
	I0916 10:45:44.364681   27487 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:44.365221   27487 main.go:141] libmachine: Using API Version  1
	I0916 10:45:44.365243   27487 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:44.365556   27487 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:44.365746   27487 main.go:141] libmachine: (ha-244475-m04) Calling .GetState
	I0916 10:45:44.367151   27487 status.go:330] ha-244475-m04 host status = "Running" (err=<nil>)
	I0916 10:45:44.367169   27487 host.go:66] Checking if "ha-244475-m04" exists ...
	I0916 10:45:44.367502   27487 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:44.367542   27487 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:44.381931   27487 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42055
	I0916 10:45:44.382423   27487 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:44.382878   27487 main.go:141] libmachine: Using API Version  1
	I0916 10:45:44.382902   27487 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:44.383236   27487 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:44.383426   27487 main.go:141] libmachine: (ha-244475-m04) Calling .GetIP
	I0916 10:45:44.386234   27487 main.go:141] libmachine: (ha-244475-m04) DBG | domain ha-244475-m04 has defined MAC address 52:54:00:d1:b8:75 in network mk-ha-244475
	I0916 10:45:44.386620   27487 main.go:141] libmachine: (ha-244475-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:b8:75", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:41:48 +0000 UTC Type:0 Mac:52:54:00:d1:b8:75 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-244475-m04 Clientid:01:52:54:00:d1:b8:75}
	I0916 10:45:44.386651   27487 main.go:141] libmachine: (ha-244475-m04) DBG | domain ha-244475-m04 has defined IP address 192.168.39.110 and MAC address 52:54:00:d1:b8:75 in network mk-ha-244475
	I0916 10:45:44.386746   27487 host.go:66] Checking if "ha-244475-m04" exists ...
	I0916 10:45:44.387048   27487 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:44.387092   27487 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:44.401888   27487 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35325
	I0916 10:45:44.402275   27487 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:44.402770   27487 main.go:141] libmachine: Using API Version  1
	I0916 10:45:44.402796   27487 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:44.403092   27487 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:44.403260   27487 main.go:141] libmachine: (ha-244475-m04) Calling .DriverName
	I0916 10:45:44.403429   27487 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:45:44.403450   27487 main.go:141] libmachine: (ha-244475-m04) Calling .GetSSHHostname
	I0916 10:45:44.406311   27487 main.go:141] libmachine: (ha-244475-m04) DBG | domain ha-244475-m04 has defined MAC address 52:54:00:d1:b8:75 in network mk-ha-244475
	I0916 10:45:44.406749   27487 main.go:141] libmachine: (ha-244475-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:b8:75", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:41:48 +0000 UTC Type:0 Mac:52:54:00:d1:b8:75 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-244475-m04 Clientid:01:52:54:00:d1:b8:75}
	I0916 10:45:44.406773   27487 main.go:141] libmachine: (ha-244475-m04) DBG | domain ha-244475-m04 has defined IP address 192.168.39.110 and MAC address 52:54:00:d1:b8:75 in network mk-ha-244475
	I0916 10:45:44.406959   27487 main.go:141] libmachine: (ha-244475-m04) Calling .GetSSHPort
	I0916 10:45:44.407094   27487 main.go:141] libmachine: (ha-244475-m04) Calling .GetSSHKeyPath
	I0916 10:45:44.407220   27487 main.go:141] libmachine: (ha-244475-m04) Calling .GetSSHUsername
	I0916 10:45:44.407342   27487 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m04/id_rsa Username:docker}
	I0916 10:45:44.488928   27487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:45:44.503508   27487 status.go:257] ha-244475-m04 status: &{Name:ha-244475-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-244475 status -v=7 --alsologtostderr: exit status 7 (632.298478ms)

                                                
                                                
-- stdout --
	ha-244475
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-244475-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-244475-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-244475-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 10:45:49.400296   27590 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:45:49.400545   27590 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:45:49.400555   27590 out.go:358] Setting ErrFile to fd 2...
	I0916 10:45:49.400559   27590 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:45:49.400745   27590 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3851/.minikube/bin
	I0916 10:45:49.400901   27590 out.go:352] Setting JSON to false
	I0916 10:45:49.400930   27590 mustload.go:65] Loading cluster: ha-244475
	I0916 10:45:49.401049   27590 notify.go:220] Checking for updates...
	I0916 10:45:49.401514   27590 config.go:182] Loaded profile config "ha-244475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:45:49.401536   27590 status.go:255] checking status of ha-244475 ...
	I0916 10:45:49.402022   27590 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:49.402063   27590 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:49.420348   27590 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42683
	I0916 10:45:49.420848   27590 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:49.421467   27590 main.go:141] libmachine: Using API Version  1
	I0916 10:45:49.421490   27590 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:49.421870   27590 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:49.422056   27590 main.go:141] libmachine: (ha-244475) Calling .GetState
	I0916 10:45:49.423646   27590 status.go:330] ha-244475 host status = "Running" (err=<nil>)
	I0916 10:45:49.423665   27590 host.go:66] Checking if "ha-244475" exists ...
	I0916 10:45:49.424121   27590 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:49.424169   27590 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:49.439272   27590 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42957
	I0916 10:45:49.439737   27590 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:49.440242   27590 main.go:141] libmachine: Using API Version  1
	I0916 10:45:49.440270   27590 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:49.440621   27590 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:49.440807   27590 main.go:141] libmachine: (ha-244475) Calling .GetIP
	I0916 10:45:49.443378   27590 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:45:49.443806   27590 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:45:49.443844   27590 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:45:49.443962   27590 host.go:66] Checking if "ha-244475" exists ...
	I0916 10:45:49.444275   27590 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:49.444326   27590 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:49.459514   27590 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40491
	I0916 10:45:49.459994   27590 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:49.460509   27590 main.go:141] libmachine: Using API Version  1
	I0916 10:45:49.460532   27590 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:49.460839   27590 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:49.461020   27590 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:45:49.461207   27590 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:45:49.461226   27590 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:45:49.463885   27590 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:45:49.464367   27590 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:45:49.464399   27590 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:45:49.464545   27590 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:45:49.464716   27590 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:45:49.464843   27590 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:45:49.464962   27590 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/id_rsa Username:docker}
	I0916 10:45:49.554014   27590 ssh_runner.go:195] Run: systemctl --version
	I0916 10:45:49.561652   27590 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:45:49.577754   27590 kubeconfig.go:125] found "ha-244475" server: "https://192.168.39.254:8443"
	I0916 10:45:49.577792   27590 api_server.go:166] Checking apiserver status ...
	I0916 10:45:49.577826   27590 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:45:49.598969   27590 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1130/cgroup
	W0916 10:45:49.608919   27590 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1130/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0916 10:45:49.608977   27590 ssh_runner.go:195] Run: ls
	I0916 10:45:49.614370   27590 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0916 10:45:49.618320   27590 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0916 10:45:49.618339   27590 status.go:422] ha-244475 apiserver status = Running (err=<nil>)
	I0916 10:45:49.618348   27590 status.go:257] ha-244475 status: &{Name:ha-244475 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 10:45:49.618372   27590 status.go:255] checking status of ha-244475-m02 ...
	I0916 10:45:49.618701   27590 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:49.618737   27590 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:49.633585   27590 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37621
	I0916 10:45:49.634028   27590 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:49.634530   27590 main.go:141] libmachine: Using API Version  1
	I0916 10:45:49.634558   27590 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:49.634855   27590 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:49.635073   27590 main.go:141] libmachine: (ha-244475-m02) Calling .GetState
	I0916 10:45:49.636442   27590 status.go:330] ha-244475-m02 host status = "Stopped" (err=<nil>)
	I0916 10:45:49.636455   27590 status.go:343] host is not running, skipping remaining checks
	I0916 10:45:49.636462   27590 status.go:257] ha-244475-m02 status: &{Name:ha-244475-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 10:45:49.636494   27590 status.go:255] checking status of ha-244475-m03 ...
	I0916 10:45:49.636793   27590 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:49.636836   27590 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:49.651598   27590 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40213
	I0916 10:45:49.652063   27590 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:49.652552   27590 main.go:141] libmachine: Using API Version  1
	I0916 10:45:49.652575   27590 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:49.652871   27590 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:49.653068   27590 main.go:141] libmachine: (ha-244475-m03) Calling .GetState
	I0916 10:45:49.654460   27590 status.go:330] ha-244475-m03 host status = "Running" (err=<nil>)
	I0916 10:45:49.654477   27590 host.go:66] Checking if "ha-244475-m03" exists ...
	I0916 10:45:49.654763   27590 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:49.654799   27590 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:49.670172   27590 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44959
	I0916 10:45:49.670638   27590 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:49.671123   27590 main.go:141] libmachine: Using API Version  1
	I0916 10:45:49.671143   27590 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:49.671487   27590 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:49.671671   27590 main.go:141] libmachine: (ha-244475-m03) Calling .GetIP
	I0916 10:45:49.674576   27590 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:45:49.674954   27590 main.go:141] libmachine: (ha-244475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:15:60", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:40:24 +0000 UTC Type:0 Mac:52:54:00:e0:15:60 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-244475-m03 Clientid:01:52:54:00:e0:15:60}
	I0916 10:45:49.674982   27590 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:45:49.675100   27590 host.go:66] Checking if "ha-244475-m03" exists ...
	I0916 10:45:49.675400   27590 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:49.675442   27590 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:49.691654   27590 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35567
	I0916 10:45:49.692011   27590 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:49.692415   27590 main.go:141] libmachine: Using API Version  1
	I0916 10:45:49.692433   27590 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:49.692769   27590 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:49.692991   27590 main.go:141] libmachine: (ha-244475-m03) Calling .DriverName
	I0916 10:45:49.693225   27590 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:45:49.693247   27590 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHHostname
	I0916 10:45:49.695972   27590 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:45:49.696389   27590 main.go:141] libmachine: (ha-244475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:15:60", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:40:24 +0000 UTC Type:0 Mac:52:54:00:e0:15:60 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-244475-m03 Clientid:01:52:54:00:e0:15:60}
	I0916 10:45:49.696417   27590 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:45:49.696562   27590 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHPort
	I0916 10:45:49.696713   27590 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHKeyPath
	I0916 10:45:49.696865   27590 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHUsername
	I0916 10:45:49.696967   27590 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m03/id_rsa Username:docker}
	I0916 10:45:49.777419   27590 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:45:49.792965   27590 kubeconfig.go:125] found "ha-244475" server: "https://192.168.39.254:8443"
	I0916 10:45:49.792993   27590 api_server.go:166] Checking apiserver status ...
	I0916 10:45:49.793027   27590 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:45:49.808431   27590 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1388/cgroup
	W0916 10:45:49.819112   27590 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1388/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0916 10:45:49.819163   27590 ssh_runner.go:195] Run: ls
	I0916 10:45:49.823687   27590 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0916 10:45:49.827880   27590 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0916 10:45:49.827900   27590 status.go:422] ha-244475-m03 apiserver status = Running (err=<nil>)
	I0916 10:45:49.827908   27590 status.go:257] ha-244475-m03 status: &{Name:ha-244475-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 10:45:49.827923   27590 status.go:255] checking status of ha-244475-m04 ...
	I0916 10:45:49.828214   27590 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:49.828244   27590 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:49.843731   27590 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46019
	I0916 10:45:49.844156   27590 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:49.844631   27590 main.go:141] libmachine: Using API Version  1
	I0916 10:45:49.844659   27590 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:49.844952   27590 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:49.845112   27590 main.go:141] libmachine: (ha-244475-m04) Calling .GetState
	I0916 10:45:49.846531   27590 status.go:330] ha-244475-m04 host status = "Running" (err=<nil>)
	I0916 10:45:49.846547   27590 host.go:66] Checking if "ha-244475-m04" exists ...
	I0916 10:45:49.846820   27590 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:49.846852   27590 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:49.861860   27590 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33221
	I0916 10:45:49.862300   27590 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:49.862777   27590 main.go:141] libmachine: Using API Version  1
	I0916 10:45:49.862795   27590 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:49.863079   27590 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:49.863247   27590 main.go:141] libmachine: (ha-244475-m04) Calling .GetIP
	I0916 10:45:49.866080   27590 main.go:141] libmachine: (ha-244475-m04) DBG | domain ha-244475-m04 has defined MAC address 52:54:00:d1:b8:75 in network mk-ha-244475
	I0916 10:45:49.866493   27590 main.go:141] libmachine: (ha-244475-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:b8:75", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:41:48 +0000 UTC Type:0 Mac:52:54:00:d1:b8:75 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-244475-m04 Clientid:01:52:54:00:d1:b8:75}
	I0916 10:45:49.866518   27590 main.go:141] libmachine: (ha-244475-m04) DBG | domain ha-244475-m04 has defined IP address 192.168.39.110 and MAC address 52:54:00:d1:b8:75 in network mk-ha-244475
	I0916 10:45:49.866641   27590 host.go:66] Checking if "ha-244475-m04" exists ...
	I0916 10:45:49.866997   27590 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:49.867036   27590 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:49.882368   27590 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36515
	I0916 10:45:49.882884   27590 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:49.883353   27590 main.go:141] libmachine: Using API Version  1
	I0916 10:45:49.883377   27590 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:49.883708   27590 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:49.883886   27590 main.go:141] libmachine: (ha-244475-m04) Calling .DriverName
	I0916 10:45:49.884047   27590 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:45:49.884070   27590 main.go:141] libmachine: (ha-244475-m04) Calling .GetSSHHostname
	I0916 10:45:49.886987   27590 main.go:141] libmachine: (ha-244475-m04) DBG | domain ha-244475-m04 has defined MAC address 52:54:00:d1:b8:75 in network mk-ha-244475
	I0916 10:45:49.887424   27590 main.go:141] libmachine: (ha-244475-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:b8:75", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:41:48 +0000 UTC Type:0 Mac:52:54:00:d1:b8:75 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-244475-m04 Clientid:01:52:54:00:d1:b8:75}
	I0916 10:45:49.887453   27590 main.go:141] libmachine: (ha-244475-m04) DBG | domain ha-244475-m04 has defined IP address 192.168.39.110 and MAC address 52:54:00:d1:b8:75 in network mk-ha-244475
	I0916 10:45:49.887558   27590 main.go:141] libmachine: (ha-244475-m04) Calling .GetSSHPort
	I0916 10:45:49.887732   27590 main.go:141] libmachine: (ha-244475-m04) Calling .GetSSHKeyPath
	I0916 10:45:49.887895   27590 main.go:141] libmachine: (ha-244475-m04) Calling .GetSSHUsername
	I0916 10:45:49.888027   27590 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m04/id_rsa Username:docker}
	I0916 10:45:49.972721   27590 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:45:49.990043   27590 status.go:257] ha-244475-m04 status: &{Name:ha-244475-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-244475 status -v=7 --alsologtostderr: exit status 7 (627.614115ms)

                                                
                                                
-- stdout --
	ha-244475
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-244475-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-244475-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-244475-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 10:45:59.478369   27693 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:45:59.478470   27693 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:45:59.478475   27693 out.go:358] Setting ErrFile to fd 2...
	I0916 10:45:59.478479   27693 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:45:59.478651   27693 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3851/.minikube/bin
	I0916 10:45:59.478823   27693 out.go:352] Setting JSON to false
	I0916 10:45:59.478852   27693 mustload.go:65] Loading cluster: ha-244475
	I0916 10:45:59.478883   27693 notify.go:220] Checking for updates...
	I0916 10:45:59.479300   27693 config.go:182] Loaded profile config "ha-244475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:45:59.479315   27693 status.go:255] checking status of ha-244475 ...
	I0916 10:45:59.479798   27693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:59.479856   27693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:59.498215   27693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35003
	I0916 10:45:59.498658   27693 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:59.499296   27693 main.go:141] libmachine: Using API Version  1
	I0916 10:45:59.499318   27693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:59.499880   27693 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:59.500123   27693 main.go:141] libmachine: (ha-244475) Calling .GetState
	I0916 10:45:59.502133   27693 status.go:330] ha-244475 host status = "Running" (err=<nil>)
	I0916 10:45:59.502151   27693 host.go:66] Checking if "ha-244475" exists ...
	I0916 10:45:59.502598   27693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:59.502669   27693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:59.519488   27693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35721
	I0916 10:45:59.519934   27693 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:59.520402   27693 main.go:141] libmachine: Using API Version  1
	I0916 10:45:59.520425   27693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:59.520691   27693 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:59.520868   27693 main.go:141] libmachine: (ha-244475) Calling .GetIP
	I0916 10:45:59.523626   27693 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:45:59.524051   27693 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:45:59.524084   27693 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:45:59.524249   27693 host.go:66] Checking if "ha-244475" exists ...
	I0916 10:45:59.524554   27693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:59.524596   27693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:59.539613   27693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36855
	I0916 10:45:59.540079   27693 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:59.540669   27693 main.go:141] libmachine: Using API Version  1
	I0916 10:45:59.540696   27693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:59.541013   27693 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:59.541195   27693 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:45:59.541385   27693 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:45:59.541410   27693 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:45:59.544570   27693 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:45:59.545177   27693 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:45:59.545214   27693 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:45:59.545377   27693 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:45:59.545544   27693 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:45:59.545685   27693 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:45:59.545797   27693 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/id_rsa Username:docker}
	I0916 10:45:59.637411   27693 ssh_runner.go:195] Run: systemctl --version
	I0916 10:45:59.643744   27693 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:45:59.658657   27693 kubeconfig.go:125] found "ha-244475" server: "https://192.168.39.254:8443"
	I0916 10:45:59.658692   27693 api_server.go:166] Checking apiserver status ...
	I0916 10:45:59.658735   27693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:45:59.673050   27693 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1130/cgroup
	W0916 10:45:59.684089   27693 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1130/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0916 10:45:59.684159   27693 ssh_runner.go:195] Run: ls
	I0916 10:45:59.688677   27693 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0916 10:45:59.693603   27693 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0916 10:45:59.693633   27693 status.go:422] ha-244475 apiserver status = Running (err=<nil>)
	I0916 10:45:59.693653   27693 status.go:257] ha-244475 status: &{Name:ha-244475 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 10:45:59.693669   27693 status.go:255] checking status of ha-244475-m02 ...
	I0916 10:45:59.693955   27693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:59.693993   27693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:59.709984   27693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45935
	I0916 10:45:59.710425   27693 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:59.710913   27693 main.go:141] libmachine: Using API Version  1
	I0916 10:45:59.710933   27693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:59.711226   27693 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:59.711421   27693 main.go:141] libmachine: (ha-244475-m02) Calling .GetState
	I0916 10:45:59.713011   27693 status.go:330] ha-244475-m02 host status = "Stopped" (err=<nil>)
	I0916 10:45:59.713029   27693 status.go:343] host is not running, skipping remaining checks
	I0916 10:45:59.713050   27693 status.go:257] ha-244475-m02 status: &{Name:ha-244475-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 10:45:59.713073   27693 status.go:255] checking status of ha-244475-m03 ...
	I0916 10:45:59.713494   27693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:59.713545   27693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:59.728290   27693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32869
	I0916 10:45:59.728747   27693 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:59.729322   27693 main.go:141] libmachine: Using API Version  1
	I0916 10:45:59.729346   27693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:59.729704   27693 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:59.729919   27693 main.go:141] libmachine: (ha-244475-m03) Calling .GetState
	I0916 10:45:59.731561   27693 status.go:330] ha-244475-m03 host status = "Running" (err=<nil>)
	I0916 10:45:59.731576   27693 host.go:66] Checking if "ha-244475-m03" exists ...
	I0916 10:45:59.731861   27693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:59.731897   27693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:59.746783   27693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37799
	I0916 10:45:59.747270   27693 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:59.747714   27693 main.go:141] libmachine: Using API Version  1
	I0916 10:45:59.747731   27693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:59.748001   27693 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:59.748168   27693 main.go:141] libmachine: (ha-244475-m03) Calling .GetIP
	I0916 10:45:59.750997   27693 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:45:59.751434   27693 main.go:141] libmachine: (ha-244475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:15:60", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:40:24 +0000 UTC Type:0 Mac:52:54:00:e0:15:60 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-244475-m03 Clientid:01:52:54:00:e0:15:60}
	I0916 10:45:59.751462   27693 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:45:59.751586   27693 host.go:66] Checking if "ha-244475-m03" exists ...
	I0916 10:45:59.751879   27693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:59.751924   27693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:59.766593   27693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39875
	I0916 10:45:59.767074   27693 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:59.767571   27693 main.go:141] libmachine: Using API Version  1
	I0916 10:45:59.767595   27693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:59.767878   27693 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:59.768068   27693 main.go:141] libmachine: (ha-244475-m03) Calling .DriverName
	I0916 10:45:59.768203   27693 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:45:59.768221   27693 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHHostname
	I0916 10:45:59.771013   27693 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:45:59.771489   27693 main.go:141] libmachine: (ha-244475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:15:60", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:40:24 +0000 UTC Type:0 Mac:52:54:00:e0:15:60 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-244475-m03 Clientid:01:52:54:00:e0:15:60}
	I0916 10:45:59.771518   27693 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:45:59.771617   27693 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHPort
	I0916 10:45:59.771769   27693 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHKeyPath
	I0916 10:45:59.771897   27693 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHUsername
	I0916 10:45:59.772007   27693 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m03/id_rsa Username:docker}
	I0916 10:45:59.849257   27693 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:45:59.866972   27693 kubeconfig.go:125] found "ha-244475" server: "https://192.168.39.254:8443"
	I0916 10:45:59.866998   27693 api_server.go:166] Checking apiserver status ...
	I0916 10:45:59.867030   27693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:45:59.881676   27693 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1388/cgroup
	W0916 10:45:59.894505   27693 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1388/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0916 10:45:59.894563   27693 ssh_runner.go:195] Run: ls
	I0916 10:45:59.899189   27693 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0916 10:45:59.903199   27693 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0916 10:45:59.903218   27693 status.go:422] ha-244475-m03 apiserver status = Running (err=<nil>)
	I0916 10:45:59.903226   27693 status.go:257] ha-244475-m03 status: &{Name:ha-244475-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 10:45:59.903241   27693 status.go:255] checking status of ha-244475-m04 ...
	I0916 10:45:59.903510   27693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:59.903540   27693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:59.918260   27693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39141
	I0916 10:45:59.918629   27693 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:59.919105   27693 main.go:141] libmachine: Using API Version  1
	I0916 10:45:59.919124   27693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:59.919431   27693 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:59.919615   27693 main.go:141] libmachine: (ha-244475-m04) Calling .GetState
	I0916 10:45:59.921095   27693 status.go:330] ha-244475-m04 host status = "Running" (err=<nil>)
	I0916 10:45:59.921109   27693 host.go:66] Checking if "ha-244475-m04" exists ...
	I0916 10:45:59.921448   27693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:59.921488   27693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:59.935958   27693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44765
	I0916 10:45:59.936401   27693 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:59.936881   27693 main.go:141] libmachine: Using API Version  1
	I0916 10:45:59.936905   27693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:59.937193   27693 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:59.937372   27693 main.go:141] libmachine: (ha-244475-m04) Calling .GetIP
	I0916 10:45:59.939917   27693 main.go:141] libmachine: (ha-244475-m04) DBG | domain ha-244475-m04 has defined MAC address 52:54:00:d1:b8:75 in network mk-ha-244475
	I0916 10:45:59.940300   27693 main.go:141] libmachine: (ha-244475-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:b8:75", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:41:48 +0000 UTC Type:0 Mac:52:54:00:d1:b8:75 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-244475-m04 Clientid:01:52:54:00:d1:b8:75}
	I0916 10:45:59.940322   27693 main.go:141] libmachine: (ha-244475-m04) DBG | domain ha-244475-m04 has defined IP address 192.168.39.110 and MAC address 52:54:00:d1:b8:75 in network mk-ha-244475
	I0916 10:45:59.940393   27693 host.go:66] Checking if "ha-244475-m04" exists ...
	I0916 10:45:59.940662   27693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:45:59.940696   27693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:45:59.955219   27693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39795
	I0916 10:45:59.955613   27693 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:45:59.956077   27693 main.go:141] libmachine: Using API Version  1
	I0916 10:45:59.956095   27693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:45:59.956344   27693 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:45:59.956515   27693 main.go:141] libmachine: (ha-244475-m04) Calling .DriverName
	I0916 10:45:59.956690   27693 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:45:59.956723   27693 main.go:141] libmachine: (ha-244475-m04) Calling .GetSSHHostname
	I0916 10:45:59.959507   27693 main.go:141] libmachine: (ha-244475-m04) DBG | domain ha-244475-m04 has defined MAC address 52:54:00:d1:b8:75 in network mk-ha-244475
	I0916 10:45:59.959885   27693 main.go:141] libmachine: (ha-244475-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:b8:75", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:41:48 +0000 UTC Type:0 Mac:52:54:00:d1:b8:75 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-244475-m04 Clientid:01:52:54:00:d1:b8:75}
	I0916 10:45:59.959907   27693 main.go:141] libmachine: (ha-244475-m04) DBG | domain ha-244475-m04 has defined IP address 192.168.39.110 and MAC address 52:54:00:d1:b8:75 in network mk-ha-244475
	I0916 10:45:59.960031   27693 main.go:141] libmachine: (ha-244475-m04) Calling .GetSSHPort
	I0916 10:45:59.960186   27693 main.go:141] libmachine: (ha-244475-m04) Calling .GetSSHKeyPath
	I0916 10:45:59.960315   27693 main.go:141] libmachine: (ha-244475-m04) Calling .GetSSHUsername
	I0916 10:45:59.960437   27693 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m04/id_rsa Username:docker}
	I0916 10:46:00.045730   27693 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:46:00.063386   27693 status.go:257] ha-244475-m04 status: &{Name:ha-244475-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-244475 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-244475 -n ha-244475
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-244475 logs -n 25: (1.423377746s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-244475 ssh -n                                                                 | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-244475 cp ha-244475-m03:/home/docker/cp-test.txt                              | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475:/home/docker/cp-test_ha-244475-m03_ha-244475.txt                       |           |         |         |                     |                     |
	| ssh     | ha-244475 ssh -n                                                                 | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-244475 ssh -n ha-244475 sudo cat                                              | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | /home/docker/cp-test_ha-244475-m03_ha-244475.txt                                 |           |         |         |                     |                     |
	| cp      | ha-244475 cp ha-244475-m03:/home/docker/cp-test.txt                              | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475-m02:/home/docker/cp-test_ha-244475-m03_ha-244475-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-244475 ssh -n                                                                 | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-244475 ssh -n ha-244475-m02 sudo cat                                          | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | /home/docker/cp-test_ha-244475-m03_ha-244475-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-244475 cp ha-244475-m03:/home/docker/cp-test.txt                              | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475-m04:/home/docker/cp-test_ha-244475-m03_ha-244475-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-244475 ssh -n                                                                 | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-244475 ssh -n ha-244475-m04 sudo cat                                          | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | /home/docker/cp-test_ha-244475-m03_ha-244475-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-244475 cp testdata/cp-test.txt                                                | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-244475 ssh -n                                                                 | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-244475 cp ha-244475-m04:/home/docker/cp-test.txt                              | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1630339340/001/cp-test_ha-244475-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-244475 ssh -n                                                                 | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-244475 cp ha-244475-m04:/home/docker/cp-test.txt                              | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475:/home/docker/cp-test_ha-244475-m04_ha-244475.txt                       |           |         |         |                     |                     |
	| ssh     | ha-244475 ssh -n                                                                 | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-244475 ssh -n ha-244475 sudo cat                                              | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | /home/docker/cp-test_ha-244475-m04_ha-244475.txt                                 |           |         |         |                     |                     |
	| cp      | ha-244475 cp ha-244475-m04:/home/docker/cp-test.txt                              | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475-m02:/home/docker/cp-test_ha-244475-m04_ha-244475-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-244475 ssh -n                                                                 | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-244475 ssh -n ha-244475-m02 sudo cat                                          | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | /home/docker/cp-test_ha-244475-m04_ha-244475-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-244475 cp ha-244475-m04:/home/docker/cp-test.txt                              | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475-m03:/home/docker/cp-test_ha-244475-m04_ha-244475-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-244475 ssh -n                                                                 | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-244475 ssh -n ha-244475-m03 sudo cat                                          | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | /home/docker/cp-test_ha-244475-m04_ha-244475-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-244475 node stop m02 -v=7                                                     | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-244475 node start m02 -v=7                                                    | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:45 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:38:12
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:38:12.200712   22121 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:38:12.200823   22121 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:38:12.200832   22121 out.go:358] Setting ErrFile to fd 2...
	I0916 10:38:12.200836   22121 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:38:12.201073   22121 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3851/.minikube/bin
	I0916 10:38:12.201666   22121 out.go:352] Setting JSON to false
	I0916 10:38:12.202552   22121 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1242,"bootTime":1726481850,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:38:12.202649   22121 start.go:139] virtualization: kvm guest
	I0916 10:38:12.204909   22121 out.go:177] * [ha-244475] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:38:12.206153   22121 notify.go:220] Checking for updates...
	I0916 10:38:12.206162   22121 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:38:12.207508   22121 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:38:12.208635   22121 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3851/kubeconfig
	I0916 10:38:12.209868   22121 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 10:38:12.211054   22121 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:38:12.212157   22121 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:38:12.213282   22121 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:38:12.247704   22121 out.go:177] * Using the kvm2 driver based on user configuration
	I0916 10:38:12.248934   22121 start.go:297] selected driver: kvm2
	I0916 10:38:12.248946   22121 start.go:901] validating driver "kvm2" against <nil>
	I0916 10:38:12.248965   22121 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:38:12.249634   22121 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:38:12.249717   22121 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19651-3851/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0916 10:38:12.264515   22121 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0916 10:38:12.264557   22121 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 10:38:12.264783   22121 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:38:12.264813   22121 cni.go:84] Creating CNI manager for ""
	I0916 10:38:12.264852   22121 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0916 10:38:12.264862   22121 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 10:38:12.264904   22121 start.go:340] cluster config:
	{Name:ha-244475 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-244475 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:38:12.264991   22121 iso.go:125] acquiring lock: {Name:mk8165b793e44378487d96b0de120a258f46e187 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:38:12.266715   22121 out.go:177] * Starting "ha-244475" primary control-plane node in "ha-244475" cluster
	I0916 10:38:12.267811   22121 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:38:12.267865   22121 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0916 10:38:12.267877   22121 cache.go:56] Caching tarball of preloaded images
	I0916 10:38:12.267958   22121 preload.go:172] Found /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 10:38:12.267971   22121 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 10:38:12.268264   22121 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/config.json ...
	I0916 10:38:12.268287   22121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/config.json: {Name:mk850b432e3492662a38e4b0f11a836bf86e02aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:38:12.268433   22121 start.go:360] acquireMachinesLock for ha-244475: {Name:mk413037138532b69f77e412c3960796c09316f1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:38:12.268468   22121 start.go:364] duration metric: took 18.641µs to acquireMachinesLock for "ha-244475"
	I0916 10:38:12.268490   22121 start.go:93] Provisioning new machine with config: &{Name:ha-244475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-244475 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 10:38:12.268553   22121 start.go:125] createHost starting for "" (driver="kvm2")
	I0916 10:38:12.270059   22121 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0916 10:38:12.270184   22121 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:38:12.270223   22121 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:38:12.284586   22121 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36411
	I0916 10:38:12.285055   22121 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:38:12.285574   22121 main.go:141] libmachine: Using API Version  1
	I0916 10:38:12.285594   22121 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:38:12.285978   22121 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:38:12.286124   22121 main.go:141] libmachine: (ha-244475) Calling .GetMachineName
	I0916 10:38:12.286277   22121 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:38:12.286414   22121 start.go:159] libmachine.API.Create for "ha-244475" (driver="kvm2")
	I0916 10:38:12.286438   22121 client.go:168] LocalClient.Create starting
	I0916 10:38:12.286467   22121 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem
	I0916 10:38:12.286500   22121 main.go:141] libmachine: Decoding PEM data...
	I0916 10:38:12.286515   22121 main.go:141] libmachine: Parsing certificate...
	I0916 10:38:12.286575   22121 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem
	I0916 10:38:12.286594   22121 main.go:141] libmachine: Decoding PEM data...
	I0916 10:38:12.286606   22121 main.go:141] libmachine: Parsing certificate...
	I0916 10:38:12.286627   22121 main.go:141] libmachine: Running pre-create checks...
	I0916 10:38:12.286639   22121 main.go:141] libmachine: (ha-244475) Calling .PreCreateCheck
	I0916 10:38:12.286973   22121 main.go:141] libmachine: (ha-244475) Calling .GetConfigRaw
	I0916 10:38:12.287297   22121 main.go:141] libmachine: Creating machine...
	I0916 10:38:12.287309   22121 main.go:141] libmachine: (ha-244475) Calling .Create
	I0916 10:38:12.287457   22121 main.go:141] libmachine: (ha-244475) Creating KVM machine...
	I0916 10:38:12.288681   22121 main.go:141] libmachine: (ha-244475) DBG | found existing default KVM network
	I0916 10:38:12.289333   22121 main.go:141] libmachine: (ha-244475) DBG | I0916 10:38:12.289200   22144 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002091e0}
	I0916 10:38:12.289353   22121 main.go:141] libmachine: (ha-244475) DBG | created network xml: 
	I0916 10:38:12.289365   22121 main.go:141] libmachine: (ha-244475) DBG | <network>
	I0916 10:38:12.289372   22121 main.go:141] libmachine: (ha-244475) DBG |   <name>mk-ha-244475</name>
	I0916 10:38:12.289384   22121 main.go:141] libmachine: (ha-244475) DBG |   <dns enable='no'/>
	I0916 10:38:12.289392   22121 main.go:141] libmachine: (ha-244475) DBG |   
	I0916 10:38:12.289404   22121 main.go:141] libmachine: (ha-244475) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0916 10:38:12.289414   22121 main.go:141] libmachine: (ha-244475) DBG |     <dhcp>
	I0916 10:38:12.289426   22121 main.go:141] libmachine: (ha-244475) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0916 10:38:12.289440   22121 main.go:141] libmachine: (ha-244475) DBG |     </dhcp>
	I0916 10:38:12.289470   22121 main.go:141] libmachine: (ha-244475) DBG |   </ip>
	I0916 10:38:12.289491   22121 main.go:141] libmachine: (ha-244475) DBG |   
	I0916 10:38:12.289503   22121 main.go:141] libmachine: (ha-244475) DBG | </network>
	I0916 10:38:12.289512   22121 main.go:141] libmachine: (ha-244475) DBG | 
	I0916 10:38:12.294272   22121 main.go:141] libmachine: (ha-244475) DBG | trying to create private KVM network mk-ha-244475 192.168.39.0/24...
	I0916 10:38:12.356537   22121 main.go:141] libmachine: (ha-244475) Setting up store path in /home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475 ...
	I0916 10:38:12.356564   22121 main.go:141] libmachine: (ha-244475) Building disk image from file:///home/jenkins/minikube-integration/19651-3851/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso
	I0916 10:38:12.356583   22121 main.go:141] libmachine: (ha-244475) DBG | private KVM network mk-ha-244475 192.168.39.0/24 created
	I0916 10:38:12.356612   22121 main.go:141] libmachine: (ha-244475) DBG | I0916 10:38:12.356478   22144 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 10:38:12.356634   22121 main.go:141] libmachine: (ha-244475) Downloading /home/jenkins/minikube-integration/19651-3851/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19651-3851/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso...
	I0916 10:38:12.603819   22121 main.go:141] libmachine: (ha-244475) DBG | I0916 10:38:12.603693   22144 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/id_rsa...
	I0916 10:38:12.714132   22121 main.go:141] libmachine: (ha-244475) DBG | I0916 10:38:12.713994   22144 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/ha-244475.rawdisk...
	I0916 10:38:12.714162   22121 main.go:141] libmachine: (ha-244475) DBG | Writing magic tar header
	I0916 10:38:12.714174   22121 main.go:141] libmachine: (ha-244475) DBG | Writing SSH key tar header
	I0916 10:38:12.714185   22121 main.go:141] libmachine: (ha-244475) DBG | I0916 10:38:12.714123   22144 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475 ...
	I0916 10:38:12.714208   22121 main.go:141] libmachine: (ha-244475) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475
	I0916 10:38:12.714276   22121 main.go:141] libmachine: (ha-244475) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475 (perms=drwx------)
	I0916 10:38:12.714299   22121 main.go:141] libmachine: (ha-244475) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851/.minikube/machines (perms=drwxr-xr-x)
	I0916 10:38:12.714310   22121 main.go:141] libmachine: (ha-244475) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851/.minikube/machines
	I0916 10:38:12.714346   22121 main.go:141] libmachine: (ha-244475) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851/.minikube (perms=drwxr-xr-x)
	I0916 10:38:12.714364   22121 main.go:141] libmachine: (ha-244475) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 10:38:12.714379   22121 main.go:141] libmachine: (ha-244475) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851 (perms=drwxrwxr-x)
	I0916 10:38:12.714393   22121 main.go:141] libmachine: (ha-244475) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0916 10:38:12.714412   22121 main.go:141] libmachine: (ha-244475) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0916 10:38:12.714424   22121 main.go:141] libmachine: (ha-244475) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851
	I0916 10:38:12.714456   22121 main.go:141] libmachine: (ha-244475) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0916 10:38:12.714472   22121 main.go:141] libmachine: (ha-244475) DBG | Checking permissions on dir: /home/jenkins
	I0916 10:38:12.714480   22121 main.go:141] libmachine: (ha-244475) Creating domain...
	I0916 10:38:12.714493   22121 main.go:141] libmachine: (ha-244475) DBG | Checking permissions on dir: /home
	I0916 10:38:12.714503   22121 main.go:141] libmachine: (ha-244475) DBG | Skipping /home - not owner
	I0916 10:38:12.715516   22121 main.go:141] libmachine: (ha-244475) define libvirt domain using xml: 
	I0916 10:38:12.715535   22121 main.go:141] libmachine: (ha-244475) <domain type='kvm'>
	I0916 10:38:12.715541   22121 main.go:141] libmachine: (ha-244475)   <name>ha-244475</name>
	I0916 10:38:12.715549   22121 main.go:141] libmachine: (ha-244475)   <memory unit='MiB'>2200</memory>
	I0916 10:38:12.715560   22121 main.go:141] libmachine: (ha-244475)   <vcpu>2</vcpu>
	I0916 10:38:12.715567   22121 main.go:141] libmachine: (ha-244475)   <features>
	I0916 10:38:12.715594   22121 main.go:141] libmachine: (ha-244475)     <acpi/>
	I0916 10:38:12.715613   22121 main.go:141] libmachine: (ha-244475)     <apic/>
	I0916 10:38:12.715643   22121 main.go:141] libmachine: (ha-244475)     <pae/>
	I0916 10:38:12.715667   22121 main.go:141] libmachine: (ha-244475)     
	I0916 10:38:12.715677   22121 main.go:141] libmachine: (ha-244475)   </features>
	I0916 10:38:12.715691   22121 main.go:141] libmachine: (ha-244475)   <cpu mode='host-passthrough'>
	I0916 10:38:12.715701   22121 main.go:141] libmachine: (ha-244475)   
	I0916 10:38:12.715709   22121 main.go:141] libmachine: (ha-244475)   </cpu>
	I0916 10:38:12.715717   22121 main.go:141] libmachine: (ha-244475)   <os>
	I0916 10:38:12.715726   22121 main.go:141] libmachine: (ha-244475)     <type>hvm</type>
	I0916 10:38:12.715737   22121 main.go:141] libmachine: (ha-244475)     <boot dev='cdrom'/>
	I0916 10:38:12.715746   22121 main.go:141] libmachine: (ha-244475)     <boot dev='hd'/>
	I0916 10:38:12.715758   22121 main.go:141] libmachine: (ha-244475)     <bootmenu enable='no'/>
	I0916 10:38:12.715788   22121 main.go:141] libmachine: (ha-244475)   </os>
	I0916 10:38:12.715799   22121 main.go:141] libmachine: (ha-244475)   <devices>
	I0916 10:38:12.715810   22121 main.go:141] libmachine: (ha-244475)     <disk type='file' device='cdrom'>
	I0916 10:38:12.715840   22121 main.go:141] libmachine: (ha-244475)       <source file='/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/boot2docker.iso'/>
	I0916 10:38:12.715852   22121 main.go:141] libmachine: (ha-244475)       <target dev='hdc' bus='scsi'/>
	I0916 10:38:12.715861   22121 main.go:141] libmachine: (ha-244475)       <readonly/>
	I0916 10:38:12.715870   22121 main.go:141] libmachine: (ha-244475)     </disk>
	I0916 10:38:12.715875   22121 main.go:141] libmachine: (ha-244475)     <disk type='file' device='disk'>
	I0916 10:38:12.715881   22121 main.go:141] libmachine: (ha-244475)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0916 10:38:12.715891   22121 main.go:141] libmachine: (ha-244475)       <source file='/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/ha-244475.rawdisk'/>
	I0916 10:38:12.715896   22121 main.go:141] libmachine: (ha-244475)       <target dev='hda' bus='virtio'/>
	I0916 10:38:12.715903   22121 main.go:141] libmachine: (ha-244475)     </disk>
	I0916 10:38:12.715907   22121 main.go:141] libmachine: (ha-244475)     <interface type='network'>
	I0916 10:38:12.715914   22121 main.go:141] libmachine: (ha-244475)       <source network='mk-ha-244475'/>
	I0916 10:38:12.715918   22121 main.go:141] libmachine: (ha-244475)       <model type='virtio'/>
	I0916 10:38:12.715925   22121 main.go:141] libmachine: (ha-244475)     </interface>
	I0916 10:38:12.715929   22121 main.go:141] libmachine: (ha-244475)     <interface type='network'>
	I0916 10:38:12.715936   22121 main.go:141] libmachine: (ha-244475)       <source network='default'/>
	I0916 10:38:12.715941   22121 main.go:141] libmachine: (ha-244475)       <model type='virtio'/>
	I0916 10:38:12.715946   22121 main.go:141] libmachine: (ha-244475)     </interface>
	I0916 10:38:12.715950   22121 main.go:141] libmachine: (ha-244475)     <serial type='pty'>
	I0916 10:38:12.715966   22121 main.go:141] libmachine: (ha-244475)       <target port='0'/>
	I0916 10:38:12.715977   22121 main.go:141] libmachine: (ha-244475)     </serial>
	I0916 10:38:12.715987   22121 main.go:141] libmachine: (ha-244475)     <console type='pty'>
	I0916 10:38:12.715998   22121 main.go:141] libmachine: (ha-244475)       <target type='serial' port='0'/>
	I0916 10:38:12.716016   22121 main.go:141] libmachine: (ha-244475)     </console>
	I0916 10:38:12.716026   22121 main.go:141] libmachine: (ha-244475)     <rng model='virtio'>
	I0916 10:38:12.716036   22121 main.go:141] libmachine: (ha-244475)       <backend model='random'>/dev/random</backend>
	I0916 10:38:12.716045   22121 main.go:141] libmachine: (ha-244475)     </rng>
	I0916 10:38:12.716065   22121 main.go:141] libmachine: (ha-244475)     
	I0916 10:38:12.716082   22121 main.go:141] libmachine: (ha-244475)     
	I0916 10:38:12.716090   22121 main.go:141] libmachine: (ha-244475)   </devices>
	I0916 10:38:12.716101   22121 main.go:141] libmachine: (ha-244475) </domain>
	I0916 10:38:12.716111   22121 main.go:141] libmachine: (ha-244475) 
	I0916 10:38:12.720528   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:4e:1b:22 in network default
	I0916 10:38:12.721005   22121 main.go:141] libmachine: (ha-244475) Ensuring networks are active...
	I0916 10:38:12.721018   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:12.721698   22121 main.go:141] libmachine: (ha-244475) Ensuring network default is active
	I0916 10:38:12.722026   22121 main.go:141] libmachine: (ha-244475) Ensuring network mk-ha-244475 is active
	I0916 10:38:12.722616   22121 main.go:141] libmachine: (ha-244475) Getting domain xml...
	I0916 10:38:12.723368   22121 main.go:141] libmachine: (ha-244475) Creating domain...
	I0916 10:38:13.892889   22121 main.go:141] libmachine: (ha-244475) Waiting to get IP...
	I0916 10:38:13.893726   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:13.894130   22121 main.go:141] libmachine: (ha-244475) DBG | unable to find current IP address of domain ha-244475 in network mk-ha-244475
	I0916 10:38:13.894170   22121 main.go:141] libmachine: (ha-244475) DBG | I0916 10:38:13.894127   22144 retry.go:31] will retry after 194.671276ms: waiting for machine to come up
	I0916 10:38:14.090477   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:14.090800   22121 main.go:141] libmachine: (ha-244475) DBG | unable to find current IP address of domain ha-244475 in network mk-ha-244475
	I0916 10:38:14.090825   22121 main.go:141] libmachine: (ha-244475) DBG | I0916 10:38:14.090753   22144 retry.go:31] will retry after 351.659131ms: waiting for machine to come up
	I0916 10:38:14.444409   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:14.444864   22121 main.go:141] libmachine: (ha-244475) DBG | unable to find current IP address of domain ha-244475 in network mk-ha-244475
	I0916 10:38:14.444896   22121 main.go:141] libmachine: (ha-244475) DBG | I0916 10:38:14.444830   22144 retry.go:31] will retry after 382.219059ms: waiting for machine to come up
	I0916 10:38:14.828362   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:14.828800   22121 main.go:141] libmachine: (ha-244475) DBG | unable to find current IP address of domain ha-244475 in network mk-ha-244475
	I0916 10:38:14.828826   22121 main.go:141] libmachine: (ha-244475) DBG | I0916 10:38:14.828748   22144 retry.go:31] will retry after 385.017595ms: waiting for machine to come up
	I0916 10:38:15.215350   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:15.215732   22121 main.go:141] libmachine: (ha-244475) DBG | unable to find current IP address of domain ha-244475 in network mk-ha-244475
	I0916 10:38:15.215758   22121 main.go:141] libmachine: (ha-244475) DBG | I0916 10:38:15.215688   22144 retry.go:31] will retry after 603.255872ms: waiting for machine to come up
	I0916 10:38:15.820323   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:15.820668   22121 main.go:141] libmachine: (ha-244475) DBG | unable to find current IP address of domain ha-244475 in network mk-ha-244475
	I0916 10:38:15.820694   22121 main.go:141] libmachine: (ha-244475) DBG | I0916 10:38:15.820630   22144 retry.go:31] will retry after 768.911433ms: waiting for machine to come up
	I0916 10:38:16.591945   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:16.592337   22121 main.go:141] libmachine: (ha-244475) DBG | unable to find current IP address of domain ha-244475 in network mk-ha-244475
	I0916 10:38:16.592361   22121 main.go:141] libmachine: (ha-244475) DBG | I0916 10:38:16.592300   22144 retry.go:31] will retry after 1.01448771s: waiting for machine to come up
	I0916 10:38:17.607844   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:17.608259   22121 main.go:141] libmachine: (ha-244475) DBG | unable to find current IP address of domain ha-244475 in network mk-ha-244475
	I0916 10:38:17.608281   22121 main.go:141] libmachine: (ha-244475) DBG | I0916 10:38:17.608225   22144 retry.go:31] will retry after 1.028283296s: waiting for machine to come up
	I0916 10:38:18.638495   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:18.638879   22121 main.go:141] libmachine: (ha-244475) DBG | unable to find current IP address of domain ha-244475 in network mk-ha-244475
	I0916 10:38:18.638909   22121 main.go:141] libmachine: (ha-244475) DBG | I0916 10:38:18.638842   22144 retry.go:31] will retry after 1.806716733s: waiting for machine to come up
	I0916 10:38:20.447563   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:20.447961   22121 main.go:141] libmachine: (ha-244475) DBG | unable to find current IP address of domain ha-244475 in network mk-ha-244475
	I0916 10:38:20.447980   22121 main.go:141] libmachine: (ha-244475) DBG | I0916 10:38:20.447880   22144 retry.go:31] will retry after 2.186647075s: waiting for machine to come up
	I0916 10:38:22.636294   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:22.636702   22121 main.go:141] libmachine: (ha-244475) DBG | unable to find current IP address of domain ha-244475 in network mk-ha-244475
	I0916 10:38:22.636728   22121 main.go:141] libmachine: (ha-244475) DBG | I0916 10:38:22.636657   22144 retry.go:31] will retry after 2.089501385s: waiting for machine to come up
	I0916 10:38:24.728099   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:24.728486   22121 main.go:141] libmachine: (ha-244475) DBG | unable to find current IP address of domain ha-244475 in network mk-ha-244475
	I0916 10:38:24.728515   22121 main.go:141] libmachine: (ha-244475) DBG | I0916 10:38:24.728423   22144 retry.go:31] will retry after 2.189050091s: waiting for machine to come up
	I0916 10:38:26.918420   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:26.918845   22121 main.go:141] libmachine: (ha-244475) DBG | unable to find current IP address of domain ha-244475 in network mk-ha-244475
	I0916 10:38:26.918870   22121 main.go:141] libmachine: (ha-244475) DBG | I0916 10:38:26.918800   22144 retry.go:31] will retry after 2.857721999s: waiting for machine to come up
	I0916 10:38:29.779219   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:29.779636   22121 main.go:141] libmachine: (ha-244475) DBG | unable to find current IP address of domain ha-244475 in network mk-ha-244475
	I0916 10:38:29.779664   22121 main.go:141] libmachine: (ha-244475) DBG | I0916 10:38:29.779599   22144 retry.go:31] will retry after 5.359183826s: waiting for machine to come up
	I0916 10:38:35.141883   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:35.142271   22121 main.go:141] libmachine: (ha-244475) Found IP for machine: 192.168.39.19
	I0916 10:38:35.142292   22121 main.go:141] libmachine: (ha-244475) Reserving static IP address...
	I0916 10:38:35.142311   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has current primary IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:35.142733   22121 main.go:141] libmachine: (ha-244475) DBG | unable to find host DHCP lease matching {name: "ha-244475", mac: "52:54:00:31:d1:43", ip: "192.168.39.19"} in network mk-ha-244475
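The "Waiting to get IP" lines above show the driver polling the DHCP leases with a growing, jittered delay between attempts (from ~195ms up to several seconds) until the lease appears. A simplified, self-contained sketch of that retry pattern, not minikube's actual retry.go implementation:

// Simplified sketch of the wait-for-IP retry pattern seen in the log above:
// poll a condition and back off with a growing, jittered delay between attempts.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry calls fn until it succeeds or the deadline passes, sleeping a
// randomized, growing interval between attempts.
func retry(fn func() error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	base := 200 * time.Millisecond
	for attempt := 1; ; attempt++ {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %d attempts: %w", attempt, err)
		}
		sleep := time.Duration(float64(base) * (0.5 + rand.Float64())) // jitter
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		if base < 5*time.Second {
			base *= 2 // grow the wait, roughly like the delays logged above
		}
	}
}

func main() {
	start := time.Now()
	err := retry(func() error {
		if time.Since(start) < 2*time.Second {
			return errors.New("unable to find current IP address")
		}
		return nil
	}, 30*time.Second)
	fmt.Println("result:", err)
}
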
	I0916 10:38:35.214446   22121 main.go:141] libmachine: (ha-244475) DBG | Getting to WaitForSSH function...
	I0916 10:38:35.214471   22121 main.go:141] libmachine: (ha-244475) Reserved static IP address: 192.168.39.19
	I0916 10:38:35.214482   22121 main.go:141] libmachine: (ha-244475) Waiting for SSH to be available...
	I0916 10:38:35.216924   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:35.217367   22121 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:minikube Clientid:01:52:54:00:31:d1:43}
	I0916 10:38:35.217394   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:35.217529   22121 main.go:141] libmachine: (ha-244475) DBG | Using SSH client type: external
	I0916 10:38:35.217557   22121 main.go:141] libmachine: (ha-244475) DBG | Using SSH private key: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/id_rsa (-rw-------)
	I0916 10:38:35.217585   22121 main.go:141] libmachine: (ha-244475) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.19 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0916 10:38:35.217594   22121 main.go:141] libmachine: (ha-244475) DBG | About to run SSH command:
	I0916 10:38:35.217608   22121 main.go:141] libmachine: (ha-244475) DBG | exit 0
	I0916 10:38:35.349373   22121 main.go:141] libmachine: (ha-244475) DBG | SSH cmd err, output: <nil>: 
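WaitForSSH above shells out to the system ssh client with the options logged at 10:38:35.217585 and runs `exit 0` until the guest answers. A hedged sketch of that probe using os/exec; the key path and address here are placeholders taken from the log, not the exact driver code:

// Illustrative only: probe a freshly booted guest over SSH with an external
// ssh client, mirroring the options logged above.
package main

import (
	"log"
	"os/exec"
)

func main() {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", "/home/jenkins/.minikube/machines/ha-244475/id_rsa", // placeholder key path
		"-p", "22",
		"docker@192.168.39.19",
		"exit 0",
	}
	out, err := exec.Command("ssh", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("ssh not ready: %v (%s)", err, out)
	}
	log.Println("SSH is available")
}
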
	I0916 10:38:35.349683   22121 main.go:141] libmachine: (ha-244475) KVM machine creation complete!
	I0916 10:38:35.349969   22121 main.go:141] libmachine: (ha-244475) Calling .GetConfigRaw
	I0916 10:38:35.350496   22121 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:38:35.350688   22121 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:38:35.350823   22121 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0916 10:38:35.350834   22121 main.go:141] libmachine: (ha-244475) Calling .GetState
	I0916 10:38:35.351935   22121 main.go:141] libmachine: Detecting operating system of created instance...
	I0916 10:38:35.351949   22121 main.go:141] libmachine: Waiting for SSH to be available...
	I0916 10:38:35.351954   22121 main.go:141] libmachine: Getting to WaitForSSH function...
	I0916 10:38:35.351959   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:38:35.353913   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:35.354208   22121 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:38:35.354235   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:35.354319   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:38:35.354463   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:38:35.354605   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:38:35.354695   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:38:35.354841   22121 main.go:141] libmachine: Using SSH client type: native
	I0916 10:38:35.355041   22121 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0916 10:38:35.355053   22121 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0916 10:38:35.464485   22121 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 10:38:35.464507   22121 main.go:141] libmachine: Detecting the provisioner...
	I0916 10:38:35.464514   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:38:35.467101   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:35.467423   22121 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:38:35.467458   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:35.467566   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:38:35.467765   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:38:35.467917   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:38:35.468144   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:38:35.468285   22121 main.go:141] libmachine: Using SSH client type: native
	I0916 10:38:35.468476   22121 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0916 10:38:35.468489   22121 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0916 10:38:35.582051   22121 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0916 10:38:35.582131   22121 main.go:141] libmachine: found compatible host: buildroot
	I0916 10:38:35.582143   22121 main.go:141] libmachine: Provisioning with buildroot...
	I0916 10:38:35.582154   22121 main.go:141] libmachine: (ha-244475) Calling .GetMachineName
	I0916 10:38:35.582407   22121 buildroot.go:166] provisioning hostname "ha-244475"
	I0916 10:38:35.582432   22121 main.go:141] libmachine: (ha-244475) Calling .GetMachineName
	I0916 10:38:35.582675   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:38:35.585276   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:35.585633   22121 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:38:35.585660   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:35.585766   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:38:35.585943   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:38:35.586081   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:38:35.586209   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:38:35.586353   22121 main.go:141] libmachine: Using SSH client type: native
	I0916 10:38:35.586554   22121 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0916 10:38:35.586566   22121 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-244475 && echo "ha-244475" | sudo tee /etc/hostname
	I0916 10:38:35.712268   22121 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-244475
	
	I0916 10:38:35.712302   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:38:35.715043   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:35.715376   22121 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:38:35.715404   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:35.715689   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:38:35.715894   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:38:35.716072   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:38:35.716203   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:38:35.716355   22121 main.go:141] libmachine: Using SSH client type: native
	I0916 10:38:35.716526   22121 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0916 10:38:35.716543   22121 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-244475' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-244475/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-244475' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:38:35.838701   22121 main.go:141] libmachine: SSH cmd err, output: <nil>: 
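The shell snippet above makes the new hostname resolve locally: if no /etc/hosts line already ends in ha-244475, it either rewrites the existing 127.0.1.1 entry or appends one. The same logic in plain Go, purely to illustrate what the remote command does:

// Sketch of the /etc/hosts edit performed over SSH above.
package main

import (
	"fmt"
	"strings"
)

func ensureHostsEntry(hosts, name string) string {
	lines := strings.Split(hosts, "\n")
	for _, l := range lines {
		f := strings.Fields(l)
		if len(f) > 1 && f[len(f)-1] == name {
			return hosts // hostname already mapped
		}
	}
	for i, l := range lines {
		if strings.HasPrefix(strings.TrimSpace(l), "127.0.1.1") {
			lines[i] = "127.0.1.1 " + name // replace the existing loopback alias
			return strings.Join(lines, "\n")
		}
	}
	return hosts + "\n127.0.1.1 " + name + "\n" // append if nothing matched
}

func main() {
	fmt.Println(ensureHostsEntry("127.0.0.1 localhost\n127.0.1.1 minikube", "ha-244475"))
}
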
	I0916 10:38:35.838734   22121 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3851/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3851/.minikube}
	I0916 10:38:35.838786   22121 buildroot.go:174] setting up certificates
	I0916 10:38:35.838795   22121 provision.go:84] configureAuth start
	I0916 10:38:35.838807   22121 main.go:141] libmachine: (ha-244475) Calling .GetMachineName
	I0916 10:38:35.839053   22121 main.go:141] libmachine: (ha-244475) Calling .GetIP
	I0916 10:38:35.842260   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:35.842666   22121 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:38:35.842713   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:35.842874   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:38:35.845198   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:35.845480   22121 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:38:35.845503   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:35.845681   22121 provision.go:143] copyHostCerts
	I0916 10:38:35.845727   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem
	I0916 10:38:35.845766   22121 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem, removing ...
	I0916 10:38:35.845777   22121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem
	I0916 10:38:35.845857   22121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem (1679 bytes)
	I0916 10:38:35.845945   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem
	I0916 10:38:35.845971   22121 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem, removing ...
	I0916 10:38:35.845975   22121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem
	I0916 10:38:35.846004   22121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem (1082 bytes)
	I0916 10:38:35.846056   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem
	I0916 10:38:35.846073   22121 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem, removing ...
	I0916 10:38:35.846079   22121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem
	I0916 10:38:35.846099   22121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem (1123 bytes)
	I0916 10:38:35.846153   22121 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem org=jenkins.ha-244475 san=[127.0.0.1 192.168.39.19 ha-244475 localhost minikube]
	I0916 10:38:35.972514   22121 provision.go:177] copyRemoteCerts
	I0916 10:38:35.972572   22121 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:38:35.972592   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:38:35.975467   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:35.975802   22121 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:38:35.975829   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:35.976035   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:38:35.976192   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:38:35.976307   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:38:35.976395   22121 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/id_rsa Username:docker}
	I0916 10:38:36.064079   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 10:38:36.064162   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 10:38:36.088374   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 10:38:36.088445   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0916 10:38:36.112864   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 10:38:36.112943   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 10:38:36.137799   22121 provision.go:87] duration metric: took 298.990788ms to configureAuth
	I0916 10:38:36.137824   22121 buildroot.go:189] setting minikube options for container-runtime
	I0916 10:38:36.137990   22121 config.go:182] Loaded profile config "ha-244475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:38:36.138068   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:38:36.140775   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:36.141141   22121 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:38:36.141167   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:36.141370   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:38:36.141557   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:38:36.141711   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:38:36.141862   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:38:36.142012   22121 main.go:141] libmachine: Using SSH client type: native
	I0916 10:38:36.142173   22121 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0916 10:38:36.142190   22121 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 10:38:36.366260   22121 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 10:38:36.366288   22121 main.go:141] libmachine: Checking connection to Docker...
	I0916 10:38:36.366297   22121 main.go:141] libmachine: (ha-244475) Calling .GetURL
	I0916 10:38:36.367546   22121 main.go:141] libmachine: (ha-244475) DBG | Using libvirt version 6000000
	I0916 10:38:36.369543   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:36.369862   22121 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:38:36.369884   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:36.370034   22121 main.go:141] libmachine: Docker is up and running!
	I0916 10:38:36.370047   22121 main.go:141] libmachine: Reticulating splines...
	I0916 10:38:36.370054   22121 client.go:171] duration metric: took 24.083609722s to LocalClient.Create
	I0916 10:38:36.370077   22121 start.go:167] duration metric: took 24.083661787s to libmachine.API.Create "ha-244475"
	I0916 10:38:36.370089   22121 start.go:293] postStartSetup for "ha-244475" (driver="kvm2")
	I0916 10:38:36.370118   22121 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:38:36.370140   22121 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:38:36.370345   22121 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:38:36.370363   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:38:36.372350   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:36.372637   22121 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:38:36.372658   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:36.372800   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:38:36.372958   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:38:36.373108   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:38:36.373239   22121 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/id_rsa Username:docker}
	I0916 10:38:36.459818   22121 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:38:36.464279   22121 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 10:38:36.464304   22121 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3851/.minikube/addons for local assets ...
	I0916 10:38:36.464360   22121 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3851/.minikube/files for local assets ...
	I0916 10:38:36.464428   22121 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem -> 112032.pem in /etc/ssl/certs
	I0916 10:38:36.464436   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem -> /etc/ssl/certs/112032.pem
	I0916 10:38:36.464531   22121 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 10:38:36.474459   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem --> /etc/ssl/certs/112032.pem (1708 bytes)
	I0916 10:38:36.498853   22121 start.go:296] duration metric: took 128.751453ms for postStartSetup
	I0916 10:38:36.498905   22121 main.go:141] libmachine: (ha-244475) Calling .GetConfigRaw
	I0916 10:38:36.499551   22121 main.go:141] libmachine: (ha-244475) Calling .GetIP
	I0916 10:38:36.502104   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:36.502435   22121 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:38:36.502456   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:36.502764   22121 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/config.json ...
	I0916 10:38:36.502952   22121 start.go:128] duration metric: took 24.234389874s to createHost
	I0916 10:38:36.502971   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:38:36.505214   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:36.505496   22121 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:38:36.505513   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:36.505660   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:38:36.505815   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:38:36.505951   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:38:36.506052   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:38:36.506165   22121 main.go:141] libmachine: Using SSH client type: native
	I0916 10:38:36.506383   22121 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0916 10:38:36.506406   22121 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 10:38:36.618115   22121 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726483116.595653625
	
	I0916 10:38:36.618143   22121 fix.go:216] guest clock: 1726483116.595653625
	I0916 10:38:36.618151   22121 fix.go:229] Guest: 2024-09-16 10:38:36.595653625 +0000 UTC Remote: 2024-09-16 10:38:36.502962795 +0000 UTC m=+24.335728547 (delta=92.69083ms)
	I0916 10:38:36.618190   22121 fix.go:200] guest clock delta is within tolerance: 92.69083ms
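The guest-clock check above parses the guest's `date +%s.%N` output, compares it with the host time recorded just before, and only considers adjusting the clock when the delta exceeds a tolerance. A small worked example using the two timestamps from the log, reproducing the ~92.69ms delta (the tolerance value here is an assumption for illustration):

// Worked example of the guest clock delta computed above.
package main

import (
	"fmt"
	"time"
)

func main() {
	const tolerance = 2 * time.Second        // assumed tolerance, for illustration
	guest := 1726483116.595653625            // guest `date +%s.%N` output from the log
	host := 1726483116.502962795             // host wall clock captured around the same time
	delta := time.Duration((guest - host) * float64(time.Second))
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock delta: %v, within tolerance: %v\n", delta, delta <= tolerance)
}
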
	I0916 10:38:36.618197   22121 start.go:83] releasing machines lock for "ha-244475", held for 24.349718291s
	I0916 10:38:36.618226   22121 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:38:36.618490   22121 main.go:141] libmachine: (ha-244475) Calling .GetIP
	I0916 10:38:36.621177   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:36.621552   22121 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:38:36.621576   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:36.621715   22121 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:38:36.622182   22121 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:38:36.622349   22121 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:38:36.622457   22121 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:38:36.622504   22121 ssh_runner.go:195] Run: cat /version.json
	I0916 10:38:36.622532   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:38:36.622507   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:38:36.625311   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:36.625336   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:36.625701   22121 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:38:36.625729   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:36.625752   22121 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:38:36.625773   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:36.625849   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:38:36.625996   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:38:36.626070   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:38:36.626190   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:38:36.626226   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:38:36.626304   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:38:36.626347   22121 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/id_rsa Username:docker}
	I0916 10:38:36.626412   22121 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/id_rsa Username:docker}
	I0916 10:38:36.731813   22121 ssh_runner.go:195] Run: systemctl --version
	I0916 10:38:36.738034   22121 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 10:38:36.897823   22121 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0916 10:38:36.903947   22121 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 10:38:36.904037   22121 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:38:36.920981   22121 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0916 10:38:36.921002   22121 start.go:495] detecting cgroup driver to use...
	I0916 10:38:36.921062   22121 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 10:38:36.936473   22121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 10:38:36.950885   22121 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:38:36.950937   22121 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:38:36.965062   22121 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:38:36.979049   22121 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:38:37.089419   22121 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:38:37.234470   22121 docker.go:233] disabling docker service ...
	I0916 10:38:37.234570   22121 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:38:37.249643   22121 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:38:37.263395   22121 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:38:37.396923   22121 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:38:37.530822   22121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:38:37.545513   22121 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:38:37.564576   22121 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 10:38:37.564639   22121 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:38:37.575771   22121 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 10:38:37.575830   22121 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:38:37.586212   22121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:38:37.597160   22121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:38:37.607962   22121 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:38:37.619040   22121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:38:37.630000   22121 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:38:37.647480   22121 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:38:37.658746   22121 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:38:37.668801   22121 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0916 10:38:37.668864   22121 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0916 10:38:37.683050   22121 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:38:37.693269   22121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:38:37.804210   22121 ssh_runner.go:195] Run: sudo systemctl restart crio
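The sed invocations above edit /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) before CRI-O is restarted. The replace-or-append edit they perform, sketched in plain Go purely for illustration and not as the actual minikube code path:

// Illustrative only: rewrite `key = ...` if present, otherwise append it,
// mirroring the sed edits applied to 02-crio.conf above.
package main

import (
	"fmt"
	"regexp"
)

func setConfValue(conf, key, value string) string {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	line := fmt.Sprintf("%s = %q", key, value)
	if re.MatchString(conf) {
		return re.ReplaceAllString(conf, line)
	}
	return conf + "\n" + line + "\n"
}

func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"
[crio.runtime]
cgroup_manager = "systemd"`
	conf = setConfValue(conf, "pause_image", "registry.k8s.io/pause:3.10")
	conf = setConfValue(conf, "cgroup_manager", "cgroupfs")
	fmt.Println(conf)
}
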
	I0916 10:38:37.895246   22121 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 10:38:37.895322   22121 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 10:38:37.900048   22121 start.go:563] Will wait 60s for crictl version
	I0916 10:38:37.900102   22121 ssh_runner.go:195] Run: which crictl
	I0916 10:38:37.903675   22121 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:38:37.941447   22121 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0916 10:38:37.941534   22121 ssh_runner.go:195] Run: crio --version
	I0916 10:38:37.969936   22121 ssh_runner.go:195] Run: crio --version
	I0916 10:38:38.002089   22121 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0916 10:38:38.003428   22121 main.go:141] libmachine: (ha-244475) Calling .GetIP
	I0916 10:38:38.006180   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:38.006490   22121 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:38:38.006513   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:38.006728   22121 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0916 10:38:38.011175   22121 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:38:38.024444   22121 kubeadm.go:883] updating cluster {Name:ha-244475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-244475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 10:38:38.024541   22121 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:38:38.024583   22121 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:38:38.057652   22121 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0916 10:38:38.057726   22121 ssh_runner.go:195] Run: which lz4
	I0916 10:38:38.061778   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0916 10:38:38.061885   22121 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0916 10:38:38.066142   22121 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0916 10:38:38.066169   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0916 10:38:39.414979   22121 crio.go:462] duration metric: took 1.353135329s to copy over tarball
	I0916 10:38:39.415060   22121 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0916 10:38:41.361544   22121 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.94645378s)
	I0916 10:38:41.361572   22121 crio.go:469] duration metric: took 1.946564398s to extract the tarball
	I0916 10:38:41.361580   22121 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0916 10:38:41.398599   22121 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:38:41.443342   22121 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 10:38:41.443365   22121 cache_images.go:84] Images are preloaded, skipping loading
	I0916 10:38:41.443372   22121 kubeadm.go:934] updating node { 192.168.39.19 8443 v1.31.1 crio true true} ...
	I0916 10:38:41.443503   22121 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-244475 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.19
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-244475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 10:38:41.443571   22121 ssh_runner.go:195] Run: crio config
	I0916 10:38:41.489336   22121 cni.go:84] Creating CNI manager for ""
	I0916 10:38:41.489363   22121 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0916 10:38:41.489374   22121 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 10:38:41.489401   22121 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.19 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-244475 NodeName:ha-244475 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.19"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.19 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 10:38:41.489526   22121 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.19
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-244475"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.19
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.19"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 10:38:41.489548   22121 kube-vip.go:115] generating kube-vip config ...
	I0916 10:38:41.489586   22121 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0916 10:38:41.505696   22121 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0916 10:38:41.505807   22121 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0916 10:38:41.505873   22121 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:38:41.516304   22121 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:38:41.516364   22121 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0916 10:38:41.525992   22121 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0916 10:38:41.542448   22121 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:38:41.558743   22121 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0916 10:38:41.575779   22121 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0916 10:38:41.592567   22121 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0916 10:38:41.596480   22121 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:38:41.608839   22121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:38:41.718297   22121 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:38:41.736212   22121 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475 for IP: 192.168.39.19
	I0916 10:38:41.736238   22121 certs.go:194] generating shared ca certs ...
	I0916 10:38:41.736259   22121 certs.go:226] acquiring lock for ca certs: {Name:mkc5eb18b90c7d501da61b378999fd65b785fd5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:38:41.736446   22121 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key
	I0916 10:38:41.736500   22121 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key
	I0916 10:38:41.736517   22121 certs.go:256] generating profile certs ...
	I0916 10:38:41.736581   22121 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/client.key
	I0916 10:38:41.736604   22121 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/client.crt with IP's: []
	I0916 10:38:41.887766   22121 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/client.crt ...
	I0916 10:38:41.887792   22121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/client.crt: {Name:mkeee24c57991a4cf2957d59b85c7dbd3c8f2331 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:38:41.887965   22121 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/client.key ...
	I0916 10:38:41.887976   22121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/client.key: {Name:mkec5e765e721654d343964b8e5f1903226a6b7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:38:41.888056   22121 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key.c43f27e6
	I0916 10:38:41.888070   22121 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt.c43f27e6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.19 192.168.39.254]
	I0916 10:38:42.038292   22121 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt.c43f27e6 ...
	I0916 10:38:42.038321   22121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt.c43f27e6: {Name:mk7099a2c62f50aa06662b965a0c9069ae5d1f53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:38:42.038481   22121 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key.c43f27e6 ...
	I0916 10:38:42.038493   22121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key.c43f27e6: {Name:mkcc105b422dfe70444931267745dbca1edf49bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:38:42.038566   22121 certs.go:381] copying /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt.c43f27e6 -> /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt
	I0916 10:38:42.038652   22121 certs.go:385] copying /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key.c43f27e6 -> /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key
	I0916 10:38:42.038706   22121 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.key
	I0916 10:38:42.038720   22121 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.crt with IP's: []
	I0916 10:38:42.190304   22121 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.crt ...
	I0916 10:38:42.190334   22121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.crt: {Name:mk8f534095f1a4c3c0f97ea592b35a6ed96cf75d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:38:42.190493   22121 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.key ...
	I0916 10:38:42.190504   22121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.key: {Name:mkb1fc3820bed6bb42a1e04c6b2b6ddfc43271a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:38:42.190577   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 10:38:42.190595   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 10:38:42.190607   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:38:42.190620   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:38:42.190630   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 10:38:42.190643   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 10:38:42.190653   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 10:38:42.190665   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 10:38:42.190709   22121 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem (1338 bytes)
	W0916 10:38:42.190745   22121 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203_empty.pem, impossibly tiny 0 bytes
	I0916 10:38:42.190754   22121 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:38:42.190774   22121 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem (1082 bytes)
	I0916 10:38:42.190818   22121 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:38:42.190848   22121 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem (1679 bytes)
	I0916 10:38:42.190886   22121 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem (1708 bytes)
	I0916 10:38:42.190919   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem -> /usr/share/ca-certificates/11203.pem
	I0916 10:38:42.190932   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem -> /usr/share/ca-certificates/112032.pem
	I0916 10:38:42.190944   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:38:42.191452   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:38:42.217887   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:38:42.242446   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:38:42.266461   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:38:42.289939   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0916 10:38:42.313172   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 10:38:42.337118   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:38:42.360742   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 10:38:42.383602   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem --> /usr/share/ca-certificates/11203.pem (1338 bytes)
	I0916 10:38:42.406581   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem --> /usr/share/ca-certificates/112032.pem (1708 bytes)
	I0916 10:38:42.429672   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:38:42.452865   22121 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 10:38:42.469058   22121 ssh_runner.go:195] Run: openssl version
	I0916 10:38:42.474734   22121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11203.pem && ln -fs /usr/share/ca-certificates/11203.pem /etc/ssl/certs/11203.pem"
	I0916 10:38:42.485883   22121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11203.pem
	I0916 10:38:42.490265   22121 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11203.pem
	I0916 10:38:42.490308   22121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11203.pem
	I0916 10:38:42.495983   22121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11203.pem /etc/ssl/certs/51391683.0"
	I0916 10:38:42.510198   22121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112032.pem && ln -fs /usr/share/ca-certificates/112032.pem /etc/ssl/certs/112032.pem"
	I0916 10:38:42.521298   22121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112032.pem
	I0916 10:38:42.527236   22121 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112032.pem
	I0916 10:38:42.527293   22121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112032.pem
	I0916 10:38:42.533552   22121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112032.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 10:38:42.549332   22121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:38:42.561819   22121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:38:42.568456   22121 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:38:42.568516   22121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:38:42.575583   22121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
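Each CA copied to /usr/share/ca-certificates is then exposed under /etc/ssl/certs through an OpenSSL subject-hash symlink, which is what the three "test -L || ln -fs" commands above set up. A sketch of checking the minikubeCA link on the node, using the b5213941 hash reported in this log:

    # print the subject hash; per the log it is b5213941 for minikubeCA.pem
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # confirm the hash-named symlink points back at the CA
    ls -l /etc/ssl/certs/b5213941.0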
	I0916 10:38:42.586818   22121 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:38:42.590763   22121 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 10:38:42.590815   22121 kubeadm.go:392] StartCluster: {Name:ha-244475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-244475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:38:42.590883   22121 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 10:38:42.590943   22121 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 10:38:42.628496   22121 cri.go:89] found id: ""
	I0916 10:38:42.628553   22121 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 10:38:42.638691   22121 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 10:38:42.648671   22121 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 10:38:42.658424   22121 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 10:38:42.658444   22121 kubeadm.go:157] found existing configuration files:
	
	I0916 10:38:42.658483   22121 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 10:38:42.667543   22121 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 10:38:42.667594   22121 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 10:38:42.677200   22121 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 10:38:42.686120   22121 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 10:38:42.686169   22121 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 10:38:42.695575   22121 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 10:38:42.704585   22121 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 10:38:42.704673   22121 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 10:38:42.714549   22121 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 10:38:42.723658   22121 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 10:38:42.723715   22121 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 10:38:42.733164   22121 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0916 10:38:42.842015   22121 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 10:38:42.842090   22121 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 10:38:42.961804   22121 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 10:38:42.961936   22121 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 10:38:42.962041   22121 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 10:38:42.973403   22121 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 10:38:42.975286   22121 out.go:235]   - Generating certificates and keys ...
	I0916 10:38:42.975379   22121 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 10:38:42.975457   22121 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 10:38:43.030083   22121 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 10:38:43.295745   22121 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 10:38:43.465239   22121 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 10:38:43.533050   22121 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 10:38:43.596361   22121 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 10:38:43.596500   22121 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-244475 localhost] and IPs [192.168.39.19 127.0.0.1 ::1]
	I0916 10:38:43.798754   22121 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 10:38:43.798893   22121 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-244475 localhost] and IPs [192.168.39.19 127.0.0.1 ::1]
	I0916 10:38:43.873275   22121 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 10:38:44.075110   22121 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 10:38:44.129628   22121 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 10:38:44.129726   22121 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 10:38:44.322901   22121 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 10:38:44.558047   22121 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 10:38:44.903170   22121 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 10:38:45.001802   22121 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 10:38:45.146307   22121 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 10:38:45.146914   22121 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 10:38:45.150330   22121 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 10:38:45.152199   22121 out.go:235]   - Booting up control plane ...
	I0916 10:38:45.152314   22121 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 10:38:45.152406   22121 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 10:38:45.152956   22121 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 10:38:45.168296   22121 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 10:38:45.176973   22121 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 10:38:45.177059   22121 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 10:38:45.314163   22121 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 10:38:45.314301   22121 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 10:38:45.816204   22121 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.333685ms
	I0916 10:38:45.816311   22121 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 10:38:51.792476   22121 kubeadm.go:310] [api-check] The API server is healthy after 5.978803709s
	I0916 10:38:51.807629   22121 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 10:38:51.827911   22121 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 10:38:51.862228   22121 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 10:38:51.862446   22121 kubeadm.go:310] [mark-control-plane] Marking the node ha-244475 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 10:38:51.880371   22121 kubeadm.go:310] [bootstrap-token] Using token: z03lik.8myj2g1lawnpsxwz
	I0916 10:38:51.881728   22121 out.go:235]   - Configuring RBAC rules ...
	I0916 10:38:51.881867   22121 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 10:38:51.892035   22121 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 10:38:51.905643   22121 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 10:38:51.910644   22121 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 10:38:51.914471   22121 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 10:38:51.919085   22121 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 10:38:52.199036   22121 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 10:38:52.641913   22121 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 10:38:53.198817   22121 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 10:38:53.200731   22121 kubeadm.go:310] 
	I0916 10:38:53.200796   22121 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 10:38:53.200801   22121 kubeadm.go:310] 
	I0916 10:38:53.200897   22121 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 10:38:53.200923   22121 kubeadm.go:310] 
	I0916 10:38:53.200967   22121 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 10:38:53.201048   22121 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 10:38:53.201151   22121 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 10:38:53.201169   22121 kubeadm.go:310] 
	I0916 10:38:53.201241   22121 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 10:38:53.201252   22121 kubeadm.go:310] 
	I0916 10:38:53.201327   22121 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 10:38:53.201342   22121 kubeadm.go:310] 
	I0916 10:38:53.201417   22121 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 10:38:53.201524   22121 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 10:38:53.201620   22121 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 10:38:53.201636   22121 kubeadm.go:310] 
	I0916 10:38:53.201729   22121 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 10:38:53.201854   22121 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 10:38:53.201865   22121 kubeadm.go:310] 
	I0916 10:38:53.201980   22121 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token z03lik.8myj2g1lawnpsxwz \
	I0916 10:38:53.202117   22121 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:18e2e9ba1c06caae6264fad234e64aee68efc10986a3ab0ed9e448768962daf7 \
	I0916 10:38:53.202140   22121 kubeadm.go:310] 	--control-plane 
	I0916 10:38:53.202144   22121 kubeadm.go:310] 
	I0916 10:38:53.202267   22121 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 10:38:53.202284   22121 kubeadm.go:310] 
	I0916 10:38:53.202396   22121 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token z03lik.8myj2g1lawnpsxwz \
	I0916 10:38:53.202519   22121 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:18e2e9ba1c06caae6264fad234e64aee68efc10986a3ab0ed9e448768962daf7 
	I0916 10:38:53.204612   22121 kubeadm.go:310] W0916 10:38:42.823368     836 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:38:53.204909   22121 kubeadm.go:310] W0916 10:38:42.824196     836 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:38:53.205016   22121 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 10:38:53.205039   22121 cni.go:84] Creating CNI manager for ""
	I0916 10:38:53.205046   22121 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0916 10:38:53.206707   22121 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0916 10:38:53.207859   22121 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 10:38:53.213780   22121 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0916 10:38:53.213797   22121 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 10:38:53.232952   22121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
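With the kindnet CNI manifest applied, an illustrative way to confirm the network pods schedule is to reuse the bundled kubectl path the runner uses throughout this log:

    sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      get pods -n kube-system -o wide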
	I0916 10:38:53.644721   22121 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 10:38:53.644772   22121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:38:53.644775   22121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-244475 minikube.k8s.io/updated_at=2024_09_16T10_38_53_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=ha-244475 minikube.k8s.io/primary=true
	I0916 10:38:53.828940   22121 ops.go:34] apiserver oom_adj: -16
	I0916 10:38:53.829033   22121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:38:54.329149   22121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:38:54.829567   22121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:38:55.329641   22121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:38:55.829630   22121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:38:56.329847   22121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:38:56.829468   22121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:38:57.329221   22121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:38:57.464394   22121 kubeadm.go:1113] duration metric: took 3.819679278s to wait for elevateKubeSystemPrivileges
	I0916 10:38:57.464429   22121 kubeadm.go:394] duration metric: took 14.873616788s to StartCluster
	I0916 10:38:57.464458   22121 settings.go:142] acquiring lock: {Name:mka3093e7066e81538e0a2685a28fdafde8d951b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:38:57.464557   22121 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3851/kubeconfig
	I0916 10:38:57.465226   22121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/kubeconfig: {Name:mkd6460249badead51f07eabf604da45528f6a24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:38:57.465443   22121 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 10:38:57.465469   22121 start.go:241] waiting for startup goroutines ...
	I0916 10:38:57.465470   22121 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 10:38:57.465485   22121 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 10:38:57.465569   22121 addons.go:69] Setting storage-provisioner=true in profile "ha-244475"
	I0916 10:38:57.465585   22121 addons.go:69] Setting default-storageclass=true in profile "ha-244475"
	I0916 10:38:57.465603   22121 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-244475"
	I0916 10:38:57.465609   22121 addons.go:234] Setting addon storage-provisioner=true in "ha-244475"
	I0916 10:38:57.465634   22121 host.go:66] Checking if "ha-244475" exists ...
	I0916 10:38:57.465683   22121 config.go:182] Loaded profile config "ha-244475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:38:57.466032   22121 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:38:57.466071   22121 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:38:57.466075   22121 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:38:57.466116   22121 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:38:57.481103   22121 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37013
	I0916 10:38:57.481138   22121 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34115
	I0916 10:38:57.481582   22121 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:38:57.481618   22121 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:38:57.482091   22121 main.go:141] libmachine: Using API Version  1
	I0916 10:38:57.482118   22121 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:38:57.482234   22121 main.go:141] libmachine: Using API Version  1
	I0916 10:38:57.482258   22121 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:38:57.482437   22121 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:38:57.482607   22121 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:38:57.482769   22121 main.go:141] libmachine: (ha-244475) Calling .GetState
	I0916 10:38:57.483070   22121 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:38:57.483111   22121 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:38:57.484929   22121 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3851/kubeconfig
	I0916 10:38:57.485193   22121 kapi.go:59] client config for ha-244475: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 10:38:57.485590   22121 cert_rotation.go:140] Starting client certificate rotation controller
	I0916 10:38:57.485818   22121 addons.go:234] Setting addon default-storageclass=true in "ha-244475"
	I0916 10:38:57.485861   22121 host.go:66] Checking if "ha-244475" exists ...
	I0916 10:38:57.486134   22121 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:38:57.486172   22121 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:38:57.498299   22121 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33969
	I0916 10:38:57.498828   22121 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:38:57.499447   22121 main.go:141] libmachine: Using API Version  1
	I0916 10:38:57.499474   22121 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:38:57.499850   22121 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:38:57.500054   22121 main.go:141] libmachine: (ha-244475) Calling .GetState
	I0916 10:38:57.500552   22121 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40651
	I0916 10:38:57.500918   22121 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:38:57.501427   22121 main.go:141] libmachine: Using API Version  1
	I0916 10:38:57.501446   22121 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:38:57.501839   22121 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:38:57.501908   22121 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:38:57.502610   22121 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:38:57.502657   22121 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:38:57.503651   22121 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 10:38:57.504966   22121 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:38:57.504987   22121 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 10:38:57.505003   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:38:57.508156   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:57.508589   22121 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:38:57.508615   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:57.508829   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:38:57.508992   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:38:57.509171   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:38:57.509294   22121 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/id_rsa Username:docker}
	I0916 10:38:57.518682   22121 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46723
	I0916 10:38:57.519147   22121 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:38:57.519675   22121 main.go:141] libmachine: Using API Version  1
	I0916 10:38:57.519702   22121 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:38:57.520007   22121 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:38:57.520169   22121 main.go:141] libmachine: (ha-244475) Calling .GetState
	I0916 10:38:57.521733   22121 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:38:57.521948   22121 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 10:38:57.521971   22121 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 10:38:57.521995   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:38:57.524943   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:57.525414   22121 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:38:57.525441   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:38:57.525578   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:38:57.525724   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:38:57.525845   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:38:57.525926   22121 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/id_rsa Username:docker}
	I0916 10:38:57.660884   22121 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 10:38:57.725204   22121 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:38:57.781501   22121 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 10:38:58.313582   22121 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0916 10:38:58.587280   22121 main.go:141] libmachine: Making call to close driver server
	I0916 10:38:58.587305   22121 main.go:141] libmachine: (ha-244475) Calling .Close
	I0916 10:38:58.587383   22121 main.go:141] libmachine: Making call to close driver server
	I0916 10:38:58.587408   22121 main.go:141] libmachine: (ha-244475) Calling .Close
	I0916 10:38:58.587584   22121 main.go:141] libmachine: (ha-244475) DBG | Closing plugin on server side
	I0916 10:38:58.587596   22121 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:38:58.587649   22121 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:38:58.587677   22121 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:38:58.587686   22121 main.go:141] libmachine: (ha-244475) DBG | Closing plugin on server side
	I0916 10:38:58.587689   22121 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:38:58.587706   22121 main.go:141] libmachine: Making call to close driver server
	I0916 10:38:58.587679   22121 main.go:141] libmachine: Making call to close driver server
	I0916 10:38:58.587713   22121 main.go:141] libmachine: (ha-244475) Calling .Close
	I0916 10:38:58.587722   22121 main.go:141] libmachine: (ha-244475) Calling .Close
	I0916 10:38:58.587906   22121 main.go:141] libmachine: (ha-244475) DBG | Closing plugin on server side
	I0916 10:38:58.587935   22121 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:38:58.587948   22121 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:38:58.587979   22121 main.go:141] libmachine: (ha-244475) DBG | Closing plugin on server side
	I0916 10:38:58.588055   22121 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:38:58.588073   22121 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:38:58.588171   22121 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0916 10:38:58.588199   22121 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0916 10:38:58.588294   22121 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0916 10:38:58.588300   22121 round_trippers.go:469] Request Headers:
	I0916 10:38:58.588310   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:58.588315   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:58.605995   22121 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0916 10:38:58.606551   22121 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0916 10:38:58.606569   22121 round_trippers.go:469] Request Headers:
	I0916 10:38:58.606579   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:38:58.606584   22121 round_trippers.go:473]     Content-Type: application/json
	I0916 10:38:58.606587   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:38:58.610730   22121 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:38:58.610908   22121 main.go:141] libmachine: Making call to close driver server
	I0916 10:38:58.610929   22121 main.go:141] libmachine: (ha-244475) Calling .Close
	I0916 10:38:58.611167   22121 main.go:141] libmachine: (ha-244475) DBG | Closing plugin on server side
	I0916 10:38:58.611207   22121 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:38:58.611219   22121 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:38:58.612831   22121 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0916 10:38:58.614176   22121 addons.go:510] duration metric: took 1.1486947s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0916 10:38:58.614214   22121 start.go:246] waiting for cluster config update ...
	I0916 10:38:58.614228   22121 start.go:255] writing updated cluster config ...
	I0916 10:38:58.615876   22121 out.go:201] 
	I0916 10:38:58.617218   22121 config.go:182] Loaded profile config "ha-244475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:38:58.617303   22121 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/config.json ...
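At this point the first control-plane node is up with the storage-provisioner and default-storageclass addons enabled, and the run moves on to provisioning ha-244475-m02. A hedged pair of checks from the host, assuming the ha-244475 profile and kubeconfig context names used in this log:

    minikube addons list -p ha-244475
    kubectl --context ha-244475 get storageclass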
	I0916 10:38:58.618897   22121 out.go:177] * Starting "ha-244475-m02" control-plane node in "ha-244475" cluster
	I0916 10:38:58.620429   22121 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:38:58.620447   22121 cache.go:56] Caching tarball of preloaded images
	I0916 10:38:58.620539   22121 preload.go:172] Found /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 10:38:58.620553   22121 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 10:38:58.620632   22121 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/config.json ...
	I0916 10:38:58.620820   22121 start.go:360] acquireMachinesLock for ha-244475-m02: {Name:mk413037138532b69f77e412c3960796c09316f1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:38:58.620867   22121 start.go:364] duration metric: took 27.412µs to acquireMachinesLock for "ha-244475-m02"
	I0916 10:38:58.620892   22121 start.go:93] Provisioning new machine with config: &{Name:ha-244475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.1 ClusterName:ha-244475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 Cert
Expiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 10:38:58.620984   22121 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0916 10:38:58.622503   22121 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0916 10:38:58.622584   22121 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:38:58.622615   22121 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:38:58.638413   22121 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33507
	I0916 10:38:58.638950   22121 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:38:58.639464   22121 main.go:141] libmachine: Using API Version  1
	I0916 10:38:58.639492   22121 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:38:58.639818   22121 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:38:58.640042   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetMachineName
	I0916 10:38:58.640214   22121 main.go:141] libmachine: (ha-244475-m02) Calling .DriverName
	I0916 10:38:58.640380   22121 start.go:159] libmachine.API.Create for "ha-244475" (driver="kvm2")
	I0916 10:38:58.640411   22121 client.go:168] LocalClient.Create starting
	I0916 10:38:58.640444   22121 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem
	I0916 10:38:58.640482   22121 main.go:141] libmachine: Decoding PEM data...
	I0916 10:38:58.640501   22121 main.go:141] libmachine: Parsing certificate...
	I0916 10:38:58.640575   22121 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem
	I0916 10:38:58.640600   22121 main.go:141] libmachine: Decoding PEM data...
	I0916 10:38:58.640616   22121 main.go:141] libmachine: Parsing certificate...
	I0916 10:38:58.640639   22121 main.go:141] libmachine: Running pre-create checks...
	I0916 10:38:58.640650   22121 main.go:141] libmachine: (ha-244475-m02) Calling .PreCreateCheck
	I0916 10:38:58.640820   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetConfigRaw
	I0916 10:38:58.641229   22121 main.go:141] libmachine: Creating machine...
	I0916 10:38:58.641245   22121 main.go:141] libmachine: (ha-244475-m02) Calling .Create
	I0916 10:38:58.641375   22121 main.go:141] libmachine: (ha-244475-m02) Creating KVM machine...
	I0916 10:38:58.642569   22121 main.go:141] libmachine: (ha-244475-m02) DBG | found existing default KVM network
	I0916 10:38:58.642747   22121 main.go:141] libmachine: (ha-244475-m02) DBG | found existing private KVM network mk-ha-244475
	I0916 10:38:58.642926   22121 main.go:141] libmachine: (ha-244475-m02) Setting up store path in /home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m02 ...
	I0916 10:38:58.642950   22121 main.go:141] libmachine: (ha-244475-m02) Building disk image from file:///home/jenkins/minikube-integration/19651-3851/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso
	I0916 10:38:58.643021   22121 main.go:141] libmachine: (ha-244475-m02) DBG | I0916 10:38:58.642905   22483 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 10:38:58.643109   22121 main.go:141] libmachine: (ha-244475-m02) Downloading /home/jenkins/minikube-integration/19651-3851/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19651-3851/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso...
	I0916 10:38:58.883746   22121 main.go:141] libmachine: (ha-244475-m02) DBG | I0916 10:38:58.883623   22483 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m02/id_rsa...
	I0916 10:38:58.990233   22121 main.go:141] libmachine: (ha-244475-m02) DBG | I0916 10:38:58.990092   22483 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m02/ha-244475-m02.rawdisk...
	I0916 10:38:58.990284   22121 main.go:141] libmachine: (ha-244475-m02) DBG | Writing magic tar header
	I0916 10:38:58.990302   22121 main.go:141] libmachine: (ha-244475-m02) DBG | Writing SSH key tar header
	I0916 10:38:58.990319   22121 main.go:141] libmachine: (ha-244475-m02) DBG | I0916 10:38:58.990203   22483 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m02 ...
	I0916 10:38:58.990329   22121 main.go:141] libmachine: (ha-244475-m02) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m02 (perms=drwx------)
	I0916 10:38:58.990341   22121 main.go:141] libmachine: (ha-244475-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m02
	I0916 10:38:58.990351   22121 main.go:141] libmachine: (ha-244475-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851/.minikube/machines
	I0916 10:38:58.990359   22121 main.go:141] libmachine: (ha-244475-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 10:38:58.990365   22121 main.go:141] libmachine: (ha-244475-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851
	I0916 10:38:58.990378   22121 main.go:141] libmachine: (ha-244475-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0916 10:38:58.990388   22121 main.go:141] libmachine: (ha-244475-m02) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851/.minikube/machines (perms=drwxr-xr-x)
	I0916 10:38:58.990411   22121 main.go:141] libmachine: (ha-244475-m02) DBG | Checking permissions on dir: /home/jenkins
	I0916 10:38:58.990419   22121 main.go:141] libmachine: (ha-244475-m02) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851/.minikube (perms=drwxr-xr-x)
	I0916 10:38:58.990427   22121 main.go:141] libmachine: (ha-244475-m02) DBG | Checking permissions on dir: /home
	I0916 10:38:58.990435   22121 main.go:141] libmachine: (ha-244475-m02) DBG | Skipping /home - not owner
	I0916 10:38:58.990446   22121 main.go:141] libmachine: (ha-244475-m02) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851 (perms=drwxrwxr-x)
	I0916 10:38:58.990454   22121 main.go:141] libmachine: (ha-244475-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0916 10:38:58.990465   22121 main.go:141] libmachine: (ha-244475-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0916 10:38:58.990475   22121 main.go:141] libmachine: (ha-244475-m02) Creating domain...
	I0916 10:38:58.991326   22121 main.go:141] libmachine: (ha-244475-m02) define libvirt domain using xml: 
	I0916 10:38:58.991351   22121 main.go:141] libmachine: (ha-244475-m02) <domain type='kvm'>
	I0916 10:38:58.991380   22121 main.go:141] libmachine: (ha-244475-m02)   <name>ha-244475-m02</name>
	I0916 10:38:58.991401   22121 main.go:141] libmachine: (ha-244475-m02)   <memory unit='MiB'>2200</memory>
	I0916 10:38:58.991408   22121 main.go:141] libmachine: (ha-244475-m02)   <vcpu>2</vcpu>
	I0916 10:38:58.991417   22121 main.go:141] libmachine: (ha-244475-m02)   <features>
	I0916 10:38:58.991441   22121 main.go:141] libmachine: (ha-244475-m02)     <acpi/>
	I0916 10:38:58.991459   22121 main.go:141] libmachine: (ha-244475-m02)     <apic/>
	I0916 10:38:58.991465   22121 main.go:141] libmachine: (ha-244475-m02)     <pae/>
	I0916 10:38:58.991472   22121 main.go:141] libmachine: (ha-244475-m02)     
	I0916 10:38:58.991477   22121 main.go:141] libmachine: (ha-244475-m02)   </features>
	I0916 10:38:58.991482   22121 main.go:141] libmachine: (ha-244475-m02)   <cpu mode='host-passthrough'>
	I0916 10:38:58.991489   22121 main.go:141] libmachine: (ha-244475-m02)   
	I0916 10:38:58.991504   22121 main.go:141] libmachine: (ha-244475-m02)   </cpu>
	I0916 10:38:58.991512   22121 main.go:141] libmachine: (ha-244475-m02)   <os>
	I0916 10:38:58.991516   22121 main.go:141] libmachine: (ha-244475-m02)     <type>hvm</type>
	I0916 10:38:58.991523   22121 main.go:141] libmachine: (ha-244475-m02)     <boot dev='cdrom'/>
	I0916 10:38:58.991528   22121 main.go:141] libmachine: (ha-244475-m02)     <boot dev='hd'/>
	I0916 10:38:58.991535   22121 main.go:141] libmachine: (ha-244475-m02)     <bootmenu enable='no'/>
	I0916 10:38:58.991546   22121 main.go:141] libmachine: (ha-244475-m02)   </os>
	I0916 10:38:58.991554   22121 main.go:141] libmachine: (ha-244475-m02)   <devices>
	I0916 10:38:58.991559   22121 main.go:141] libmachine: (ha-244475-m02)     <disk type='file' device='cdrom'>
	I0916 10:38:58.991569   22121 main.go:141] libmachine: (ha-244475-m02)       <source file='/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m02/boot2docker.iso'/>
	I0916 10:38:58.991574   22121 main.go:141] libmachine: (ha-244475-m02)       <target dev='hdc' bus='scsi'/>
	I0916 10:38:58.991581   22121 main.go:141] libmachine: (ha-244475-m02)       <readonly/>
	I0916 10:38:58.991585   22121 main.go:141] libmachine: (ha-244475-m02)     </disk>
	I0916 10:38:58.991590   22121 main.go:141] libmachine: (ha-244475-m02)     <disk type='file' device='disk'>
	I0916 10:38:58.991596   22121 main.go:141] libmachine: (ha-244475-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0916 10:38:58.991603   22121 main.go:141] libmachine: (ha-244475-m02)       <source file='/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m02/ha-244475-m02.rawdisk'/>
	I0916 10:38:58.991611   22121 main.go:141] libmachine: (ha-244475-m02)       <target dev='hda' bus='virtio'/>
	I0916 10:38:58.991615   22121 main.go:141] libmachine: (ha-244475-m02)     </disk>
	I0916 10:38:58.991620   22121 main.go:141] libmachine: (ha-244475-m02)     <interface type='network'>
	I0916 10:38:58.991625   22121 main.go:141] libmachine: (ha-244475-m02)       <source network='mk-ha-244475'/>
	I0916 10:38:58.991630   22121 main.go:141] libmachine: (ha-244475-m02)       <model type='virtio'/>
	I0916 10:38:58.991637   22121 main.go:141] libmachine: (ha-244475-m02)     </interface>
	I0916 10:38:58.991643   22121 main.go:141] libmachine: (ha-244475-m02)     <interface type='network'>
	I0916 10:38:58.991649   22121 main.go:141] libmachine: (ha-244475-m02)       <source network='default'/>
	I0916 10:38:58.991655   22121 main.go:141] libmachine: (ha-244475-m02)       <model type='virtio'/>
	I0916 10:38:58.991658   22121 main.go:141] libmachine: (ha-244475-m02)     </interface>
	I0916 10:38:58.991663   22121 main.go:141] libmachine: (ha-244475-m02)     <serial type='pty'>
	I0916 10:38:58.991667   22121 main.go:141] libmachine: (ha-244475-m02)       <target port='0'/>
	I0916 10:38:58.991672   22121 main.go:141] libmachine: (ha-244475-m02)     </serial>
	I0916 10:38:58.991681   22121 main.go:141] libmachine: (ha-244475-m02)     <console type='pty'>
	I0916 10:38:58.991692   22121 main.go:141] libmachine: (ha-244475-m02)       <target type='serial' port='0'/>
	I0916 10:38:58.991703   22121 main.go:141] libmachine: (ha-244475-m02)     </console>
	I0916 10:38:58.991728   22121 main.go:141] libmachine: (ha-244475-m02)     <rng model='virtio'>
	I0916 10:38:58.991756   22121 main.go:141] libmachine: (ha-244475-m02)       <backend model='random'>/dev/random</backend>
	I0916 10:38:58.991766   22121 main.go:141] libmachine: (ha-244475-m02)     </rng>
	I0916 10:38:58.991772   22121 main.go:141] libmachine: (ha-244475-m02)     
	I0916 10:38:58.991779   22121 main.go:141] libmachine: (ha-244475-m02)     
	I0916 10:38:58.991792   22121 main.go:141] libmachine: (ha-244475-m02)   </devices>
	I0916 10:38:58.991801   22121 main.go:141] libmachine: (ha-244475-m02) </domain>
	I0916 10:38:58.991810   22121 main.go:141] libmachine: (ha-244475-m02) 
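The block above is the kvm2 driver printing, line by line, the libvirt domain XML it is about to define for ha-244475-m02: the boot ISO attached as a SCSI CD-ROM, the raw disk image on a virtio bus, two virtio NICs (one on the private mk-ha-244475 network, one on default), a serial console, and a virtio RNG. As a rough illustration only (the driver talks to libvirt programmatically; this is not minikube's code), defining and starting a guest from such an XML file can be sketched in Go by shelling out to the virsh CLI, assuming virsh is installed and can reach the local libvirt daemon:

// domaindefine.go — minimal sketch (not minikube's implementation) of defining
// and starting a KVM guest from a domain XML file via the virsh CLI.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func runVirsh(args ...string) error {
	out, err := exec.Command("virsh", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("virsh %v: %v\n%s", args, err, out)
	}
	return nil
}

func main() {
	// The XML file corresponds to what the log above prints line by line.
	if err := runVirsh("define", "ha-244475-m02.xml"); err != nil {
		log.Fatal(err)
	}
	if err := runVirsh("start", "ha-244475-m02"); err != nil {
		log.Fatal(err)
	}
	log.Println("domain defined and started")
}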
	I0916 10:38:58.998246   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:b1:66:ac in network default
	I0916 10:38:58.998886   22121 main.go:141] libmachine: (ha-244475-m02) Ensuring networks are active...
	I0916 10:38:58.998906   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:38:58.999650   22121 main.go:141] libmachine: (ha-244475-m02) Ensuring network default is active
	I0916 10:38:59.000011   22121 main.go:141] libmachine: (ha-244475-m02) Ensuring network mk-ha-244475 is active
	I0916 10:38:59.000423   22121 main.go:141] libmachine: (ha-244475-m02) Getting domain xml...
	I0916 10:38:59.001200   22121 main.go:141] libmachine: (ha-244475-m02) Creating domain...
	I0916 10:39:00.217897   22121 main.go:141] libmachine: (ha-244475-m02) Waiting to get IP...
	I0916 10:39:00.218668   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:00.219076   22121 main.go:141] libmachine: (ha-244475-m02) DBG | unable to find current IP address of domain ha-244475-m02 in network mk-ha-244475
	I0916 10:39:00.219122   22121 main.go:141] libmachine: (ha-244475-m02) DBG | I0916 10:39:00.219065   22483 retry.go:31] will retry after 199.814892ms: waiting for machine to come up
	I0916 10:39:00.420559   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:00.421001   22121 main.go:141] libmachine: (ha-244475-m02) DBG | unable to find current IP address of domain ha-244475-m02 in network mk-ha-244475
	I0916 10:39:00.421022   22121 main.go:141] libmachine: (ha-244475-m02) DBG | I0916 10:39:00.420966   22483 retry.go:31] will retry after 240.671684ms: waiting for machine to come up
	I0916 10:39:00.663384   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:00.663824   22121 main.go:141] libmachine: (ha-244475-m02) DBG | unable to find current IP address of domain ha-244475-m02 in network mk-ha-244475
	I0916 10:39:00.663846   22121 main.go:141] libmachine: (ha-244475-m02) DBG | I0916 10:39:00.663767   22483 retry.go:31] will retry after 337.97981ms: waiting for machine to come up
	I0916 10:39:01.003494   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:01.003942   22121 main.go:141] libmachine: (ha-244475-m02) DBG | unable to find current IP address of domain ha-244475-m02 in network mk-ha-244475
	I0916 10:39:01.003971   22121 main.go:141] libmachine: (ha-244475-m02) DBG | I0916 10:39:01.003897   22483 retry.go:31] will retry after 519.568797ms: waiting for machine to come up
	I0916 10:39:01.524619   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:01.525114   22121 main.go:141] libmachine: (ha-244475-m02) DBG | unable to find current IP address of domain ha-244475-m02 in network mk-ha-244475
	I0916 10:39:01.525169   22121 main.go:141] libmachine: (ha-244475-m02) DBG | I0916 10:39:01.525043   22483 retry.go:31] will retry after 742.703365ms: waiting for machine to come up
	I0916 10:39:02.268894   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:02.269275   22121 main.go:141] libmachine: (ha-244475-m02) DBG | unable to find current IP address of domain ha-244475-m02 in network mk-ha-244475
	I0916 10:39:02.269302   22121 main.go:141] libmachine: (ha-244475-m02) DBG | I0916 10:39:02.269246   22483 retry.go:31] will retry after 918.427714ms: waiting for machine to come up
	I0916 10:39:03.189424   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:03.189835   22121 main.go:141] libmachine: (ha-244475-m02) DBG | unable to find current IP address of domain ha-244475-m02 in network mk-ha-244475
	I0916 10:39:03.189858   22121 main.go:141] libmachine: (ha-244475-m02) DBG | I0916 10:39:03.189810   22483 retry.go:31] will retry after 1.026136416s: waiting for machine to come up
	I0916 10:39:04.217246   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:04.217734   22121 main.go:141] libmachine: (ha-244475-m02) DBG | unable to find current IP address of domain ha-244475-m02 in network mk-ha-244475
	I0916 10:39:04.217759   22121 main.go:141] libmachine: (ha-244475-m02) DBG | I0916 10:39:04.217669   22483 retry.go:31] will retry after 1.280806759s: waiting for machine to come up
	I0916 10:39:05.500057   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:05.500485   22121 main.go:141] libmachine: (ha-244475-m02) DBG | unable to find current IP address of domain ha-244475-m02 in network mk-ha-244475
	I0916 10:39:05.500513   22121 main.go:141] libmachine: (ha-244475-m02) DBG | I0916 10:39:05.500426   22483 retry.go:31] will retry after 1.764059222s: waiting for machine to come up
	I0916 10:39:07.266224   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:07.266648   22121 main.go:141] libmachine: (ha-244475-m02) DBG | unable to find current IP address of domain ha-244475-m02 in network mk-ha-244475
	I0916 10:39:07.266668   22121 main.go:141] libmachine: (ha-244475-m02) DBG | I0916 10:39:07.266605   22483 retry.go:31] will retry after 1.834210088s: waiting for machine to come up
	I0916 10:39:09.102726   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:09.103221   22121 main.go:141] libmachine: (ha-244475-m02) DBG | unable to find current IP address of domain ha-244475-m02 in network mk-ha-244475
	I0916 10:39:09.103251   22121 main.go:141] libmachine: (ha-244475-m02) DBG | I0916 10:39:09.103165   22483 retry.go:31] will retry after 2.739410036s: waiting for machine to come up
	I0916 10:39:11.846017   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:11.846530   22121 main.go:141] libmachine: (ha-244475-m02) DBG | unable to find current IP address of domain ha-244475-m02 in network mk-ha-244475
	I0916 10:39:11.846564   22121 main.go:141] libmachine: (ha-244475-m02) DBG | I0916 10:39:11.846474   22483 retry.go:31] will retry after 2.779311539s: waiting for machine to come up
	I0916 10:39:14.627940   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:14.628351   22121 main.go:141] libmachine: (ha-244475-m02) DBG | unable to find current IP address of domain ha-244475-m02 in network mk-ha-244475
	I0916 10:39:14.628379   22121 main.go:141] libmachine: (ha-244475-m02) DBG | I0916 10:39:14.628315   22483 retry.go:31] will retry after 2.793801544s: waiting for machine to come up
	I0916 10:39:17.425154   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:17.425563   22121 main.go:141] libmachine: (ha-244475-m02) DBG | unable to find current IP address of domain ha-244475-m02 in network mk-ha-244475
	I0916 10:39:17.425580   22121 main.go:141] libmachine: (ha-244475-m02) DBG | I0916 10:39:17.425530   22483 retry.go:31] will retry after 3.470690334s: waiting for machine to come up
	I0916 10:39:20.899627   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:20.900073   22121 main.go:141] libmachine: (ha-244475-m02) Found IP for machine: 192.168.39.222
	I0916 10:39:20.900093   22121 main.go:141] libmachine: (ha-244475-m02) Reserving static IP address...
	I0916 10:39:20.900106   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has current primary IP address 192.168.39.222 and MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:20.900473   22121 main.go:141] libmachine: (ha-244475-m02) DBG | unable to find host DHCP lease matching {name: "ha-244475-m02", mac: "52:54:00:ed:fc:95", ip: "192.168.39.222"} in network mk-ha-244475
	I0916 10:39:20.972758   22121 main.go:141] libmachine: (ha-244475-m02) DBG | Getting to WaitForSSH function...
	I0916 10:39:20.972786   22121 main.go:141] libmachine: (ha-244475-m02) Reserved static IP address: 192.168.39.222
	I0916 10:39:20.972795   22121 main.go:141] libmachine: (ha-244475-m02) Waiting for SSH to be available...
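The "will retry after ..." lines above come from a retry helper that polls for the guest's DHCP lease by MAC address with a growing delay (roughly 200 ms at first, a few seconds later) until an IP appears, at which point the address is reserved and the driver moves on to waiting for SSH. A minimal Go sketch of that retry-with-growing-delay pattern, using a placeholder lookupLeaseIP rather than minikube's actual lease lookup:

// waitforip.go — sketch of the retry pattern visible in the "will retry after"
// log lines. lookupLeaseIP is a placeholder; the real driver reads the lease
// table of the libvirt network for the guest's MAC address.
package main

import (
	"errors"
	"fmt"
	"time"
)

// lookupLeaseIP stands in for querying DHCP leases for a MAC address.
func lookupLeaseIP(mac string) (string, error) {
	return "", errors.New("no lease yet") // placeholder
}

func waitForIP(mac string, maxWait time.Duration) (string, error) {
	deadline := time.Now().Add(maxWait)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupLeaseIP(mac); err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		if delay < 3*time.Second {
			delay += delay / 2 // grow the delay between attempts
		}
	}
	return "", fmt.Errorf("machine %s did not get an IP within %v", mac, maxWait)
}

func main() {
	ip, err := waitForIP("52:54:00:ed:fc:95", 5*time.Second)
	fmt.Println(ip, err)
}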
	I0916 10:39:20.975117   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:20.975582   22121 main.go:141] libmachine: (ha-244475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:fc:95", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:39:13 +0000 UTC Type:0 Mac:52:54:00:ed:fc:95 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ed:fc:95}
	I0916 10:39:20.975610   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:20.975773   22121 main.go:141] libmachine: (ha-244475-m02) DBG | Using SSH client type: external
	I0916 10:39:20.975792   22121 main.go:141] libmachine: (ha-244475-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m02/id_rsa (-rw-------)
	I0916 10:39:20.975827   22121 main.go:141] libmachine: (ha-244475-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.222 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0916 10:39:20.975839   22121 main.go:141] libmachine: (ha-244475-m02) DBG | About to run SSH command:
	I0916 10:39:20.975859   22121 main.go:141] libmachine: (ha-244475-m02) DBG | exit 0
	I0916 10:39:21.101388   22121 main.go:141] libmachine: (ha-244475-m02) DBG | SSH cmd err, output: <nil>: 
	I0916 10:39:21.101625   22121 main.go:141] libmachine: (ha-244475-m02) KVM machine creation complete!
	I0916 10:39:21.101972   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetConfigRaw
	I0916 10:39:21.102551   22121 main.go:141] libmachine: (ha-244475-m02) Calling .DriverName
	I0916 10:39:21.102707   22121 main.go:141] libmachine: (ha-244475-m02) Calling .DriverName
	I0916 10:39:21.102833   22121 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0916 10:39:21.102843   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetState
	I0916 10:39:21.103989   22121 main.go:141] libmachine: Detecting operating system of created instance...
	I0916 10:39:21.104000   22121 main.go:141] libmachine: Waiting for SSH to be available...
	I0916 10:39:21.104005   22121 main.go:141] libmachine: Getting to WaitForSSH function...
	I0916 10:39:21.104010   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHHostname
	I0916 10:39:21.106164   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:21.106508   22121 main.go:141] libmachine: (ha-244475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:fc:95", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:39:13 +0000 UTC Type:0 Mac:52:54:00:ed:fc:95 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-244475-m02 Clientid:01:52:54:00:ed:fc:95}
	I0916 10:39:21.106551   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:21.106712   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHPort
	I0916 10:39:21.106893   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHKeyPath
	I0916 10:39:21.107044   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHKeyPath
	I0916 10:39:21.107170   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHUsername
	I0916 10:39:21.107317   22121 main.go:141] libmachine: Using SSH client type: native
	I0916 10:39:21.107566   22121 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0916 10:39:21.107579   22121 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0916 10:39:21.208324   22121 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 10:39:21.208347   22121 main.go:141] libmachine: Detecting the provisioner...
	I0916 10:39:21.208354   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHHostname
	I0916 10:39:21.211146   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:21.211537   22121 main.go:141] libmachine: (ha-244475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:fc:95", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:39:13 +0000 UTC Type:0 Mac:52:54:00:ed:fc:95 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-244475-m02 Clientid:01:52:54:00:ed:fc:95}
	I0916 10:39:21.211559   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:21.211725   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHPort
	I0916 10:39:21.211895   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHKeyPath
	I0916 10:39:21.212034   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHKeyPath
	I0916 10:39:21.212154   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHUsername
	I0916 10:39:21.212326   22121 main.go:141] libmachine: Using SSH client type: native
	I0916 10:39:21.212516   22121 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0916 10:39:21.212530   22121 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0916 10:39:21.313838   22121 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0916 10:39:21.313941   22121 main.go:141] libmachine: found compatible host: buildroot
	I0916 10:39:21.313956   22121 main.go:141] libmachine: Provisioning with buildroot...
	I0916 10:39:21.313968   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetMachineName
	I0916 10:39:21.314202   22121 buildroot.go:166] provisioning hostname "ha-244475-m02"
	I0916 10:39:21.314225   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetMachineName
	I0916 10:39:21.314348   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHHostname
	I0916 10:39:21.316988   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:21.317383   22121 main.go:141] libmachine: (ha-244475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:fc:95", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:39:13 +0000 UTC Type:0 Mac:52:54:00:ed:fc:95 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-244475-m02 Clientid:01:52:54:00:ed:fc:95}
	I0916 10:39:21.317407   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:21.317573   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHPort
	I0916 10:39:21.317722   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHKeyPath
	I0916 10:39:21.317830   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHKeyPath
	I0916 10:39:21.317925   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHUsername
	I0916 10:39:21.318068   22121 main.go:141] libmachine: Using SSH client type: native
	I0916 10:39:21.318243   22121 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0916 10:39:21.318255   22121 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-244475-m02 && echo "ha-244475-m02" | sudo tee /etc/hostname
	I0916 10:39:21.435511   22121 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-244475-m02
	
	I0916 10:39:21.435550   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHHostname
	I0916 10:39:21.438718   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:21.439163   22121 main.go:141] libmachine: (ha-244475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:fc:95", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:39:13 +0000 UTC Type:0 Mac:52:54:00:ed:fc:95 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-244475-m02 Clientid:01:52:54:00:ed:fc:95}
	I0916 10:39:21.439205   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:21.439382   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHPort
	I0916 10:39:21.439582   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHKeyPath
	I0916 10:39:21.439737   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHKeyPath
	I0916 10:39:21.439947   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHUsername
	I0916 10:39:21.440129   22121 main.go:141] libmachine: Using SSH client type: native
	I0916 10:39:21.440341   22121 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0916 10:39:21.440367   22121 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-244475-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-244475-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-244475-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:39:21.550458   22121 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 10:39:21.550490   22121 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3851/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3851/.minikube}
	I0916 10:39:21.550529   22121 buildroot.go:174] setting up certificates
	I0916 10:39:21.550538   22121 provision.go:84] configureAuth start
	I0916 10:39:21.550547   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetMachineName
	I0916 10:39:21.550825   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetIP
	I0916 10:39:21.553187   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:21.553518   22121 main.go:141] libmachine: (ha-244475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:fc:95", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:39:13 +0000 UTC Type:0 Mac:52:54:00:ed:fc:95 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-244475-m02 Clientid:01:52:54:00:ed:fc:95}
	I0916 10:39:21.553543   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:21.553719   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHHostname
	I0916 10:39:21.555867   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:21.556227   22121 main.go:141] libmachine: (ha-244475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:fc:95", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:39:13 +0000 UTC Type:0 Mac:52:54:00:ed:fc:95 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-244475-m02 Clientid:01:52:54:00:ed:fc:95}
	I0916 10:39:21.556254   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:21.556377   22121 provision.go:143] copyHostCerts
	I0916 10:39:21.556404   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem
	I0916 10:39:21.556435   22121 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem, removing ...
	I0916 10:39:21.556445   22121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem
	I0916 10:39:21.556501   22121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem (1082 bytes)
	I0916 10:39:21.557003   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem
	I0916 10:39:21.557062   22121 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem, removing ...
	I0916 10:39:21.557069   22121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem
	I0916 10:39:21.557114   22121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem (1123 bytes)
	I0916 10:39:21.557194   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem
	I0916 10:39:21.557215   22121 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem, removing ...
	I0916 10:39:21.557221   22121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem
	I0916 10:39:21.557251   22121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem (1679 bytes)
	I0916 10:39:21.557313   22121 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem org=jenkins.ha-244475-m02 san=[127.0.0.1 192.168.39.222 ha-244475-m02 localhost minikube]
	I0916 10:39:21.676307   22121 provision.go:177] copyRemoteCerts
	I0916 10:39:21.676359   22121 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:39:21.676383   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHHostname
	I0916 10:39:21.679208   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:21.679543   22121 main.go:141] libmachine: (ha-244475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:fc:95", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:39:13 +0000 UTC Type:0 Mac:52:54:00:ed:fc:95 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-244475-m02 Clientid:01:52:54:00:ed:fc:95}
	I0916 10:39:21.679570   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:21.679736   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHPort
	I0916 10:39:21.679929   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHKeyPath
	I0916 10:39:21.680073   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHUsername
	I0916 10:39:21.680198   22121 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m02/id_rsa Username:docker}
	I0916 10:39:21.759911   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 10:39:21.759973   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 10:39:21.784754   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 10:39:21.784831   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 10:39:21.808848   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 10:39:21.808934   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 10:39:21.832713   22121 provision.go:87] duration metric: took 282.161069ms to configureAuth
	I0916 10:39:21.832745   22121 buildroot.go:189] setting minikube options for container-runtime
	I0916 10:39:21.832966   22121 config.go:182] Loaded profile config "ha-244475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:39:21.833035   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHHostname
	I0916 10:39:21.835844   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:21.836194   22121 main.go:141] libmachine: (ha-244475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:fc:95", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:39:13 +0000 UTC Type:0 Mac:52:54:00:ed:fc:95 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-244475-m02 Clientid:01:52:54:00:ed:fc:95}
	I0916 10:39:21.836220   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:21.836405   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHPort
	I0916 10:39:21.836587   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHKeyPath
	I0916 10:39:21.836747   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHKeyPath
	I0916 10:39:21.836869   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHUsername
	I0916 10:39:21.836973   22121 main.go:141] libmachine: Using SSH client type: native
	I0916 10:39:21.837163   22121 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0916 10:39:21.837187   22121 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 10:39:22.055982   22121 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 10:39:22.056004   22121 main.go:141] libmachine: Checking connection to Docker...
	I0916 10:39:22.056012   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetURL
	I0916 10:39:22.057317   22121 main.go:141] libmachine: (ha-244475-m02) DBG | Using libvirt version 6000000
	I0916 10:39:22.059932   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:22.060270   22121 main.go:141] libmachine: (ha-244475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:fc:95", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:39:13 +0000 UTC Type:0 Mac:52:54:00:ed:fc:95 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-244475-m02 Clientid:01:52:54:00:ed:fc:95}
	I0916 10:39:22.060291   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:22.060472   22121 main.go:141] libmachine: Docker is up and running!
	I0916 10:39:22.060481   22121 main.go:141] libmachine: Reticulating splines...
	I0916 10:39:22.060487   22121 client.go:171] duration metric: took 23.42006819s to LocalClient.Create
	I0916 10:39:22.060508   22121 start.go:167] duration metric: took 23.420129046s to libmachine.API.Create "ha-244475"
	I0916 10:39:22.060521   22121 start.go:293] postStartSetup for "ha-244475-m02" (driver="kvm2")
	I0916 10:39:22.060537   22121 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:39:22.060553   22121 main.go:141] libmachine: (ha-244475-m02) Calling .DriverName
	I0916 10:39:22.060804   22121 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:39:22.060831   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHHostname
	I0916 10:39:22.062903   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:22.063181   22121 main.go:141] libmachine: (ha-244475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:fc:95", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:39:13 +0000 UTC Type:0 Mac:52:54:00:ed:fc:95 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-244475-m02 Clientid:01:52:54:00:ed:fc:95}
	I0916 10:39:22.063208   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:22.063341   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHPort
	I0916 10:39:22.063491   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHKeyPath
	I0916 10:39:22.063705   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHUsername
	I0916 10:39:22.063813   22121 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m02/id_rsa Username:docker}
	I0916 10:39:22.145615   22121 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:39:22.150644   22121 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 10:39:22.150671   22121 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3851/.minikube/addons for local assets ...
	I0916 10:39:22.150732   22121 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3851/.minikube/files for local assets ...
	I0916 10:39:22.150808   22121 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem -> 112032.pem in /etc/ssl/certs
	I0916 10:39:22.150817   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem -> /etc/ssl/certs/112032.pem
	I0916 10:39:22.150906   22121 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 10:39:22.162177   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem --> /etc/ssl/certs/112032.pem (1708 bytes)
	I0916 10:39:22.188876   22121 start.go:296] duration metric: took 128.339893ms for postStartSetup
	I0916 10:39:22.188928   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetConfigRaw
	I0916 10:39:22.189609   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetIP
	I0916 10:39:22.191896   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:22.192212   22121 main.go:141] libmachine: (ha-244475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:fc:95", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:39:13 +0000 UTC Type:0 Mac:52:54:00:ed:fc:95 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-244475-m02 Clientid:01:52:54:00:ed:fc:95}
	I0916 10:39:22.192246   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:22.192461   22121 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/config.json ...
	I0916 10:39:22.192662   22121 start.go:128] duration metric: took 23.571667259s to createHost
	I0916 10:39:22.192687   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHHostname
	I0916 10:39:22.194553   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:22.194806   22121 main.go:141] libmachine: (ha-244475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:fc:95", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:39:13 +0000 UTC Type:0 Mac:52:54:00:ed:fc:95 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-244475-m02 Clientid:01:52:54:00:ed:fc:95}
	I0916 10:39:22.194832   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:22.194956   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHPort
	I0916 10:39:22.195125   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHKeyPath
	I0916 10:39:22.195252   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHKeyPath
	I0916 10:39:22.195352   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHUsername
	I0916 10:39:22.195512   22121 main.go:141] libmachine: Using SSH client type: native
	I0916 10:39:22.195697   22121 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0916 10:39:22.195714   22121 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 10:39:22.298260   22121 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726483162.257238661
	
	I0916 10:39:22.298294   22121 fix.go:216] guest clock: 1726483162.257238661
	I0916 10:39:22.298303   22121 fix.go:229] Guest: 2024-09-16 10:39:22.257238661 +0000 UTC Remote: 2024-09-16 10:39:22.192675095 +0000 UTC m=+70.025440848 (delta=64.563566ms)
	I0916 10:39:22.298325   22121 fix.go:200] guest clock delta is within tolerance: 64.563566ms
	I0916 10:39:22.298332   22121 start.go:83] releasing machines lock for "ha-244475-m02", held for 23.677456654s
	I0916 10:39:22.298361   22121 main.go:141] libmachine: (ha-244475-m02) Calling .DriverName
	I0916 10:39:22.298605   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetIP
	I0916 10:39:22.301224   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:22.301602   22121 main.go:141] libmachine: (ha-244475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:fc:95", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:39:13 +0000 UTC Type:0 Mac:52:54:00:ed:fc:95 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-244475-m02 Clientid:01:52:54:00:ed:fc:95}
	I0916 10:39:22.301623   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:22.303467   22121 out.go:177] * Found network options:
	I0916 10:39:22.304869   22121 out.go:177]   - NO_PROXY=192.168.39.19
	W0916 10:39:22.306210   22121 proxy.go:119] fail to check proxy env: Error ip not in block
	I0916 10:39:22.306239   22121 main.go:141] libmachine: (ha-244475-m02) Calling .DriverName
	I0916 10:39:22.306761   22121 main.go:141] libmachine: (ha-244475-m02) Calling .DriverName
	I0916 10:39:22.306940   22121 main.go:141] libmachine: (ha-244475-m02) Calling .DriverName
	I0916 10:39:22.307022   22121 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:39:22.307050   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHHostname
	W0916 10:39:22.307076   22121 proxy.go:119] fail to check proxy env: Error ip not in block
	I0916 10:39:22.307148   22121 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 10:39:22.307170   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHHostname
	I0916 10:39:22.309796   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:22.309995   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:22.310175   22121 main.go:141] libmachine: (ha-244475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:fc:95", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:39:13 +0000 UTC Type:0 Mac:52:54:00:ed:fc:95 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-244475-m02 Clientid:01:52:54:00:ed:fc:95}
	I0916 10:39:22.310201   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:22.310319   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHPort
	I0916 10:39:22.310427   22121 main.go:141] libmachine: (ha-244475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:fc:95", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:39:13 +0000 UTC Type:0 Mac:52:54:00:ed:fc:95 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-244475-m02 Clientid:01:52:54:00:ed:fc:95}
	I0916 10:39:22.310453   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:22.310476   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHKeyPath
	I0916 10:39:22.310594   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHPort
	I0916 10:39:22.310660   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHUsername
	I0916 10:39:22.310713   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHKeyPath
	I0916 10:39:22.310788   22121 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m02/id_rsa Username:docker}
	I0916 10:39:22.310823   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHUsername
	I0916 10:39:22.310950   22121 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m02/id_rsa Username:docker}
	I0916 10:39:22.543814   22121 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0916 10:39:22.550133   22121 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 10:39:22.550202   22121 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:39:22.567275   22121 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0916 10:39:22.567305   22121 start.go:495] detecting cgroup driver to use...
	I0916 10:39:22.567376   22121 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 10:39:22.584656   22121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 10:39:22.599498   22121 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:39:22.599566   22121 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:39:22.614104   22121 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:39:22.628372   22121 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:39:22.744286   22121 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:39:22.898472   22121 docker.go:233] disabling docker service ...
	I0916 10:39:22.898553   22121 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:39:22.913618   22121 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:39:22.927202   22121 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:39:23.051522   22121 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:39:23.182181   22121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:39:23.204179   22121 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:39:23.225362   22121 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 10:39:23.225448   22121 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:39:23.237074   22121 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 10:39:23.237150   22121 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:39:23.247895   22121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:39:23.258393   22121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:39:23.269419   22121 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:39:23.279779   22121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:39:23.291172   22121 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:39:23.311053   22121 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:39:23.322116   22121 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:39:23.332200   22121 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0916 10:39:23.332250   22121 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0916 10:39:23.344994   22121 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:39:23.355782   22121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:39:23.481218   22121 ssh_runner.go:195] Run: sudo systemctl restart crio
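The sequence above rewrites the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf with a series of sed substitutions (pause image registry.k8s.io/pause:3.10, cgroup_manager = "cgroupfs", conmon_cgroup = "pod", the net.ipv4.ip_unprivileged_port_start default sysctl), then reloads systemd and restarts crio. A hedged Go sketch of the same whole-line key replacement; the helper is illustrative, not minikube's implementation:

// criocfg.go — illustrative sketch of the sed-style edits applied to the CRI-O
// drop-in above: replace (or append) a `key = "value"` line in the config file.
package main

import (
	"fmt"
	"os"
	"regexp"
)

// setKey replaces any existing `key = ...` line with `key = "value"`,
// appending the line if the key is not present.
func setKey(conf []byte, key, value string) []byte {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	line := fmt.Sprintf("%s = %q", key, value)
	if re.Match(conf) {
		return re.ReplaceAll(conf, []byte(line))
	}
	return append(conf, []byte("\n"+line+"\n")...)
}

func main() {
	path := "/etc/crio/crio.conf.d/02-crio.conf"
	conf, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	conf = setKey(conf, "pause_image", "registry.k8s.io/pause:3.10")
	conf = setKey(conf, "cgroup_manager", "cgroupfs")
	if err := os.WriteFile(path, conf, 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// The log then runs: sudo systemctl daemon-reload && sudo systemctl restart crio
}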
	I0916 10:39:23.579230   22121 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 10:39:23.579298   22121 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 10:39:23.584697   22121 start.go:563] Will wait 60s for crictl version
	I0916 10:39:23.584741   22121 ssh_runner.go:195] Run: which crictl
	I0916 10:39:23.588596   22121 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:39:23.641205   22121 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0916 10:39:23.641281   22121 ssh_runner.go:195] Run: crio --version
	I0916 10:39:23.671177   22121 ssh_runner.go:195] Run: crio --version
	I0916 10:39:23.702253   22121 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0916 10:39:23.703479   22121 out.go:177]   - env NO_PROXY=192.168.39.19
	I0916 10:39:23.704928   22121 main.go:141] libmachine: (ha-244475-m02) Calling .GetIP
	I0916 10:39:23.707459   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:23.707795   22121 main.go:141] libmachine: (ha-244475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:fc:95", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:39:13 +0000 UTC Type:0 Mac:52:54:00:ed:fc:95 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-244475-m02 Clientid:01:52:54:00:ed:fc:95}
	I0916 10:39:23.707824   22121 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:39:23.708043   22121 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0916 10:39:23.712363   22121 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:39:23.725265   22121 mustload.go:65] Loading cluster: ha-244475
	I0916 10:39:23.725441   22121 config.go:182] Loaded profile config "ha-244475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:39:23.725687   22121 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:39:23.725721   22121 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:39:23.740417   22121 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36447
	I0916 10:39:23.740990   22121 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:39:23.741466   22121 main.go:141] libmachine: Using API Version  1
	I0916 10:39:23.741488   22121 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:39:23.741810   22121 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:39:23.742008   22121 main.go:141] libmachine: (ha-244475) Calling .GetState
	I0916 10:39:23.743510   22121 host.go:66] Checking if "ha-244475" exists ...
	I0916 10:39:23.743856   22121 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:39:23.743896   22121 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:39:23.759264   22121 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45685
	I0916 10:39:23.759649   22121 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:39:23.760026   22121 main.go:141] libmachine: Using API Version  1
	I0916 10:39:23.760042   22121 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:39:23.760318   22121 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:39:23.760486   22121 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:39:23.760651   22121 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475 for IP: 192.168.39.222
	I0916 10:39:23.760665   22121 certs.go:194] generating shared ca certs ...
	I0916 10:39:23.760682   22121 certs.go:226] acquiring lock for ca certs: {Name:mkc5eb18b90c7d501da61b378999fd65b785fd5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:39:23.760796   22121 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key
	I0916 10:39:23.760834   22121 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key
	I0916 10:39:23.760847   22121 certs.go:256] generating profile certs ...
	I0916 10:39:23.760915   22121 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/client.key
	I0916 10:39:23.760938   22121 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key.2ecb3d3a
	I0916 10:39:23.760949   22121 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt.2ecb3d3a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.19 192.168.39.222 192.168.39.254]
	I0916 10:39:23.971738   22121 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt.2ecb3d3a ...
	I0916 10:39:23.971765   22121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt.2ecb3d3a: {Name:mk37a27280aa796084417d4aec0944fb7177392b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:39:23.971967   22121 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key.2ecb3d3a ...
	I0916 10:39:23.971985   22121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key.2ecb3d3a: {Name:mkb5d769612983e338b6def0cc291fa133a3ff90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:39:23.972081   22121 certs.go:381] copying /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt.2ecb3d3a -> /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt
	I0916 10:39:23.972210   22121 certs.go:385] copying /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key.2ecb3d3a -> /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key
	I0916 10:39:23.972334   22121 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.key
	I0916 10:39:23.972348   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 10:39:23.972360   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 10:39:23.972373   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:39:23.972388   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:39:23.972400   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 10:39:23.972412   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 10:39:23.972424   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 10:39:23.972437   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 10:39:23.972477   22121 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem (1338 bytes)
	W0916 10:39:23.972504   22121 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203_empty.pem, impossibly tiny 0 bytes
	I0916 10:39:23.972513   22121 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:39:23.972536   22121 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem (1082 bytes)
	I0916 10:39:23.972556   22121 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:39:23.972577   22121 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem (1679 bytes)
	I0916 10:39:23.972612   22121 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem (1708 bytes)
	I0916 10:39:23.972638   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem -> /usr/share/ca-certificates/11203.pem
	I0916 10:39:23.972651   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem -> /usr/share/ca-certificates/112032.pem
	I0916 10:39:23.972663   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:39:23.972694   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:39:23.975828   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:39:23.976221   22121 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:39:23.976248   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:39:23.976413   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:39:23.976620   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:39:23.976774   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:39:23.976882   22121 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/id_rsa Username:docker}
	I0916 10:39:24.053497   22121 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0916 10:39:24.058424   22121 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0916 10:39:24.070223   22121 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0916 10:39:24.074933   22121 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0916 10:39:24.085348   22121 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0916 10:39:24.089709   22121 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0916 10:39:24.102091   22121 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0916 10:39:24.106076   22121 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0916 10:39:24.123270   22121 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0916 10:39:24.127635   22121 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0916 10:39:24.138409   22121 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0916 10:39:24.142528   22121 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0916 10:39:24.158176   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:39:24.183770   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:39:24.210708   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:39:24.237895   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:39:24.265068   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0916 10:39:24.289021   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 10:39:24.312480   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:39:24.336502   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 10:39:24.360309   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem --> /usr/share/ca-certificates/11203.pem (1338 bytes)
	I0916 10:39:24.383990   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem --> /usr/share/ca-certificates/112032.pem (1708 bytes)
	I0916 10:39:24.408205   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:39:24.432243   22121 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0916 10:39:24.449793   22121 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0916 10:39:24.467290   22121 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0916 10:39:24.484273   22121 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0916 10:39:24.501648   22121 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0916 10:39:24.519020   22121 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0916 10:39:24.535943   22121 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0916 10:39:24.552390   22121 ssh_runner.go:195] Run: openssl version
	I0916 10:39:24.558138   22121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11203.pem && ln -fs /usr/share/ca-certificates/11203.pem /etc/ssl/certs/11203.pem"
	I0916 10:39:24.568860   22121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11203.pem
	I0916 10:39:24.574154   22121 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11203.pem
	I0916 10:39:24.574204   22121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11203.pem
	I0916 10:39:24.580119   22121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11203.pem /etc/ssl/certs/51391683.0"
	I0916 10:39:24.592339   22121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112032.pem && ln -fs /usr/share/ca-certificates/112032.pem /etc/ssl/certs/112032.pem"
	I0916 10:39:24.604511   22121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112032.pem
	I0916 10:39:24.609097   22121 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112032.pem
	I0916 10:39:24.609171   22121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112032.pem
	I0916 10:39:24.615026   22121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112032.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 10:39:24.625768   22121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:39:24.636379   22121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:39:24.640871   22121 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:39:24.640920   22121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:39:24.646395   22121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 10:39:24.656801   22121 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:39:24.661571   22121 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 10:39:24.661615   22121 kubeadm.go:934] updating node {m02 192.168.39.222 8443 v1.31.1 crio true true} ...
	I0916 10:39:24.661689   22121 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-244475-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.222
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-244475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 10:39:24.661712   22121 kube-vip.go:115] generating kube-vip config ...
	I0916 10:39:24.661745   22121 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0916 10:39:24.679303   22121 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0916 10:39:24.679364   22121 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0916 10:39:24.679410   22121 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:39:24.689055   22121 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0916 10:39:24.689100   22121 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0916 10:39:24.698937   22121 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0916 10:39:24.698963   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0916 10:39:24.699025   22121 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0916 10:39:24.699054   22121 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19651-3851/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I0916 10:39:24.699062   22121 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19651-3851/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I0916 10:39:24.703600   22121 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0916 10:39:24.703633   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0916 10:39:25.360517   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0916 10:39:25.360604   22121 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0916 10:39:25.365737   22121 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0916 10:39:25.365769   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0916 10:39:25.520604   22121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:39:25.561216   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0916 10:39:25.561328   22121 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0916 10:39:25.578620   22121 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0916 10:39:25.578664   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0916 10:39:25.943225   22121 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0916 10:39:25.953425   22121 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0916 10:39:25.971005   22121 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:39:25.987923   22121 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0916 10:39:26.005037   22121 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0916 10:39:26.008989   22121 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:39:26.022651   22121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:39:26.139506   22121 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:39:26.156924   22121 host.go:66] Checking if "ha-244475" exists ...
	I0916 10:39:26.157320   22121 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:39:26.157358   22121 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:39:26.173843   22121 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41439
	I0916 10:39:26.174382   22121 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:39:26.174982   22121 main.go:141] libmachine: Using API Version  1
	I0916 10:39:26.175008   22121 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:39:26.175329   22121 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:39:26.175507   22121 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:39:26.175651   22121 start.go:317] joinCluster: &{Name:ha-244475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-244475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:39:26.175759   22121 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0916 10:39:26.175773   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:39:26.178960   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:39:26.179415   22121 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:39:26.179439   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:39:26.179692   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:39:26.179878   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:39:26.180020   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:39:26.180170   22121 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/id_rsa Username:docker}
	I0916 10:39:26.331689   22121 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 10:39:26.331744   22121 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token yvzo4h.p3o4vz89426q0tzd --discovery-token-ca-cert-hash sha256:18e2e9ba1c06caae6264fad234e64aee68efc10986a3ab0ed9e448768962daf7 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-244475-m02 --control-plane --apiserver-advertise-address=192.168.39.222 --apiserver-bind-port=8443"
	I0916 10:39:46.581278   22121 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token yvzo4h.p3o4vz89426q0tzd --discovery-token-ca-cert-hash sha256:18e2e9ba1c06caae6264fad234e64aee68efc10986a3ab0ed9e448768962daf7 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-244475-m02 --control-plane --apiserver-advertise-address=192.168.39.222 --apiserver-bind-port=8443": (20.249509056s)
	I0916 10:39:46.581311   22121 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0916 10:39:47.185857   22121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-244475-m02 minikube.k8s.io/updated_at=2024_09_16T10_39_47_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=ha-244475 minikube.k8s.io/primary=false
	I0916 10:39:47.323615   22121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-244475-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0916 10:39:47.452689   22121 start.go:319] duration metric: took 21.277032539s to joinCluster
	I0916 10:39:47.452767   22121 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 10:39:47.453074   22121 config.go:182] Loaded profile config "ha-244475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:39:47.454538   22121 out.go:177] * Verifying Kubernetes components...
	I0916 10:39:47.455883   22121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:39:47.719826   22121 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:39:47.771692   22121 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3851/kubeconfig
	I0916 10:39:47.771937   22121 kapi.go:59] client config for ha-244475: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0916 10:39:47.771997   22121 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.19:8443
	I0916 10:39:47.772181   22121 node_ready.go:35] waiting up to 6m0s for node "ha-244475-m02" to be "Ready" ...
	I0916 10:39:47.772291   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:39:47.772301   22121 round_trippers.go:469] Request Headers:
	I0916 10:39:47.772311   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:47.772317   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:47.784039   22121 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0916 10:39:48.272953   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:39:48.272972   22121 round_trippers.go:469] Request Headers:
	I0916 10:39:48.272981   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:48.272992   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:48.276331   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:39:48.772467   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:39:48.772487   22121 round_trippers.go:469] Request Headers:
	I0916 10:39:48.772495   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:48.772499   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:48.778807   22121 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0916 10:39:49.272650   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:39:49.272673   22121 round_trippers.go:469] Request Headers:
	I0916 10:39:49.272683   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:49.272688   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:49.277698   22121 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:39:49.773047   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:39:49.773069   22121 round_trippers.go:469] Request Headers:
	I0916 10:39:49.773079   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:49.773085   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:49.909815   22121 round_trippers.go:574] Response Status: 200 OK in 136 milliseconds
	I0916 10:39:49.910692   22121 node_ready.go:53] node "ha-244475-m02" has status "Ready":"False"
	I0916 10:39:50.272950   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:39:50.272972   22121 round_trippers.go:469] Request Headers:
	I0916 10:39:50.272982   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:50.272987   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:50.277990   22121 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:39:50.773159   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:39:50.773185   22121 round_trippers.go:469] Request Headers:
	I0916 10:39:50.773196   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:50.773202   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:50.777386   22121 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:39:51.273263   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:39:51.273286   22121 round_trippers.go:469] Request Headers:
	I0916 10:39:51.273294   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:51.273300   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:51.277667   22121 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:39:51.772471   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:39:51.772493   22121 round_trippers.go:469] Request Headers:
	I0916 10:39:51.772502   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:51.772508   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:51.775526   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:39:52.272463   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:39:52.272487   22121 round_trippers.go:469] Request Headers:
	I0916 10:39:52.272504   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:52.272510   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:52.276001   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:39:52.276862   22121 node_ready.go:53] node "ha-244475-m02" has status "Ready":"False"
	I0916 10:39:52.772568   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:39:52.772591   22121 round_trippers.go:469] Request Headers:
	I0916 10:39:52.772598   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:52.772603   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:52.775666   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:39:53.272574   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:39:53.272605   22121 round_trippers.go:469] Request Headers:
	I0916 10:39:53.272614   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:53.272617   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:53.275866   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:39:53.773034   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:39:53.773057   22121 round_trippers.go:469] Request Headers:
	I0916 10:39:53.773065   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:53.773069   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:53.910868   22121 round_trippers.go:574] Response Status: 200 OK in 137 milliseconds
	I0916 10:39:54.272908   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:39:54.272929   22121 round_trippers.go:469] Request Headers:
	I0916 10:39:54.272937   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:54.272940   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:54.276365   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:39:54.276998   22121 node_ready.go:53] node "ha-244475-m02" has status "Ready":"False"
	I0916 10:39:54.772373   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:39:54.772404   22121 round_trippers.go:469] Request Headers:
	I0916 10:39:54.772412   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:54.772415   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:54.775406   22121 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:39:55.272580   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:39:55.272602   22121 round_trippers.go:469] Request Headers:
	I0916 10:39:55.272610   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:55.272614   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:55.275678   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:39:55.772739   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:39:55.772762   22121 round_trippers.go:469] Request Headers:
	I0916 10:39:55.772769   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:55.772773   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:55.776656   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:39:56.273183   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:39:56.273204   22121 round_trippers.go:469] Request Headers:
	I0916 10:39:56.273211   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:56.273216   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:56.276356   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:39:56.773388   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:39:56.773413   22121 round_trippers.go:469] Request Headers:
	I0916 10:39:56.773426   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:56.773433   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:56.776782   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:39:56.777386   22121 node_ready.go:53] node "ha-244475-m02" has status "Ready":"False"
	I0916 10:39:57.272950   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:39:57.272972   22121 round_trippers.go:469] Request Headers:
	I0916 10:39:57.272979   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:57.272984   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:57.276364   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:39:57.773060   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:39:57.773081   22121 round_trippers.go:469] Request Headers:
	I0916 10:39:57.773088   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:57.773092   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:57.776229   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:39:58.273206   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:39:58.273236   22121 round_trippers.go:469] Request Headers:
	I0916 10:39:58.273248   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:58.273255   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:58.277169   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:39:58.773306   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:39:58.773325   22121 round_trippers.go:469] Request Headers:
	I0916 10:39:58.773333   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:58.773336   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:58.776530   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:39:59.272613   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:39:59.272637   22121 round_trippers.go:469] Request Headers:
	I0916 10:39:59.272647   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:59.272653   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:59.277029   22121 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:39:59.277431   22121 node_ready.go:53] node "ha-244475-m02" has status "Ready":"False"
	I0916 10:39:59.772793   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:39:59.772817   22121 round_trippers.go:469] Request Headers:
	I0916 10:39:59.772825   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:39:59.772829   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:39:59.776206   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:00.273273   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:40:00.273295   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:00.273308   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:00.273314   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:00.276740   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:00.772818   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:40:00.772841   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:00.772851   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:00.772857   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:00.776328   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:01.273273   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:40:01.273295   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:01.273304   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:01.273307   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:01.276670   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:01.772774   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:40:01.772805   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:01.772817   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:01.772824   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:01.777379   22121 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:40:01.777815   22121 node_ready.go:53] node "ha-244475-m02" has status "Ready":"False"
	I0916 10:40:02.273195   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:40:02.273218   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:02.273226   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:02.273231   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:02.276605   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:02.773027   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:40:02.773049   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:02.773057   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:02.773062   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:02.776120   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:03.273168   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:40:03.273191   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:03.273199   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:03.273206   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:03.276412   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:03.773044   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:40:03.773066   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:03.773074   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:03.773079   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:03.776511   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:04.272779   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:40:04.272803   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:04.272810   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:04.272814   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:04.276171   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:04.276879   22121 node_ready.go:53] node "ha-244475-m02" has status "Ready":"False"
	I0916 10:40:04.773259   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:40:04.773284   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:04.773291   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:04.773295   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:04.776687   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:05.272635   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:40:05.272667   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:05.272678   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:05.272687   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:05.275813   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:05.772434   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:40:05.772459   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:05.772469   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:05.772474   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:05.776455   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:05.777067   22121 node_ready.go:49] node "ha-244475-m02" has status "Ready":"True"
	I0916 10:40:05.777086   22121 node_ready.go:38] duration metric: took 18.004873295s for node "ha-244475-m02" to be "Ready" ...
	I0916 10:40:05.777095   22121 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:40:05.777206   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I0916 10:40:05.777219   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:05.777229   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:05.777240   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:05.781640   22121 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:40:05.787776   22121 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-lzrg2" in "kube-system" namespace to be "Ready" ...
	I0916 10:40:05.787847   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-lzrg2
	I0916 10:40:05.787856   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:05.787863   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:05.787867   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:05.791078   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:05.791756   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475
	I0916 10:40:05.791771   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:05.791778   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:05.791784   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:05.794551   22121 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:40:05.795202   22121 pod_ready.go:93] pod "coredns-7c65d6cfc9-lzrg2" in "kube-system" namespace has status "Ready":"True"
	I0916 10:40:05.795218   22121 pod_ready.go:82] duration metric: took 7.419929ms for pod "coredns-7c65d6cfc9-lzrg2" in "kube-system" namespace to be "Ready" ...
	I0916 10:40:05.795226   22121 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-m8fd7" in "kube-system" namespace to be "Ready" ...
	I0916 10:40:05.795282   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-m8fd7
	I0916 10:40:05.795290   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:05.795297   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:05.795302   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:05.798095   22121 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:40:05.798774   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475
	I0916 10:40:05.798790   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:05.798797   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:05.798801   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:05.801421   22121 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:40:05.801924   22121 pod_ready.go:93] pod "coredns-7c65d6cfc9-m8fd7" in "kube-system" namespace has status "Ready":"True"
	I0916 10:40:05.801938   22121 pod_ready.go:82] duration metric: took 6.704952ms for pod "coredns-7c65d6cfc9-m8fd7" in "kube-system" namespace to be "Ready" ...
	I0916 10:40:05.801945   22121 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-244475" in "kube-system" namespace to be "Ready" ...
	I0916 10:40:05.801989   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/etcd-ha-244475
	I0916 10:40:05.801997   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:05.802004   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:05.802008   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:05.804181   22121 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:40:05.804710   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475
	I0916 10:40:05.804724   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:05.804730   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:05.804733   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:05.807387   22121 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:40:05.808293   22121 pod_ready.go:93] pod "etcd-ha-244475" in "kube-system" namespace has status "Ready":"True"
	I0916 10:40:05.808307   22121 pod_ready.go:82] duration metric: took 6.357107ms for pod "etcd-ha-244475" in "kube-system" namespace to be "Ready" ...
	I0916 10:40:05.808315   22121 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-244475-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:40:05.808358   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/etcd-ha-244475-m02
	I0916 10:40:05.808365   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:05.808372   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:05.808377   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:05.810955   22121 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:40:05.811488   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:40:05.811500   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:05.811508   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:05.811512   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:05.814011   22121 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:40:05.814463   22121 pod_ready.go:93] pod "etcd-ha-244475-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:40:05.814477   22121 pod_ready.go:82] duration metric: took 6.157572ms for pod "etcd-ha-244475-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:40:05.814489   22121 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-244475" in "kube-system" namespace to be "Ready" ...
	I0916 10:40:05.972835   22121 request.go:632] Waited for 158.29387ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-244475
	I0916 10:40:05.972902   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-244475
	I0916 10:40:05.972922   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:05.972933   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:05.972943   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:05.976765   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:06.172937   22121 request.go:632] Waited for 195.355279ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-244475
	I0916 10:40:06.172986   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475
	I0916 10:40:06.172992   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:06.172998   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:06.173002   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:06.177033   22121 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:40:06.177621   22121 pod_ready.go:93] pod "kube-apiserver-ha-244475" in "kube-system" namespace has status "Ready":"True"
	I0916 10:40:06.177640   22121 pod_ready.go:82] duration metric: took 363.14475ms for pod "kube-apiserver-ha-244475" in "kube-system" namespace to be "Ready" ...
	I0916 10:40:06.177648   22121 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-244475-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:40:06.373192   22121 request.go:632] Waited for 195.483207ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-244475-m02
	I0916 10:40:06.373244   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-244475-m02
	I0916 10:40:06.373249   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:06.373257   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:06.373261   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:06.377043   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:06.573053   22121 request.go:632] Waited for 195.35028ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:40:06.573108   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:40:06.573115   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:06.573136   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:06.573147   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:06.577118   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:06.577677   22121 pod_ready.go:93] pod "kube-apiserver-ha-244475-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:40:06.577694   22121 pod_ready.go:82] duration metric: took 400.039517ms for pod "kube-apiserver-ha-244475-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:40:06.577703   22121 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-244475" in "kube-system" namespace to be "Ready" ...
	I0916 10:40:06.772876   22121 request.go:632] Waited for 195.103028ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-244475
	I0916 10:40:06.772951   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-244475
	I0916 10:40:06.772956   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:06.772964   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:06.772969   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:06.776182   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:06.973323   22121 request.go:632] Waited for 196.373099ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-244475
	I0916 10:40:06.973376   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475
	I0916 10:40:06.973381   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:06.973387   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:06.973392   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:06.976489   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:06.977163   22121 pod_ready.go:93] pod "kube-controller-manager-ha-244475" in "kube-system" namespace has status "Ready":"True"
	I0916 10:40:06.977180   22121 pod_ready.go:82] duration metric: took 399.471495ms for pod "kube-controller-manager-ha-244475" in "kube-system" namespace to be "Ready" ...
	I0916 10:40:06.977190   22121 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-244475-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:40:07.173212   22121 request.go:632] Waited for 195.956208ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-244475-m02
	I0916 10:40:07.173293   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-244475-m02
	I0916 10:40:07.173301   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:07.173312   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:07.173319   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:07.177006   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:07.373012   22121 request.go:632] Waited for 195.452852ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:40:07.373136   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:40:07.373147   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:07.373157   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:07.373166   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:07.376520   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:07.376939   22121 pod_ready.go:93] pod "kube-controller-manager-ha-244475-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:40:07.376955   22121 pod_ready.go:82] duration metric: took 399.760125ms for pod "kube-controller-manager-ha-244475-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:40:07.376963   22121 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-crttt" in "kube-system" namespace to be "Ready" ...
	I0916 10:40:07.573324   22121 request.go:632] Waited for 196.271916ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-crttt
	I0916 10:40:07.573394   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-crttt
	I0916 10:40:07.573402   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:07.573413   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:07.573420   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:07.577193   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:07.773425   22121 request.go:632] Waited for 195.35678ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-244475
	I0916 10:40:07.773476   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475
	I0916 10:40:07.773482   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:07.773488   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:07.773492   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:07.776987   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:07.777804   22121 pod_ready.go:93] pod "kube-proxy-crttt" in "kube-system" namespace has status "Ready":"True"
	I0916 10:40:07.777823   22121 pod_ready.go:82] duration metric: took 400.853941ms for pod "kube-proxy-crttt" in "kube-system" namespace to be "Ready" ...
	I0916 10:40:07.777832   22121 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-t454b" in "kube-system" namespace to be "Ready" ...
	I0916 10:40:07.972928   22121 request.go:632] Waited for 195.015591ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-t454b
	I0916 10:40:07.972986   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-t454b
	I0916 10:40:07.972991   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:07.972998   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:07.973004   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:07.976127   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:08.173342   22121 request.go:632] Waited for 196.327773ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:40:08.173412   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:40:08.173420   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:08.173427   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:08.173433   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:08.177112   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:08.177778   22121 pod_ready.go:93] pod "kube-proxy-t454b" in "kube-system" namespace has status "Ready":"True"
	I0916 10:40:08.177799   22121 pod_ready.go:82] duration metric: took 399.960678ms for pod "kube-proxy-t454b" in "kube-system" namespace to be "Ready" ...
	I0916 10:40:08.177812   22121 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-244475" in "kube-system" namespace to be "Ready" ...
	I0916 10:40:08.372853   22121 request.go:632] Waited for 194.970978ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-244475
	I0916 10:40:08.372917   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-244475
	I0916 10:40:08.372922   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:08.372929   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:08.372936   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:08.375975   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:08.572928   22121 request.go:632] Waited for 196.373637ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-244475
	I0916 10:40:08.572977   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475
	I0916 10:40:08.572982   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:08.572989   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:08.572993   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:08.576124   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:08.576671   22121 pod_ready.go:93] pod "kube-scheduler-ha-244475" in "kube-system" namespace has status "Ready":"True"
	I0916 10:40:08.576689   22121 pod_ready.go:82] duration metric: took 398.869844ms for pod "kube-scheduler-ha-244475" in "kube-system" namespace to be "Ready" ...
	I0916 10:40:08.576697   22121 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-244475-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:40:08.773179   22121 request.go:632] Waited for 196.418181ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-244475-m02
	I0916 10:40:08.773233   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-244475-m02
	I0916 10:40:08.773253   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:08.773265   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:08.773280   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:08.776328   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:08.973400   22121 request.go:632] Waited for 196.398623ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:40:08.973450   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:40:08.973455   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:08.973462   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:08.973468   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:08.977143   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:08.977768   22121 pod_ready.go:93] pod "kube-scheduler-ha-244475-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:40:08.977788   22121 pod_ready.go:82] duration metric: took 401.084234ms for pod "kube-scheduler-ha-244475-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:40:08.977801   22121 pod_ready.go:39] duration metric: took 3.200692542s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:40:08.977817   22121 api_server.go:52] waiting for apiserver process to appear ...
	I0916 10:40:08.977871   22121 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:40:09.001036   22121 api_server.go:72] duration metric: took 21.548229005s to wait for apiserver process to appear ...
	I0916 10:40:09.001060   22121 api_server.go:88] waiting for apiserver healthz status ...
	I0916 10:40:09.001082   22121 api_server.go:253] Checking apiserver healthz at https://192.168.39.19:8443/healthz ...
	I0916 10:40:09.007410   22121 api_server.go:279] https://192.168.39.19:8443/healthz returned 200:
	ok
	I0916 10:40:09.007485   22121 round_trippers.go:463] GET https://192.168.39.19:8443/version
	I0916 10:40:09.007496   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:09.007508   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:09.007518   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:09.008301   22121 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0916 10:40:09.008412   22121 api_server.go:141] control plane version: v1.31.1
	I0916 10:40:09.008429   22121 api_server.go:131] duration metric: took 7.361874ms to wait for apiserver health ...
	I0916 10:40:09.008439   22121 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 10:40:09.172861   22121 request.go:632] Waited for 164.349636ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I0916 10:40:09.172946   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I0916 10:40:09.172952   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:09.172965   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:09.172969   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:09.177801   22121 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:40:09.182059   22121 system_pods.go:59] 17 kube-system pods found
	I0916 10:40:09.182087   22121 system_pods.go:61] "coredns-7c65d6cfc9-lzrg2" [51962d07-f38a-4db3-86ee-af3d954dbec6] Running
	I0916 10:40:09.182142   22121 system_pods.go:61] "coredns-7c65d6cfc9-m8fd7" [fc549709-ddc0-4684-b377-46d33ef8f03d] Running
	I0916 10:40:09.182160   22121 system_pods.go:61] "etcd-ha-244475" [08595572-facf-419a-93e3-9b0ea1938f08] Running
	I0916 10:40:09.182173   22121 system_pods.go:61] "etcd-ha-244475-m02" [d58c0d1e-ef12-4e50-b4d8-86f60754b93d] Running
	I0916 10:40:09.182179   22121 system_pods.go:61] "kindnet-7v2cl" [764ade4d-cbcd-42b8-9d68-b4ed502de9eb] Running
	I0916 10:40:09.182183   22121 system_pods.go:61] "kindnet-xvp82" [3140a3e7-ac3b-4882-b150-20a313e2f20c] Running
	I0916 10:40:09.182187   22121 system_pods.go:61] "kube-apiserver-ha-244475" [b0ea2226-42de-4488-b8fb-73a6828320fc] Running
	I0916 10:40:09.182191   22121 system_pods.go:61] "kube-apiserver-ha-244475-m02" [1e384f04-33c2-49f1-afc0-48807202a04c] Running
	I0916 10:40:09.182195   22121 system_pods.go:61] "kube-controller-manager-ha-244475" [98883403-0a22-486c-aa3a-a3720a5cbfb7] Running
	I0916 10:40:09.182198   22121 system_pods.go:61] "kube-controller-manager-ha-244475-m02" [9e148533-4562-426b-9e8b-3aead772739b] Running
	I0916 10:40:09.182201   22121 system_pods.go:61] "kube-proxy-crttt" [0c8cad04-2c64-42f9-85e2-5e4fbfe7961d] Running
	I0916 10:40:09.182205   22121 system_pods.go:61] "kube-proxy-t454b" [49b7dda6-9a09-4b7d-8adc-568f2fa10ad6] Running
	I0916 10:40:09.182210   22121 system_pods.go:61] "kube-scheduler-ha-244475" [c9527c08-f10b-4d85-9f72-0d0893297b14] Running
	I0916 10:40:09.182214   22121 system_pods.go:61] "kube-scheduler-ha-244475-m02" [bf332de1-6793-4485-9d93-38368d86c6a5] Running
	I0916 10:40:09.182217   22121 system_pods.go:61] "kube-vip-ha-244475" [94b4d383-a0e8-4686-b108-923c0235f371] Running
	I0916 10:40:09.182221   22121 system_pods.go:61] "kube-vip-ha-244475-m02" [6f0a6023-be76-458b-9344-ff51083a217e] Running
	I0916 10:40:09.182228   22121 system_pods.go:61] "storage-provisioner" [2e1264f7-2197-4821-8238-82fac849b145] Running
	I0916 10:40:09.182236   22121 system_pods.go:74] duration metric: took 173.790059ms to wait for pod list to return data ...
	I0916 10:40:09.182248   22121 default_sa.go:34] waiting for default service account to be created ...
	I0916 10:40:09.372607   22121 request.go:632] Waited for 190.269868ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/default/serviceaccounts
	I0916 10:40:09.372663   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/default/serviceaccounts
	I0916 10:40:09.372669   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:09.372683   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:09.372701   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:09.377213   22121 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:40:09.377421   22121 default_sa.go:45] found service account: "default"
	I0916 10:40:09.377440   22121 default_sa.go:55] duration metric: took 195.180856ms for default service account to be created ...
	I0916 10:40:09.377449   22121 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 10:40:09.572867   22121 request.go:632] Waited for 195.351388ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I0916 10:40:09.572951   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I0916 10:40:09.572958   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:09.572968   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:09.572975   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:09.577144   22121 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:40:09.582372   22121 system_pods.go:86] 17 kube-system pods found
	I0916 10:40:09.582396   22121 system_pods.go:89] "coredns-7c65d6cfc9-lzrg2" [51962d07-f38a-4db3-86ee-af3d954dbec6] Running
	I0916 10:40:09.582401   22121 system_pods.go:89] "coredns-7c65d6cfc9-m8fd7" [fc549709-ddc0-4684-b377-46d33ef8f03d] Running
	I0916 10:40:09.582405   22121 system_pods.go:89] "etcd-ha-244475" [08595572-facf-419a-93e3-9b0ea1938f08] Running
	I0916 10:40:09.582409   22121 system_pods.go:89] "etcd-ha-244475-m02" [d58c0d1e-ef12-4e50-b4d8-86f60754b93d] Running
	I0916 10:40:09.582413   22121 system_pods.go:89] "kindnet-7v2cl" [764ade4d-cbcd-42b8-9d68-b4ed502de9eb] Running
	I0916 10:40:09.582417   22121 system_pods.go:89] "kindnet-xvp82" [3140a3e7-ac3b-4882-b150-20a313e2f20c] Running
	I0916 10:40:09.582420   22121 system_pods.go:89] "kube-apiserver-ha-244475" [b0ea2226-42de-4488-b8fb-73a6828320fc] Running
	I0916 10:40:09.582423   22121 system_pods.go:89] "kube-apiserver-ha-244475-m02" [1e384f04-33c2-49f1-afc0-48807202a04c] Running
	I0916 10:40:09.582427   22121 system_pods.go:89] "kube-controller-manager-ha-244475" [98883403-0a22-486c-aa3a-a3720a5cbfb7] Running
	I0916 10:40:09.582430   22121 system_pods.go:89] "kube-controller-manager-ha-244475-m02" [9e148533-4562-426b-9e8b-3aead772739b] Running
	I0916 10:40:09.582433   22121 system_pods.go:89] "kube-proxy-crttt" [0c8cad04-2c64-42f9-85e2-5e4fbfe7961d] Running
	I0916 10:40:09.582436   22121 system_pods.go:89] "kube-proxy-t454b" [49b7dda6-9a09-4b7d-8adc-568f2fa10ad6] Running
	I0916 10:40:09.582439   22121 system_pods.go:89] "kube-scheduler-ha-244475" [c9527c08-f10b-4d85-9f72-0d0893297b14] Running
	I0916 10:40:09.582442   22121 system_pods.go:89] "kube-scheduler-ha-244475-m02" [bf332de1-6793-4485-9d93-38368d86c6a5] Running
	I0916 10:40:09.582445   22121 system_pods.go:89] "kube-vip-ha-244475" [94b4d383-a0e8-4686-b108-923c0235f371] Running
	I0916 10:40:09.582448   22121 system_pods.go:89] "kube-vip-ha-244475-m02" [6f0a6023-be76-458b-9344-ff51083a217e] Running
	I0916 10:40:09.582452   22121 system_pods.go:89] "storage-provisioner" [2e1264f7-2197-4821-8238-82fac849b145] Running
	I0916 10:40:09.582457   22121 system_pods.go:126] duration metric: took 205.002675ms to wait for k8s-apps to be running ...
	I0916 10:40:09.582465   22121 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 10:40:09.582506   22121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:40:09.597644   22121 system_svc.go:56] duration metric: took 15.160872ms WaitForService to wait for kubelet
	I0916 10:40:09.597677   22121 kubeadm.go:582] duration metric: took 22.144873804s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:40:09.597698   22121 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:40:09.773108   22121 request.go:632] Waited for 175.336097ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes
	I0916 10:40:09.773176   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes
	I0916 10:40:09.773183   22121 round_trippers.go:469] Request Headers:
	I0916 10:40:09.773190   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:40:09.773195   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:40:09.776708   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:40:09.777452   22121 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0916 10:40:09.777477   22121 node_conditions.go:123] node cpu capacity is 2
	I0916 10:40:09.777490   22121 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0916 10:40:09.777495   22121 node_conditions.go:123] node cpu capacity is 2
	I0916 10:40:09.777501   22121 node_conditions.go:105] duration metric: took 179.797275ms to run NodePressure ...
	I0916 10:40:09.777515   22121 start.go:241] waiting for startup goroutines ...
	I0916 10:40:09.777580   22121 start.go:255] writing updated cluster config ...
	I0916 10:40:09.779808   22121 out.go:201] 
	I0916 10:40:09.781239   22121 config.go:182] Loaded profile config "ha-244475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:40:09.781337   22121 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/config.json ...
	I0916 10:40:09.782835   22121 out.go:177] * Starting "ha-244475-m03" control-plane node in "ha-244475" cluster
	I0916 10:40:09.783977   22121 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:40:09.783994   22121 cache.go:56] Caching tarball of preloaded images
	I0916 10:40:09.784082   22121 preload.go:172] Found /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 10:40:09.784094   22121 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 10:40:09.784186   22121 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/config.json ...
	I0916 10:40:09.784355   22121 start.go:360] acquireMachinesLock for ha-244475-m03: {Name:mk413037138532b69f77e412c3960796c09316f1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:40:09.784415   22121 start.go:364] duration metric: took 40.424µs to acquireMachinesLock for "ha-244475-m03"
	I0916 10:40:09.784439   22121 start.go:93] Provisioning new machine with config: &{Name:ha-244475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-244475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 10:40:09.784543   22121 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0916 10:40:09.786219   22121 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0916 10:40:09.786291   22121 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:40:09.786324   22121 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:40:09.801282   22121 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35165
	I0916 10:40:09.801761   22121 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:40:09.802231   22121 main.go:141] libmachine: Using API Version  1
	I0916 10:40:09.802254   22121 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:40:09.802548   22121 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:40:09.802764   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetMachineName
	I0916 10:40:09.802865   22121 main.go:141] libmachine: (ha-244475-m03) Calling .DriverName
	I0916 10:40:09.802989   22121 start.go:159] libmachine.API.Create for "ha-244475" (driver="kvm2")
	I0916 10:40:09.803017   22121 client.go:168] LocalClient.Create starting
	I0916 10:40:09.803051   22121 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem
	I0916 10:40:09.803091   22121 main.go:141] libmachine: Decoding PEM data...
	I0916 10:40:09.803118   22121 main.go:141] libmachine: Parsing certificate...
	I0916 10:40:09.803183   22121 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem
	I0916 10:40:09.803210   22121 main.go:141] libmachine: Decoding PEM data...
	I0916 10:40:09.803224   22121 main.go:141] libmachine: Parsing certificate...
	I0916 10:40:09.803249   22121 main.go:141] libmachine: Running pre-create checks...
	I0916 10:40:09.803261   22121 main.go:141] libmachine: (ha-244475-m03) Calling .PreCreateCheck
	I0916 10:40:09.803404   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetConfigRaw
	I0916 10:40:09.803766   22121 main.go:141] libmachine: Creating machine...
	I0916 10:40:09.803781   22121 main.go:141] libmachine: (ha-244475-m03) Calling .Create
	I0916 10:40:09.803937   22121 main.go:141] libmachine: (ha-244475-m03) Creating KVM machine...
	I0916 10:40:09.805160   22121 main.go:141] libmachine: (ha-244475-m03) DBG | found existing default KVM network
	I0916 10:40:09.805337   22121 main.go:141] libmachine: (ha-244475-m03) DBG | found existing private KVM network mk-ha-244475
	I0916 10:40:09.805472   22121 main.go:141] libmachine: (ha-244475-m03) Setting up store path in /home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m03 ...
	I0916 10:40:09.805493   22121 main.go:141] libmachine: (ha-244475-m03) Building disk image from file:///home/jenkins/minikube-integration/19651-3851/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso
	I0916 10:40:09.805577   22121 main.go:141] libmachine: (ha-244475-m03) DBG | I0916 10:40:09.805472   22888 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 10:40:09.805636   22121 main.go:141] libmachine: (ha-244475-m03) Downloading /home/jenkins/minikube-integration/19651-3851/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19651-3851/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso...
	I0916 10:40:10.039594   22121 main.go:141] libmachine: (ha-244475-m03) DBG | I0916 10:40:10.039469   22888 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m03/id_rsa...
	I0916 10:40:10.482395   22121 main.go:141] libmachine: (ha-244475-m03) DBG | I0916 10:40:10.482296   22888 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m03/ha-244475-m03.rawdisk...
	I0916 10:40:10.482425   22121 main.go:141] libmachine: (ha-244475-m03) DBG | Writing magic tar header
	I0916 10:40:10.482435   22121 main.go:141] libmachine: (ha-244475-m03) DBG | Writing SSH key tar header
	I0916 10:40:10.482442   22121 main.go:141] libmachine: (ha-244475-m03) DBG | I0916 10:40:10.482411   22888 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m03 ...
	I0916 10:40:10.482520   22121 main.go:141] libmachine: (ha-244475-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m03
	I0916 10:40:10.482539   22121 main.go:141] libmachine: (ha-244475-m03) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m03 (perms=drwx------)
	I0916 10:40:10.482546   22121 main.go:141] libmachine: (ha-244475-m03) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851/.minikube/machines (perms=drwxr-xr-x)
	I0916 10:40:10.482562   22121 main.go:141] libmachine: (ha-244475-m03) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851/.minikube (perms=drwxr-xr-x)
	I0916 10:40:10.482573   22121 main.go:141] libmachine: (ha-244475-m03) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851 (perms=drwxrwxr-x)
	I0916 10:40:10.482582   22121 main.go:141] libmachine: (ha-244475-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851/.minikube/machines
	I0916 10:40:10.482591   22121 main.go:141] libmachine: (ha-244475-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0916 10:40:10.482605   22121 main.go:141] libmachine: (ha-244475-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 10:40:10.482619   22121 main.go:141] libmachine: (ha-244475-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851
	I0916 10:40:10.482631   22121 main.go:141] libmachine: (ha-244475-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0916 10:40:10.482639   22121 main.go:141] libmachine: (ha-244475-m03) DBG | Checking permissions on dir: /home/jenkins
	I0916 10:40:10.482649   22121 main.go:141] libmachine: (ha-244475-m03) DBG | Checking permissions on dir: /home
	I0916 10:40:10.482658   22121 main.go:141] libmachine: (ha-244475-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0916 10:40:10.482668   22121 main.go:141] libmachine: (ha-244475-m03) Creating domain...
	I0916 10:40:10.482675   22121 main.go:141] libmachine: (ha-244475-m03) DBG | Skipping /home - not owner
	I0916 10:40:10.483703   22121 main.go:141] libmachine: (ha-244475-m03) define libvirt domain using xml: 
	I0916 10:40:10.483728   22121 main.go:141] libmachine: (ha-244475-m03) <domain type='kvm'>
	I0916 10:40:10.483739   22121 main.go:141] libmachine: (ha-244475-m03)   <name>ha-244475-m03</name>
	I0916 10:40:10.483746   22121 main.go:141] libmachine: (ha-244475-m03)   <memory unit='MiB'>2200</memory>
	I0916 10:40:10.483755   22121 main.go:141] libmachine: (ha-244475-m03)   <vcpu>2</vcpu>
	I0916 10:40:10.483762   22121 main.go:141] libmachine: (ha-244475-m03)   <features>
	I0916 10:40:10.483767   22121 main.go:141] libmachine: (ha-244475-m03)     <acpi/>
	I0916 10:40:10.483774   22121 main.go:141] libmachine: (ha-244475-m03)     <apic/>
	I0916 10:40:10.483780   22121 main.go:141] libmachine: (ha-244475-m03)     <pae/>
	I0916 10:40:10.483786   22121 main.go:141] libmachine: (ha-244475-m03)     
	I0916 10:40:10.483791   22121 main.go:141] libmachine: (ha-244475-m03)   </features>
	I0916 10:40:10.483799   22121 main.go:141] libmachine: (ha-244475-m03)   <cpu mode='host-passthrough'>
	I0916 10:40:10.483821   22121 main.go:141] libmachine: (ha-244475-m03)   
	I0916 10:40:10.483839   22121 main.go:141] libmachine: (ha-244475-m03)   </cpu>
	I0916 10:40:10.483851   22121 main.go:141] libmachine: (ha-244475-m03)   <os>
	I0916 10:40:10.483859   22121 main.go:141] libmachine: (ha-244475-m03)     <type>hvm</type>
	I0916 10:40:10.483867   22121 main.go:141] libmachine: (ha-244475-m03)     <boot dev='cdrom'/>
	I0916 10:40:10.483882   22121 main.go:141] libmachine: (ha-244475-m03)     <boot dev='hd'/>
	I0916 10:40:10.483893   22121 main.go:141] libmachine: (ha-244475-m03)     <bootmenu enable='no'/>
	I0916 10:40:10.483900   22121 main.go:141] libmachine: (ha-244475-m03)   </os>
	I0916 10:40:10.483911   22121 main.go:141] libmachine: (ha-244475-m03)   <devices>
	I0916 10:40:10.483918   22121 main.go:141] libmachine: (ha-244475-m03)     <disk type='file' device='cdrom'>
	I0916 10:40:10.483926   22121 main.go:141] libmachine: (ha-244475-m03)       <source file='/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m03/boot2docker.iso'/>
	I0916 10:40:10.483933   22121 main.go:141] libmachine: (ha-244475-m03)       <target dev='hdc' bus='scsi'/>
	I0916 10:40:10.483938   22121 main.go:141] libmachine: (ha-244475-m03)       <readonly/>
	I0916 10:40:10.483942   22121 main.go:141] libmachine: (ha-244475-m03)     </disk>
	I0916 10:40:10.483948   22121 main.go:141] libmachine: (ha-244475-m03)     <disk type='file' device='disk'>
	I0916 10:40:10.483956   22121 main.go:141] libmachine: (ha-244475-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0916 10:40:10.483963   22121 main.go:141] libmachine: (ha-244475-m03)       <source file='/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m03/ha-244475-m03.rawdisk'/>
	I0916 10:40:10.483975   22121 main.go:141] libmachine: (ha-244475-m03)       <target dev='hda' bus='virtio'/>
	I0916 10:40:10.483985   22121 main.go:141] libmachine: (ha-244475-m03)     </disk>
	I0916 10:40:10.483992   22121 main.go:141] libmachine: (ha-244475-m03)     <interface type='network'>
	I0916 10:40:10.484004   22121 main.go:141] libmachine: (ha-244475-m03)       <source network='mk-ha-244475'/>
	I0916 10:40:10.484015   22121 main.go:141] libmachine: (ha-244475-m03)       <model type='virtio'/>
	I0916 10:40:10.484023   22121 main.go:141] libmachine: (ha-244475-m03)     </interface>
	I0916 10:40:10.484028   22121 main.go:141] libmachine: (ha-244475-m03)     <interface type='network'>
	I0916 10:40:10.484035   22121 main.go:141] libmachine: (ha-244475-m03)       <source network='default'/>
	I0916 10:40:10.484040   22121 main.go:141] libmachine: (ha-244475-m03)       <model type='virtio'/>
	I0916 10:40:10.484046   22121 main.go:141] libmachine: (ha-244475-m03)     </interface>
	I0916 10:40:10.484052   22121 main.go:141] libmachine: (ha-244475-m03)     <serial type='pty'>
	I0916 10:40:10.484059   22121 main.go:141] libmachine: (ha-244475-m03)       <target port='0'/>
	I0916 10:40:10.484063   22121 main.go:141] libmachine: (ha-244475-m03)     </serial>
	I0916 10:40:10.484072   22121 main.go:141] libmachine: (ha-244475-m03)     <console type='pty'>
	I0916 10:40:10.484087   22121 main.go:141] libmachine: (ha-244475-m03)       <target type='serial' port='0'/>
	I0916 10:40:10.484099   22121 main.go:141] libmachine: (ha-244475-m03)     </console>
	I0916 10:40:10.484108   22121 main.go:141] libmachine: (ha-244475-m03)     <rng model='virtio'>
	I0916 10:40:10.484116   22121 main.go:141] libmachine: (ha-244475-m03)       <backend model='random'>/dev/random</backend>
	I0916 10:40:10.484122   22121 main.go:141] libmachine: (ha-244475-m03)     </rng>
	I0916 10:40:10.484126   22121 main.go:141] libmachine: (ha-244475-m03)     
	I0916 10:40:10.484132   22121 main.go:141] libmachine: (ha-244475-m03)     
	I0916 10:40:10.484137   22121 main.go:141] libmachine: (ha-244475-m03)   </devices>
	I0916 10:40:10.484143   22121 main.go:141] libmachine: (ha-244475-m03) </domain>
	I0916 10:40:10.484163   22121 main.go:141] libmachine: (ha-244475-m03) 
	I0916 10:40:10.491278   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:3c:e8:d0 in network default
	I0916 10:40:10.491751   22121 main.go:141] libmachine: (ha-244475-m03) Ensuring networks are active...
	I0916 10:40:10.491768   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:10.492390   22121 main.go:141] libmachine: (ha-244475-m03) Ensuring network default is active
	I0916 10:40:10.492675   22121 main.go:141] libmachine: (ha-244475-m03) Ensuring network mk-ha-244475 is active
	I0916 10:40:10.493062   22121 main.go:141] libmachine: (ha-244475-m03) Getting domain xml...
	I0916 10:40:10.493756   22121 main.go:141] libmachine: (ha-244475-m03) Creating domain...
	I0916 10:40:11.721484   22121 main.go:141] libmachine: (ha-244475-m03) Waiting to get IP...
	I0916 10:40:11.722386   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:11.722825   22121 main.go:141] libmachine: (ha-244475-m03) DBG | unable to find current IP address of domain ha-244475-m03 in network mk-ha-244475
	I0916 10:40:11.722864   22121 main.go:141] libmachine: (ha-244475-m03) DBG | I0916 10:40:11.722811   22888 retry.go:31] will retry after 192.331481ms: waiting for machine to come up
	I0916 10:40:11.917419   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:11.917971   22121 main.go:141] libmachine: (ha-244475-m03) DBG | unable to find current IP address of domain ha-244475-m03 in network mk-ha-244475
	I0916 10:40:11.918005   22121 main.go:141] libmachine: (ha-244475-m03) DBG | I0916 10:40:11.917942   22888 retry.go:31] will retry after 286.90636ms: waiting for machine to come up
	I0916 10:40:12.206353   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:12.206819   22121 main.go:141] libmachine: (ha-244475-m03) DBG | unable to find current IP address of domain ha-244475-m03 in network mk-ha-244475
	I0916 10:40:12.206842   22121 main.go:141] libmachine: (ha-244475-m03) DBG | I0916 10:40:12.206741   22888 retry.go:31] will retry after 454.064197ms: waiting for machine to come up
	I0916 10:40:12.662050   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:12.662526   22121 main.go:141] libmachine: (ha-244475-m03) DBG | unable to find current IP address of domain ha-244475-m03 in network mk-ha-244475
	I0916 10:40:12.662551   22121 main.go:141] libmachine: (ha-244475-m03) DBG | I0916 10:40:12.662476   22888 retry.go:31] will retry after 438.548468ms: waiting for machine to come up
	I0916 10:40:13.103062   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:13.103558   22121 main.go:141] libmachine: (ha-244475-m03) DBG | unable to find current IP address of domain ha-244475-m03 in network mk-ha-244475
	I0916 10:40:13.103595   22121 main.go:141] libmachine: (ha-244475-m03) DBG | I0916 10:40:13.103500   22888 retry.go:31] will retry after 487.216711ms: waiting for machine to come up
	I0916 10:40:13.592041   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:13.592483   22121 main.go:141] libmachine: (ha-244475-m03) DBG | unable to find current IP address of domain ha-244475-m03 in network mk-ha-244475
	I0916 10:40:13.592504   22121 main.go:141] libmachine: (ha-244475-m03) DBG | I0916 10:40:13.592433   22888 retry.go:31] will retry after 609.860378ms: waiting for machine to come up
	I0916 10:40:14.204217   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:14.204729   22121 main.go:141] libmachine: (ha-244475-m03) DBG | unable to find current IP address of domain ha-244475-m03 in network mk-ha-244475
	I0916 10:40:14.204756   22121 main.go:141] libmachine: (ha-244475-m03) DBG | I0916 10:40:14.204687   22888 retry.go:31] will retry after 1.08416226s: waiting for machine to come up
	I0916 10:40:15.290010   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:15.290367   22121 main.go:141] libmachine: (ha-244475-m03) DBG | unable to find current IP address of domain ha-244475-m03 in network mk-ha-244475
	I0916 10:40:15.290395   22121 main.go:141] libmachine: (ha-244475-m03) DBG | I0916 10:40:15.290306   22888 retry.go:31] will retry after 1.14272633s: waiting for machine to come up
	I0916 10:40:16.434131   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:16.434447   22121 main.go:141] libmachine: (ha-244475-m03) DBG | unable to find current IP address of domain ha-244475-m03 in network mk-ha-244475
	I0916 10:40:16.434482   22121 main.go:141] libmachine: (ha-244475-m03) DBG | I0916 10:40:16.434408   22888 retry.go:31] will retry after 1.591492555s: waiting for machine to come up
	I0916 10:40:18.027328   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:18.027798   22121 main.go:141] libmachine: (ha-244475-m03) DBG | unable to find current IP address of domain ha-244475-m03 in network mk-ha-244475
	I0916 10:40:18.027827   22121 main.go:141] libmachine: (ha-244475-m03) DBG | I0916 10:40:18.027750   22888 retry.go:31] will retry after 1.626003631s: waiting for machine to come up
	I0916 10:40:19.655097   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:19.655517   22121 main.go:141] libmachine: (ha-244475-m03) DBG | unable to find current IP address of domain ha-244475-m03 in network mk-ha-244475
	I0916 10:40:19.655538   22121 main.go:141] libmachine: (ha-244475-m03) DBG | I0916 10:40:19.655472   22888 retry.go:31] will retry after 2.828805673s: waiting for machine to come up
	I0916 10:40:22.487722   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:22.488228   22121 main.go:141] libmachine: (ha-244475-m03) DBG | unable to find current IP address of domain ha-244475-m03 in network mk-ha-244475
	I0916 10:40:22.488249   22121 main.go:141] libmachine: (ha-244475-m03) DBG | I0916 10:40:22.488180   22888 retry.go:31] will retry after 2.947934423s: waiting for machine to come up
	I0916 10:40:25.437771   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:25.438163   22121 main.go:141] libmachine: (ha-244475-m03) DBG | unable to find current IP address of domain ha-244475-m03 in network mk-ha-244475
	I0916 10:40:25.438187   22121 main.go:141] libmachine: (ha-244475-m03) DBG | I0916 10:40:25.438126   22888 retry.go:31] will retry after 4.191813461s: waiting for machine to come up
	I0916 10:40:29.634188   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:29.634591   22121 main.go:141] libmachine: (ha-244475-m03) DBG | unable to find current IP address of domain ha-244475-m03 in network mk-ha-244475
	I0916 10:40:29.634611   22121 main.go:141] libmachine: (ha-244475-m03) DBG | I0916 10:40:29.634550   22888 retry.go:31] will retry after 4.912264836s: waiting for machine to come up
	I0916 10:40:34.550076   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:34.550468   22121 main.go:141] libmachine: (ha-244475-m03) Found IP for machine: 192.168.39.127
	I0916 10:40:34.550500   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has current primary IP address 192.168.39.127 and MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:34.550516   22121 main.go:141] libmachine: (ha-244475-m03) Reserving static IP address...
	I0916 10:40:34.550823   22121 main.go:141] libmachine: (ha-244475-m03) DBG | unable to find host DHCP lease matching {name: "ha-244475-m03", mac: "52:54:00:e0:15:60", ip: "192.168.39.127"} in network mk-ha-244475
	I0916 10:40:34.624068   22121 main.go:141] libmachine: (ha-244475-m03) Reserved static IP address: 192.168.39.127
	I0916 10:40:34.624092   22121 main.go:141] libmachine: (ha-244475-m03) Waiting for SSH to be available...
	I0916 10:40:34.624101   22121 main.go:141] libmachine: (ha-244475-m03) DBG | Getting to WaitForSSH function...
	I0916 10:40:34.626630   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:34.627078   22121 main.go:141] libmachine: (ha-244475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:15:60", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:40:24 +0000 UTC Type:0 Mac:52:54:00:e0:15:60 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e0:15:60}
	I0916 10:40:34.627178   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:34.627199   22121 main.go:141] libmachine: (ha-244475-m03) DBG | Using SSH client type: external
	I0916 10:40:34.627216   22121 main.go:141] libmachine: (ha-244475-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m03/id_rsa (-rw-------)
	I0916 10:40:34.627249   22121 main.go:141] libmachine: (ha-244475-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.127 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0916 10:40:34.627256   22121 main.go:141] libmachine: (ha-244475-m03) DBG | About to run SSH command:
	I0916 10:40:34.627270   22121 main.go:141] libmachine: (ha-244475-m03) DBG | exit 0
	I0916 10:40:34.749330   22121 main.go:141] libmachine: (ha-244475-m03) DBG | SSH cmd err, output: <nil>: 
	I0916 10:40:34.749611   22121 main.go:141] libmachine: (ha-244475-m03) KVM machine creation complete!
	I0916 10:40:34.749933   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetConfigRaw
	I0916 10:40:34.750501   22121 main.go:141] libmachine: (ha-244475-m03) Calling .DriverName
	I0916 10:40:34.750684   22121 main.go:141] libmachine: (ha-244475-m03) Calling .DriverName
	I0916 10:40:34.750811   22121 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0916 10:40:34.750833   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetState
	I0916 10:40:34.752727   22121 main.go:141] libmachine: Detecting operating system of created instance...
	I0916 10:40:34.752744   22121 main.go:141] libmachine: Waiting for SSH to be available...
	I0916 10:40:34.752751   22121 main.go:141] libmachine: Getting to WaitForSSH function...
	I0916 10:40:34.752759   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHHostname
	I0916 10:40:34.755291   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:34.755682   22121 main.go:141] libmachine: (ha-244475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:15:60", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:40:24 +0000 UTC Type:0 Mac:52:54:00:e0:15:60 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-244475-m03 Clientid:01:52:54:00:e0:15:60}
	I0916 10:40:34.755717   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:34.755865   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHPort
	I0916 10:40:34.756023   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHKeyPath
	I0916 10:40:34.756183   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHKeyPath
	I0916 10:40:34.756327   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHUsername
	I0916 10:40:34.756485   22121 main.go:141] libmachine: Using SSH client type: native
	I0916 10:40:34.756665   22121 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0916 10:40:34.756675   22121 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0916 10:40:34.856271   22121 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 10:40:34.856293   22121 main.go:141] libmachine: Detecting the provisioner...
	I0916 10:40:34.856300   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHHostname
	I0916 10:40:34.859855   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:34.860190   22121 main.go:141] libmachine: (ha-244475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:15:60", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:40:24 +0000 UTC Type:0 Mac:52:54:00:e0:15:60 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-244475-m03 Clientid:01:52:54:00:e0:15:60}
	I0916 10:40:34.860221   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:34.860431   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHPort
	I0916 10:40:34.860594   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHKeyPath
	I0916 10:40:34.860766   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHKeyPath
	I0916 10:40:34.860894   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHUsername
	I0916 10:40:34.861049   22121 main.go:141] libmachine: Using SSH client type: native
	I0916 10:40:34.861260   22121 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0916 10:40:34.861271   22121 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0916 10:40:34.970117   22121 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0916 10:40:34.970189   22121 main.go:141] libmachine: found compatible host: buildroot
	I0916 10:40:34.970202   22121 main.go:141] libmachine: Provisioning with buildroot...
	I0916 10:40:34.970213   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetMachineName
	I0916 10:40:34.970470   22121 buildroot.go:166] provisioning hostname "ha-244475-m03"
	I0916 10:40:34.970497   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetMachineName
	I0916 10:40:34.970663   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHHostname
	I0916 10:40:34.973291   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:34.973662   22121 main.go:141] libmachine: (ha-244475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:15:60", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:40:24 +0000 UTC Type:0 Mac:52:54:00:e0:15:60 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-244475-m03 Clientid:01:52:54:00:e0:15:60}
	I0916 10:40:34.973691   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:34.973816   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHPort
	I0916 10:40:34.973997   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHKeyPath
	I0916 10:40:34.974137   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHKeyPath
	I0916 10:40:34.974267   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHUsername
	I0916 10:40:34.974444   22121 main.go:141] libmachine: Using SSH client type: native
	I0916 10:40:34.974644   22121 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0916 10:40:34.974660   22121 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-244475-m03 && echo "ha-244475-m03" | sudo tee /etc/hostname
	I0916 10:40:35.095518   22121 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-244475-m03
	
	I0916 10:40:35.095558   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHHostname
	I0916 10:40:35.098544   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:35.098924   22121 main.go:141] libmachine: (ha-244475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:15:60", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:40:24 +0000 UTC Type:0 Mac:52:54:00:e0:15:60 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-244475-m03 Clientid:01:52:54:00:e0:15:60}
	I0916 10:40:35.098964   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:35.099171   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHPort
	I0916 10:40:35.099391   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHKeyPath
	I0916 10:40:35.099555   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHKeyPath
	I0916 10:40:35.099700   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHUsername
	I0916 10:40:35.099862   22121 main.go:141] libmachine: Using SSH client type: native
	I0916 10:40:35.100037   22121 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0916 10:40:35.100059   22121 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-244475-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-244475-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-244475-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:40:35.210957   22121 main.go:141] libmachine: SSH cmd err, output: <nil>: 
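For reference, the hostname provisioning logged above reduces to the following shell sequence (reconstructed from the SSH commands in the log; the grep/sed patterns are simplified, so treat this as a sketch of the effect rather than minikube's exact code):

    # set the transient hostname and persist it
    sudo hostname ha-244475-m03 && echo "ha-244475-m03" | sudo tee /etc/hostname
    # make sure the new name resolves locally via /etc/hosts
    if ! grep -q 'ha-244475-m03' /etc/hosts; then
      if grep -q '^127.0.1.1' /etc/hosts; then
        sudo sed -i 's/^127.0.1.1.*/127.0.1.1 ha-244475-m03/' /etc/hosts
      else
        echo '127.0.1.1 ha-244475-m03' | sudo tee -a /etc/hosts
      fi
    fi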
	I0916 10:40:35.210985   22121 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3851/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3851/.minikube}
	I0916 10:40:35.211006   22121 buildroot.go:174] setting up certificates
	I0916 10:40:35.211018   22121 provision.go:84] configureAuth start
	I0916 10:40:35.211028   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetMachineName
	I0916 10:40:35.211274   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetIP
	I0916 10:40:35.213869   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:35.214151   22121 main.go:141] libmachine: (ha-244475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:15:60", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:40:24 +0000 UTC Type:0 Mac:52:54:00:e0:15:60 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-244475-m03 Clientid:01:52:54:00:e0:15:60}
	I0916 10:40:35.214179   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:35.214333   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHHostname
	I0916 10:40:35.216656   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:35.217068   22121 main.go:141] libmachine: (ha-244475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:15:60", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:40:24 +0000 UTC Type:0 Mac:52:54:00:e0:15:60 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-244475-m03 Clientid:01:52:54:00:e0:15:60}
	I0916 10:40:35.217094   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:35.217230   22121 provision.go:143] copyHostCerts
	I0916 10:40:35.217262   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem
	I0916 10:40:35.217292   22121 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem, removing ...
	I0916 10:40:35.217301   22121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem
	I0916 10:40:35.217370   22121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem (1082 bytes)
	I0916 10:40:35.217472   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem
	I0916 10:40:35.217491   22121 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem, removing ...
	I0916 10:40:35.217498   22121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem
	I0916 10:40:35.217524   22121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem (1123 bytes)
	I0916 10:40:35.217564   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem
	I0916 10:40:35.217581   22121 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem, removing ...
	I0916 10:40:35.217587   22121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem
	I0916 10:40:35.217606   22121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem (1679 bytes)
	I0916 10:40:35.217660   22121 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem org=jenkins.ha-244475-m03 san=[127.0.0.1 192.168.39.127 ha-244475-m03 localhost minikube]
	I0916 10:40:35.412945   22121 provision.go:177] copyRemoteCerts
	I0916 10:40:35.412999   22121 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:40:35.413023   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHHostname
	I0916 10:40:35.415370   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:35.415731   22121 main.go:141] libmachine: (ha-244475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:15:60", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:40:24 +0000 UTC Type:0 Mac:52:54:00:e0:15:60 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-244475-m03 Clientid:01:52:54:00:e0:15:60}
	I0916 10:40:35.415761   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:35.415904   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHPort
	I0916 10:40:35.416091   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHKeyPath
	I0916 10:40:35.416250   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHUsername
	I0916 10:40:35.416351   22121 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m03/id_rsa Username:docker}
	I0916 10:40:35.501393   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 10:40:35.501489   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 10:40:35.529014   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 10:40:35.529098   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 10:40:35.555006   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 10:40:35.555088   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 10:40:35.580082   22121 provision.go:87] duration metric: took 369.052998ms to configureAuth
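The copyRemoteCerts step just places the three PEM files into /etc/docker on the new guest. Done by hand it would look roughly like this (illustrative only; minikube streams the files over its own SSH runner rather than calling scp, and the local paths are abbreviated from the log):

    ssh_key=/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m03/id_rsa
    for f in certs/ca.pem machines/server.pem machines/server-key.pem; do
      scp -i "$ssh_key" ".minikube/$f" docker@192.168.39.127:/tmp/
    done
    ssh -i "$ssh_key" docker@192.168.39.127 'sudo mkdir -p /etc/docker && sudo mv /tmp/*.pem /etc/docker/'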
	I0916 10:40:35.580114   22121 buildroot.go:189] setting minikube options for container-runtime
	I0916 10:40:35.580375   22121 config.go:182] Loaded profile config "ha-244475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:40:35.580459   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHHostname
	I0916 10:40:35.582981   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:35.583302   22121 main.go:141] libmachine: (ha-244475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:15:60", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:40:24 +0000 UTC Type:0 Mac:52:54:00:e0:15:60 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-244475-m03 Clientid:01:52:54:00:e0:15:60}
	I0916 10:40:35.583338   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:35.583522   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHPort
	I0916 10:40:35.583678   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHKeyPath
	I0916 10:40:35.583829   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHKeyPath
	I0916 10:40:35.583953   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHUsername
	I0916 10:40:35.584080   22121 main.go:141] libmachine: Using SSH client type: native
	I0916 10:40:35.584280   22121 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0916 10:40:35.584295   22121 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 10:40:35.804379   22121 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 10:40:35.804403   22121 main.go:141] libmachine: Checking connection to Docker...
	I0916 10:40:35.804410   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetURL
	I0916 10:40:35.805786   22121 main.go:141] libmachine: (ha-244475-m03) DBG | Using libvirt version 6000000
	I0916 10:40:35.807818   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:35.808192   22121 main.go:141] libmachine: (ha-244475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:15:60", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:40:24 +0000 UTC Type:0 Mac:52:54:00:e0:15:60 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-244475-m03 Clientid:01:52:54:00:e0:15:60}
	I0916 10:40:35.808220   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:35.808371   22121 main.go:141] libmachine: Docker is up and running!
	I0916 10:40:35.808384   22121 main.go:141] libmachine: Reticulating splines...
	I0916 10:40:35.808390   22121 client.go:171] duration metric: took 26.005363468s to LocalClient.Create
	I0916 10:40:35.808410   22121 start.go:167] duration metric: took 26.005420857s to libmachine.API.Create "ha-244475"
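Before machine creation wraps up, the container-runtime option gathered during provisioning has already been written out; the logged SSH command is equivalent to:

    # extra CRI-O flags; the guest's crio unit is expected to pick this file up as an environment file
    sudo mkdir -p /etc/sysconfig
    printf "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" \
      | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio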
	I0916 10:40:35.808417   22121 start.go:293] postStartSetup for "ha-244475-m03" (driver="kvm2")
	I0916 10:40:35.808441   22121 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:40:35.808457   22121 main.go:141] libmachine: (ha-244475-m03) Calling .DriverName
	I0916 10:40:35.808682   22121 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:40:35.808703   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHHostname
	I0916 10:40:35.810634   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:35.810894   22121 main.go:141] libmachine: (ha-244475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:15:60", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:40:24 +0000 UTC Type:0 Mac:52:54:00:e0:15:60 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-244475-m03 Clientid:01:52:54:00:e0:15:60}
	I0916 10:40:35.810919   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:35.811023   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHPort
	I0916 10:40:35.811207   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHKeyPath
	I0916 10:40:35.811350   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHUsername
	I0916 10:40:35.811483   22121 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m03/id_rsa Username:docker}
	I0916 10:40:35.891724   22121 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:40:35.896159   22121 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 10:40:35.896180   22121 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3851/.minikube/addons for local assets ...
	I0916 10:40:35.896236   22121 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3851/.minikube/files for local assets ...
	I0916 10:40:35.896302   22121 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem -> 112032.pem in /etc/ssl/certs
	I0916 10:40:35.896311   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem -> /etc/ssl/certs/112032.pem
	I0916 10:40:35.896394   22121 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 10:40:35.906252   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem --> /etc/ssl/certs/112032.pem (1708 bytes)
	I0916 10:40:35.931184   22121 start.go:296] duration metric: took 122.750991ms for postStartSetup
	I0916 10:40:35.931237   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetConfigRaw
	I0916 10:40:35.931826   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetIP
	I0916 10:40:35.934282   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:35.934635   22121 main.go:141] libmachine: (ha-244475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:15:60", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:40:24 +0000 UTC Type:0 Mac:52:54:00:e0:15:60 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-244475-m03 Clientid:01:52:54:00:e0:15:60}
	I0916 10:40:35.934663   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:35.934920   22121 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/config.json ...
	I0916 10:40:35.935111   22121 start.go:128] duration metric: took 26.150558333s to createHost
	I0916 10:40:35.935133   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHHostname
	I0916 10:40:35.937290   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:35.937626   22121 main.go:141] libmachine: (ha-244475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:15:60", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:40:24 +0000 UTC Type:0 Mac:52:54:00:e0:15:60 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-244475-m03 Clientid:01:52:54:00:e0:15:60}
	I0916 10:40:35.937654   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:35.937784   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHPort
	I0916 10:40:35.937961   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHKeyPath
	I0916 10:40:35.938124   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHKeyPath
	I0916 10:40:35.938226   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHUsername
	I0916 10:40:35.938360   22121 main.go:141] libmachine: Using SSH client type: native
	I0916 10:40:35.938514   22121 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0916 10:40:35.938523   22121 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 10:40:36.038169   22121 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726483236.017253853
	
	I0916 10:40:36.038199   22121 fix.go:216] guest clock: 1726483236.017253853
	I0916 10:40:36.038211   22121 fix.go:229] Guest: 2024-09-16 10:40:36.017253853 +0000 UTC Remote: 2024-09-16 10:40:35.935121788 +0000 UTC m=+143.767887540 (delta=82.132065ms)
	I0916 10:40:36.038234   22121 fix.go:200] guest clock delta is within tolerance: 82.132065ms
	I0916 10:40:36.038242   22121 start.go:83] releasing machines lock for "ha-244475-m03", held for 26.253815031s
	I0916 10:40:36.038269   22121 main.go:141] libmachine: (ha-244475-m03) Calling .DriverName
	I0916 10:40:36.038526   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetIP
	I0916 10:40:36.041199   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:36.041528   22121 main.go:141] libmachine: (ha-244475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:15:60", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:40:24 +0000 UTC Type:0 Mac:52:54:00:e0:15:60 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-244475-m03 Clientid:01:52:54:00:e0:15:60}
	I0916 10:40:36.041557   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:36.043873   22121 out.go:177] * Found network options:
	I0916 10:40:36.045262   22121 out.go:177]   - NO_PROXY=192.168.39.19,192.168.39.222
	W0916 10:40:36.046405   22121 proxy.go:119] fail to check proxy env: Error ip not in block
	W0916 10:40:36.046427   22121 proxy.go:119] fail to check proxy env: Error ip not in block
	I0916 10:40:36.046443   22121 main.go:141] libmachine: (ha-244475-m03) Calling .DriverName
	I0916 10:40:36.046990   22121 main.go:141] libmachine: (ha-244475-m03) Calling .DriverName
	I0916 10:40:36.047176   22121 main.go:141] libmachine: (ha-244475-m03) Calling .DriverName
	I0916 10:40:36.047272   22121 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:40:36.047304   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHHostname
	W0916 10:40:36.047328   22121 proxy.go:119] fail to check proxy env: Error ip not in block
	W0916 10:40:36.047347   22121 proxy.go:119] fail to check proxy env: Error ip not in block
	I0916 10:40:36.047416   22121 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 10:40:36.047437   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHHostname
	I0916 10:40:36.049999   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:36.050208   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:36.050428   22121 main.go:141] libmachine: (ha-244475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:15:60", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:40:24 +0000 UTC Type:0 Mac:52:54:00:e0:15:60 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-244475-m03 Clientid:01:52:54:00:e0:15:60}
	I0916 10:40:36.050455   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:36.050554   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHPort
	I0916 10:40:36.050601   22121 main.go:141] libmachine: (ha-244475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:15:60", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:40:24 +0000 UTC Type:0 Mac:52:54:00:e0:15:60 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-244475-m03 Clientid:01:52:54:00:e0:15:60}
	I0916 10:40:36.050626   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:36.050708   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHKeyPath
	I0916 10:40:36.050785   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHPort
	I0916 10:40:36.050860   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHUsername
	I0916 10:40:36.050941   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHKeyPath
	I0916 10:40:36.051014   22121 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m03/id_rsa Username:docker}
	I0916 10:40:36.051036   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHUsername
	I0916 10:40:36.051131   22121 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m03/id_rsa Username:docker}
	I0916 10:40:36.283731   22121 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0916 10:40:36.291646   22121 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 10:40:36.291714   22121 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:40:36.309353   22121 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0916 10:40:36.309377   22121 start.go:495] detecting cgroup driver to use...
	I0916 10:40:36.309434   22121 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 10:40:36.327071   22121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 10:40:36.341542   22121 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:40:36.341601   22121 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:40:36.355583   22121 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:40:36.369888   22121 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:40:36.493273   22121 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:40:36.643904   22121 docker.go:233] disabling docker service ...
	I0916 10:40:36.643965   22121 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:40:36.658738   22121 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:40:36.672641   22121 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:40:36.816431   22121 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:40:36.933082   22121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:40:36.949104   22121 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:40:36.970988   22121 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 10:40:36.971047   22121 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:40:36.982120   22121 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 10:40:36.982182   22121 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:40:36.993929   22121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:40:37.005695   22121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:40:37.018804   22121 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:40:37.031297   22121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:40:37.042548   22121 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:40:37.060622   22121 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:40:37.071900   22121 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:40:37.082293   22121 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0916 10:40:37.082349   22121 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0916 10:40:37.096317   22121 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:40:37.107422   22121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:40:37.228410   22121 ssh_runner.go:195] Run: sudo systemctl restart crio
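Taken together, the CRI-O preparation steps above amount to roughly this sequence (sed expressions abbreviated from the logged commands; a sketch of the net effect, not the exact invocations):

    # point crictl at the CRI-O socket
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
    # pin the pause image and the cgroup driver
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    # networking prerequisites for the pod network
    sudo modprobe br_netfilter
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo systemctl daemon-reload && sudo systemctl restart crio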
	I0916 10:40:37.320979   22121 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 10:40:37.321071   22121 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 10:40:37.326439   22121 start.go:563] Will wait 60s for crictl version
	I0916 10:40:37.326501   22121 ssh_runner.go:195] Run: which crictl
	I0916 10:40:37.330626   22121 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:40:37.369842   22121 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0916 10:40:37.369916   22121 ssh_runner.go:195] Run: crio --version
	I0916 10:40:37.402403   22121 ssh_runner.go:195] Run: crio --version
	I0916 10:40:37.437976   22121 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0916 10:40:37.439411   22121 out.go:177]   - env NO_PROXY=192.168.39.19
	I0916 10:40:37.440926   22121 out.go:177]   - env NO_PROXY=192.168.39.19,192.168.39.222
	I0916 10:40:37.442203   22121 main.go:141] libmachine: (ha-244475-m03) Calling .GetIP
	I0916 10:40:37.444743   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:37.445187   22121 main.go:141] libmachine: (ha-244475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:15:60", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:40:24 +0000 UTC Type:0 Mac:52:54:00:e0:15:60 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-244475-m03 Clientid:01:52:54:00:e0:15:60}
	I0916 10:40:37.445214   22121 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:40:37.445428   22121 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0916 10:40:37.449788   22121 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:40:37.464525   22121 mustload.go:65] Loading cluster: ha-244475
	I0916 10:40:37.464778   22121 config.go:182] Loaded profile config "ha-244475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:40:37.465171   22121 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:40:37.465220   22121 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:40:37.480904   22121 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39615
	I0916 10:40:37.481370   22121 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:40:37.481925   22121 main.go:141] libmachine: Using API Version  1
	I0916 10:40:37.481949   22121 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:40:37.482292   22121 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:40:37.482464   22121 main.go:141] libmachine: (ha-244475) Calling .GetState
	I0916 10:40:37.484020   22121 host.go:66] Checking if "ha-244475" exists ...
	I0916 10:40:37.484287   22121 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:40:37.484324   22121 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:40:37.498953   22121 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44919
	I0916 10:40:37.499388   22121 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:40:37.499929   22121 main.go:141] libmachine: Using API Version  1
	I0916 10:40:37.499955   22121 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:40:37.500321   22121 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:40:37.500505   22121 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:40:37.500708   22121 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475 for IP: 192.168.39.127
	I0916 10:40:37.500720   22121 certs.go:194] generating shared ca certs ...
	I0916 10:40:37.500740   22121 certs.go:226] acquiring lock for ca certs: {Name:mkc5eb18b90c7d501da61b378999fd65b785fd5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:40:37.500875   22121 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key
	I0916 10:40:37.500929   22121 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key
	I0916 10:40:37.500943   22121 certs.go:256] generating profile certs ...
	I0916 10:40:37.501030   22121 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/client.key
	I0916 10:40:37.501062   22121 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key.ff67242b
	I0916 10:40:37.501082   22121 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt.ff67242b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.19 192.168.39.222 192.168.39.127 192.168.39.254]
	I0916 10:40:37.647069   22121 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt.ff67242b ...
	I0916 10:40:37.647103   22121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt.ff67242b: {Name:mkbb6bf2be5e587ad1e2fe147b3983eed0461a42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:40:37.647322   22121 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key.ff67242b ...
	I0916 10:40:37.647347   22121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key.ff67242b: {Name:mk98dd7442f0dc4e7003471cb55a0345916f7a4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:40:37.647450   22121 certs.go:381] copying /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt.ff67242b -> /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt
	I0916 10:40:37.647652   22121 certs.go:385] copying /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key.ff67242b -> /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key
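minikube generates the apiserver serving certificate in Go (crypto.go), but the SAN set logged above is easier to read as an openssl equivalent (hypothetical subject and file names, shown only to make the SAN list concrete; requires bash for the process substitution):

    openssl req -new -newkey rsa:2048 -nodes -subj '/CN=minikube' \
      -keyout apiserver.key -out apiserver.csr
    openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
      -days 365 -out apiserver.crt \
      -extfile <(printf 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.39.19,IP:192.168.39.222,IP:192.168.39.127,IP:192.168.39.254')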
	I0916 10:40:37.647850   22121 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.key
	I0916 10:40:37.647872   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 10:40:37.647891   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 10:40:37.647911   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:40:37.647929   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:40:37.647946   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 10:40:37.647963   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 10:40:37.647981   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 10:40:37.647998   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 10:40:37.648062   22121 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem (1338 bytes)
	W0916 10:40:37.648100   22121 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203_empty.pem, impossibly tiny 0 bytes
	I0916 10:40:37.648112   22121 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:40:37.648144   22121 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem (1082 bytes)
	I0916 10:40:37.648175   22121 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:40:37.648204   22121 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem (1679 bytes)
	I0916 10:40:37.648262   22121 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem (1708 bytes)
	I0916 10:40:37.648302   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem -> /usr/share/ca-certificates/112032.pem
	I0916 10:40:37.648320   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:40:37.648380   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem -> /usr/share/ca-certificates/11203.pem
	I0916 10:40:37.648422   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:40:37.651389   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:40:37.651840   22121 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:40:37.651860   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:40:37.652040   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:40:37.652216   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:40:37.652315   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:40:37.652394   22121 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/id_rsa Username:docker}
	I0916 10:40:37.729506   22121 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0916 10:40:37.734982   22121 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0916 10:40:37.746820   22121 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0916 10:40:37.751379   22121 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0916 10:40:37.763059   22121 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0916 10:40:37.767743   22121 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0916 10:40:37.780679   22121 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0916 10:40:37.785070   22121 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0916 10:40:37.796662   22121 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0916 10:40:37.801157   22121 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0916 10:40:37.812496   22121 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0916 10:40:37.817564   22121 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0916 10:40:37.829016   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:40:37.857371   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:40:37.883089   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:40:37.908995   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:40:37.935029   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0916 10:40:37.960446   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 10:40:37.986136   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:40:38.012431   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 10:40:38.047057   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem --> /usr/share/ca-certificates/112032.pem (1708 bytes)
	I0916 10:40:38.075002   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:40:38.101902   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem --> /usr/share/ca-certificates/11203.pem (1338 bytes)
	I0916 10:40:38.129296   22121 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0916 10:40:38.148327   22121 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0916 10:40:38.165421   22121 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0916 10:40:38.182509   22121 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0916 10:40:38.200200   22121 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0916 10:40:38.216843   22121 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0916 10:40:38.233538   22121 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0916 10:40:38.250144   22121 ssh_runner.go:195] Run: openssl version
	I0916 10:40:38.256117   22121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112032.pem && ln -fs /usr/share/ca-certificates/112032.pem /etc/ssl/certs/112032.pem"
	I0916 10:40:38.267112   22121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112032.pem
	I0916 10:40:38.271742   22121 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112032.pem
	I0916 10:40:38.271789   22121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112032.pem
	I0916 10:40:38.277670   22121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112032.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 10:40:38.288768   22121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:40:38.299987   22121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:40:38.304531   22121 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:40:38.304588   22121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:40:38.310343   22121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 10:40:38.321868   22121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11203.pem && ln -fs /usr/share/ca-certificates/11203.pem /etc/ssl/certs/11203.pem"
	I0916 10:40:38.333013   22121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11203.pem
	I0916 10:40:38.337929   22121 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11203.pem
	I0916 10:40:38.337983   22121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11203.pem
	I0916 10:40:38.343812   22121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11203.pem /etc/ssl/certs/51391683.0"
	I0916 10:40:38.354695   22121 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:40:38.358776   22121 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 10:40:38.358821   22121 kubeadm.go:934] updating node {m03 192.168.39.127 8443 v1.31.1 crio true true} ...
	I0916 10:40:38.358893   22121 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-244475-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.127
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-244475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 10:40:38.358916   22121 kube-vip.go:115] generating kube-vip config ...
	I0916 10:40:38.358947   22121 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0916 10:40:38.376976   22121 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0916 10:40:38.377036   22121 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
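The generated manifest is later written as a static pod (the 1441-byte scp to /etc/kubernetes/manifests/kube-vip.yaml a few lines further down). Doing the same by hand, assuming the YAML above were saved locally as kube-vip.yaml, would be:

    sudo mkdir -p /etc/kubernetes/manifests
    sudo cp kube-vip.yaml /etc/kubernetes/manifests/kube-vip.yaml
    # kubelet picks static pod manifests up from this directory once it starts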
	I0916 10:40:38.377091   22121 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:40:38.386658   22121 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0916 10:40:38.386709   22121 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0916 10:40:38.397169   22121 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0916 10:40:38.397180   22121 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0916 10:40:38.397205   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0916 10:40:38.397221   22121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:40:38.397225   22121 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0916 10:40:38.397245   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0916 10:40:38.397272   22121 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0916 10:40:38.397322   22121 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0916 10:40:38.414712   22121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0916 10:40:38.414816   22121 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0916 10:40:38.414828   22121 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0916 10:40:38.414843   22121 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0916 10:40:38.414851   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0916 10:40:38.414867   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0916 10:40:38.425835   22121 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0916 10:40:38.425882   22121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
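The binaries are pushed to the guest from the host-side cache; the URLs logged above show where they would be fetched from, together with a published .sha256 checksum, when not already present. Fetched by hand that would be roughly (curl and sha256sum shown for illustration only):

    VER=v1.31.1 ARCH=amd64
    sudo mkdir -p "/var/lib/minikube/binaries/${VER}"
    for bin in kubelet kubectl kubeadm; do
      curl -fsSLo "$bin" "https://dl.k8s.io/release/${VER}/bin/linux/${ARCH}/${bin}"
      # the .sha256 file contains only the hex digest, so build a "digest  filename" line for sha256sum
      echo "$(curl -fsSL "https://dl.k8s.io/release/${VER}/bin/linux/${ARCH}/${bin}.sha256")  ${bin}" | sha256sum --check
      sudo install -m 0755 "$bin" "/var/lib/minikube/binaries/${VER}/${bin}"
    done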
	I0916 10:40:39.292544   22121 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0916 10:40:39.302520   22121 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0916 10:40:39.321739   22121 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:40:39.339714   22121 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0916 10:40:39.356647   22121 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0916 10:40:39.360860   22121 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:40:39.373051   22121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:40:39.503177   22121 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:40:39.521517   22121 host.go:66] Checking if "ha-244475" exists ...
	I0916 10:40:39.521933   22121 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:40:39.521999   22121 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:40:39.539241   22121 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43423
	I0916 10:40:39.539779   22121 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:40:39.540277   22121 main.go:141] libmachine: Using API Version  1
	I0916 10:40:39.540296   22121 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:40:39.540592   22121 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:40:39.540793   22121 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:40:39.540980   22121 start.go:317] joinCluster: &{Name:ha-244475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-244475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:40:39.541103   22121 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0916 10:40:39.541140   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:40:39.544084   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:40:39.544467   22121 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:40:39.544489   22121 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:40:39.544609   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:40:39.544797   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:40:39.544947   22121 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:40:39.545069   22121 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/id_rsa Username:docker}
	I0916 10:40:39.712936   22121 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 10:40:39.712986   22121 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 4c794a.yzkn6fbxc862odl2 --discovery-token-ca-cert-hash sha256:18e2e9ba1c06caae6264fad234e64aee68efc10986a3ab0ed9e448768962daf7 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-244475-m03 --control-plane --apiserver-advertise-address=192.168.39.127 --apiserver-bind-port=8443"
	I0916 10:41:02.405074   22121 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 4c794a.yzkn6fbxc862odl2 --discovery-token-ca-cert-hash sha256:18e2e9ba1c06caae6264fad234e64aee68efc10986a3ab0ed9e448768962daf7 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-244475-m03 --control-plane --apiserver-advertise-address=192.168.39.127 --apiserver-bind-port=8443": (22.692059229s)
	I0916 10:41:02.405117   22121 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0916 10:41:02.989273   22121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-244475-m03 minikube.k8s.io/updated_at=2024_09_16T10_41_02_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=ha-244475 minikube.k8s.io/primary=false
	I0916 10:41:03.155780   22121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-244475-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0916 10:41:03.294611   22121 start.go:319] duration metric: took 23.75362709s to joinCluster
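
The join step above reduces to two kubeadm invocations: the existing control plane mints a join command with a non-expiring bootstrap token (kubeadm token create --print-join-command --ttl=0), and the new machine runs that command with --control-plane and its advertise address appended. A rough local sketch of the same handshake, assuming kubeadm is on PATH; in the real run both commands execute on the VMs over SSH as logged.

// Sketch: mint a join command on the first control plane, then extend it for a new control-plane node.
// In this test both steps run remotely over SSH; here they are plain exec calls for illustration.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubeadm", "token", "create",
		"--print-join-command", "--ttl=0").Output()
	if err != nil {
		panic(err)
	}
	joinCmd := strings.TrimSpace(string(out))

	// Append the flags the log shows for a control-plane join (address is illustrative).
	full := joinCmd + " --control-plane --apiserver-advertise-address=192.168.39.127 --apiserver-bind-port=8443"
	fmt.Println("run on the joining node:", full)
}
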
	I0916 10:41:03.294689   22121 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 10:41:03.295014   22121 config.go:182] Loaded profile config "ha-244475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:41:03.296058   22121 out.go:177] * Verifying Kubernetes components...
	I0916 10:41:03.297444   22121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:41:03.509480   22121 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:41:03.527697   22121 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3851/kubeconfig
	I0916 10:41:03.527973   22121 kapi.go:59] client config for ha-244475: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0916 10:41:03.528069   22121 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.19:8443
	I0916 10:41:03.528297   22121 node_ready.go:35] waiting up to 6m0s for node "ha-244475-m03" to be "Ready" ...
	I0916 10:41:03.528381   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:03.528392   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:03.528403   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:03.528409   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:03.535009   22121 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0916 10:41:04.028547   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:04.028568   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:04.028577   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:04.028590   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:04.032000   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:04.528593   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:04.528621   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:04.528632   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:04.528639   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:04.531853   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:05.028474   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:05.028495   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:05.028507   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:05.028510   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:05.031970   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:05.529004   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:05.529030   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:05.529040   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:05.529046   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:05.534346   22121 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0916 10:41:05.535149   22121 node_ready.go:53] node "ha-244475-m03" has status "Ready":"False"
	I0916 10:41:06.028524   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:06.028552   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:06.028563   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:06.028568   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:06.031926   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:06.529358   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:06.529383   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:06.529396   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:06.529402   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:06.535725   22121 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0916 10:41:07.028522   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:07.028543   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:07.028551   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:07.028557   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:07.032906   22121 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:41:07.529385   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:07.529413   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:07.529425   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:07.529431   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:07.535794   22121 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0916 10:41:07.536408   22121 node_ready.go:53] node "ha-244475-m03" has status "Ready":"False"
	I0916 10:41:08.029514   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:08.029549   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:08.029561   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:08.029567   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:08.032852   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:08.528497   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:08.528520   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:08.528529   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:08.528535   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:08.532921   22121 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:41:09.028942   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:09.028962   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:09.028969   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:09.028972   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:09.032474   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:09.528551   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:09.528576   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:09.528586   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:09.528591   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:09.532995   22121 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:41:10.028544   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:10.028577   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:10.028584   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:10.028588   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:10.032079   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:10.032575   22121 node_ready.go:53] node "ha-244475-m03" has status "Ready":"False"
	I0916 10:41:10.528902   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:10.528926   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:10.528934   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:10.528938   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:10.535638   22121 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0916 10:41:11.028651   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:11.028672   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:11.028679   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:11.028682   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:11.032105   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:11.529486   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:11.529515   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:11.529526   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:11.529531   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:11.535563   22121 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0916 10:41:12.029412   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:12.029432   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:12.029440   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:12.029444   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:12.033149   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:12.033738   22121 node_ready.go:53] node "ha-244475-m03" has status "Ready":"False"
	I0916 10:41:12.528711   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:12.528733   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:12.528742   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:12.528746   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:12.534586   22121 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0916 10:41:13.029512   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:13.029536   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:13.029547   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:13.029553   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:13.033681   22121 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:41:13.529522   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:13.529548   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:13.529559   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:13.529566   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:13.533930   22121 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:41:14.029172   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:14.029194   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:14.029202   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:14.029206   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:14.032272   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:14.529072   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:14.529094   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:14.529102   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:14.529107   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:14.535318   22121 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0916 10:41:14.535890   22121 node_ready.go:53] node "ha-244475-m03" has status "Ready":"False"
	I0916 10:41:15.029077   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:15.029101   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:15.029113   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:15.029122   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:15.032652   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:15.528843   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:15.528869   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:15.528876   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:15.528883   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:15.533117   22121 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:41:16.028968   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:16.028990   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:16.028998   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:16.029002   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:16.032289   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:16.528776   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:16.528800   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:16.528812   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:16.528820   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:16.532317   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:17.029247   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:17.029273   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:17.029283   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:17.029289   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:17.032437   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:17.032978   22121 node_ready.go:53] node "ha-244475-m03" has status "Ready":"False"
	I0916 10:41:17.528914   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:17.528940   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:17.528951   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:17.528957   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:17.535109   22121 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0916 10:41:18.028865   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:18.028886   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:18.028894   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:18.028897   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:18.032181   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:18.529133   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:18.529160   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:18.529172   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:18.529177   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:18.532540   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:19.028551   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:19.028571   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:19.028579   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:19.028584   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:19.031968   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:19.529456   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:19.529479   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:19.529487   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:19.529492   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:19.535044   22121 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0916 10:41:19.535889   22121 node_ready.go:53] node "ha-244475-m03" has status "Ready":"False"
	I0916 10:41:20.029083   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:20.029103   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:20.029111   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:20.029114   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:20.032351   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:20.529324   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:20.529353   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:20.529370   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:20.529376   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:20.532351   22121 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:41:20.532942   22121 node_ready.go:49] node "ha-244475-m03" has status "Ready":"True"
	I0916 10:41:20.532967   22121 node_ready.go:38] duration metric: took 17.004653976s for node "ha-244475-m03" to be "Ready" ...
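
The ~17s wait above is a simple poll: fetch the Node object roughly every half second and stop once its Ready condition reports True. A minimal client-go sketch of the same check, assuming a kubeconfig at a hypothetical path rather than the profile's generated client config:

// Sketch: poll a node's Ready condition with client-go until it becomes True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	for {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-244475-m03", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // the log above polls on roughly this interval
	}
}
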
	I0916 10:41:20.532978   22121 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:41:20.533057   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I0916 10:41:20.533074   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:20.533084   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:20.533092   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:20.541611   22121 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0916 10:41:20.549215   22121 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-lzrg2" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:20.549300   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-lzrg2
	I0916 10:41:20.549309   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:20.549316   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:20.549321   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:20.551990   22121 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:41:20.552792   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475
	I0916 10:41:20.552807   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:20.552814   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:20.552819   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:20.555246   22121 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:41:20.556034   22121 pod_ready.go:93] pod "coredns-7c65d6cfc9-lzrg2" in "kube-system" namespace has status "Ready":"True"
	I0916 10:41:20.556051   22121 pod_ready.go:82] duration metric: took 6.810223ms for pod "coredns-7c65d6cfc9-lzrg2" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:20.556059   22121 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-m8fd7" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:20.556109   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-m8fd7
	I0916 10:41:20.556118   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:20.556124   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:20.556129   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:20.558530   22121 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:41:20.559188   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475
	I0916 10:41:20.559202   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:20.559209   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:20.559212   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:20.561354   22121 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:41:20.561890   22121 pod_ready.go:93] pod "coredns-7c65d6cfc9-m8fd7" in "kube-system" namespace has status "Ready":"True"
	I0916 10:41:20.561910   22121 pod_ready.go:82] duration metric: took 5.84501ms for pod "coredns-7c65d6cfc9-m8fd7" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:20.561921   22121 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-244475" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:20.561982   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/etcd-ha-244475
	I0916 10:41:20.561993   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:20.561999   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:20.562003   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:20.564349   22121 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:41:20.565030   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475
	I0916 10:41:20.565042   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:20.565047   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:20.565051   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:20.567656   22121 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:41:20.568101   22121 pod_ready.go:93] pod "etcd-ha-244475" in "kube-system" namespace has status "Ready":"True"
	I0916 10:41:20.568115   22121 pod_ready.go:82] duration metric: took 6.18818ms for pod "etcd-ha-244475" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:20.568126   22121 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-244475-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:20.568174   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/etcd-ha-244475-m02
	I0916 10:41:20.568183   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:20.568191   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:20.568196   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:20.571051   22121 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:41:20.572108   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:41:20.572122   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:20.572131   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:20.572136   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:20.574514   22121 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:41:20.574938   22121 pod_ready.go:93] pod "etcd-ha-244475-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:41:20.574958   22121 pod_ready.go:82] duration metric: took 6.825238ms for pod "etcd-ha-244475-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:20.574968   22121 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-244475-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:20.730339   22121 request.go:632] Waited for 155.28324ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/etcd-ha-244475-m03
	I0916 10:41:20.730409   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/etcd-ha-244475-m03
	I0916 10:41:20.730416   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:20.730426   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:20.730434   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:20.733792   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:20.929868   22121 request.go:632] Waited for 195.353662ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:20.929934   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:20.929941   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:20.929951   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:20.929956   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:20.933157   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:20.933861   22121 pod_ready.go:93] pod "etcd-ha-244475-m03" in "kube-system" namespace has status "Ready":"True"
	I0916 10:41:20.933879   22121 pod_ready.go:82] duration metric: took 358.903224ms for pod "etcd-ha-244475-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:20.933899   22121 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-244475" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:21.130218   22121 request.go:632] Waited for 196.250965ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-244475
	I0916 10:41:21.130279   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-244475
	I0916 10:41:21.130287   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:21.130297   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:21.130307   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:21.133197   22121 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:41:21.330203   22121 request.go:632] Waited for 196.304187ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-244475
	I0916 10:41:21.330250   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475
	I0916 10:41:21.330254   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:21.330262   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:21.330265   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:21.333309   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:21.333928   22121 pod_ready.go:93] pod "kube-apiserver-ha-244475" in "kube-system" namespace has status "Ready":"True"
	I0916 10:41:21.333946   22121 pod_ready.go:82] duration metric: took 400.041237ms for pod "kube-apiserver-ha-244475" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:21.333957   22121 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-244475-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:21.530002   22121 request.go:632] Waited for 195.934393ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-244475-m02
	I0916 10:41:21.530071   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-244475-m02
	I0916 10:41:21.530079   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:21.530089   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:21.530097   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:21.540600   22121 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0916 10:41:21.729634   22121 request.go:632] Waited for 188.35156ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:41:21.729700   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:41:21.729712   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:21.729727   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:21.729736   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:21.733214   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:21.733789   22121 pod_ready.go:93] pod "kube-apiserver-ha-244475-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:41:21.733804   22121 pod_ready.go:82] duration metric: took 399.837781ms for pod "kube-apiserver-ha-244475-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:21.733813   22121 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-244475-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:21.930001   22121 request.go:632] Waited for 196.125954ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-244475-m03
	I0916 10:41:21.930071   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-244475-m03
	I0916 10:41:21.930080   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:21.930088   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:21.930093   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:21.933477   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:22.129642   22121 request.go:632] Waited for 195.348961ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:22.129729   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:22.129740   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:22.129750   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:22.129758   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:22.137037   22121 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0916 10:41:22.137643   22121 pod_ready.go:93] pod "kube-apiserver-ha-244475-m03" in "kube-system" namespace has status "Ready":"True"
	I0916 10:41:22.137664   22121 pod_ready.go:82] duration metric: took 403.843897ms for pod "kube-apiserver-ha-244475-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:22.137678   22121 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-244475" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:22.329532   22121 request.go:632] Waited for 191.776666ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-244475
	I0916 10:41:22.329621   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-244475
	I0916 10:41:22.329633   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:22.329640   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:22.329645   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:22.333345   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:22.530006   22121 request.go:632] Waited for 195.956457ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-244475
	I0916 10:41:22.530079   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475
	I0916 10:41:22.530085   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:22.530093   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:22.530101   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:22.533113   22121 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:41:22.533700   22121 pod_ready.go:93] pod "kube-controller-manager-ha-244475" in "kube-system" namespace has status "Ready":"True"
	I0916 10:41:22.533718   22121 pod_ready.go:82] duration metric: took 396.032752ms for pod "kube-controller-manager-ha-244475" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:22.533728   22121 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-244475-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:22.729791   22121 request.go:632] Waited for 195.998005ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-244475-m02
	I0916 10:41:22.729857   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-244475-m02
	I0916 10:41:22.729864   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:22.729874   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:22.729910   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:22.734399   22121 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:41:22.929502   22121 request.go:632] Waited for 194.264694ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:41:22.929574   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:41:22.929582   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:22.929591   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:22.929595   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:22.932871   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:22.934055   22121 pod_ready.go:93] pod "kube-controller-manager-ha-244475-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:41:22.934073   22121 pod_ready.go:82] duration metric: took 400.337784ms for pod "kube-controller-manager-ha-244475-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:22.934082   22121 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-244475-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:23.130261   22121 request.go:632] Waited for 196.120217ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-244475-m03
	I0916 10:41:23.130357   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-244475-m03
	I0916 10:41:23.130367   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:23.130375   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:23.130380   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:23.134472   22121 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:41:23.329661   22121 request.go:632] Waited for 194.357343ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:23.329723   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:23.329733   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:23.329747   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:23.329754   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:23.333236   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:23.333984   22121 pod_ready.go:93] pod "kube-controller-manager-ha-244475-m03" in "kube-system" namespace has status "Ready":"True"
	I0916 10:41:23.334009   22121 pod_ready.go:82] duration metric: took 399.919835ms for pod "kube-controller-manager-ha-244475-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:23.334026   22121 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-crttt" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:23.530101   22121 request.go:632] Waited for 195.996765ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-crttt
	I0916 10:41:23.530191   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-crttt
	I0916 10:41:23.530198   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:23.530208   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:23.530219   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:23.535501   22121 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0916 10:41:23.729541   22121 request.go:632] Waited for 193.385559ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-244475
	I0916 10:41:23.729601   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475
	I0916 10:41:23.729606   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:23.729613   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:23.729627   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:23.733179   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:23.733969   22121 pod_ready.go:93] pod "kube-proxy-crttt" in "kube-system" namespace has status "Ready":"True"
	I0916 10:41:23.733986   22121 pod_ready.go:82] duration metric: took 399.951283ms for pod "kube-proxy-crttt" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:23.733995   22121 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-g5v5l" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:23.929754   22121 request.go:632] Waited for 195.67228ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g5v5l
	I0916 10:41:23.929814   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g5v5l
	I0916 10:41:23.929819   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:23.929826   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:23.929831   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:23.933527   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:24.129706   22121 request.go:632] Waited for 195.381059ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:24.129770   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:24.129776   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:24.129786   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:24.129794   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:24.133530   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:24.134153   22121 pod_ready.go:93] pod "kube-proxy-g5v5l" in "kube-system" namespace has status "Ready":"True"
	I0916 10:41:24.134171   22121 pod_ready.go:82] duration metric: took 400.17004ms for pod "kube-proxy-g5v5l" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:24.134180   22121 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-t454b" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:24.330300   22121 request.go:632] Waited for 196.037638ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-t454b
	I0916 10:41:24.330367   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-t454b
	I0916 10:41:24.330373   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:24.330384   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:24.330391   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:24.334038   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:24.530069   22121 request.go:632] Waited for 195.337849ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:41:24.530145   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:41:24.530153   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:24.530160   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:24.530165   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:24.536414   22121 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0916 10:41:24.536846   22121 pod_ready.go:93] pod "kube-proxy-t454b" in "kube-system" namespace has status "Ready":"True"
	I0916 10:41:24.536864   22121 pod_ready.go:82] duration metric: took 402.676992ms for pod "kube-proxy-t454b" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:24.536876   22121 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-244475" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:24.730273   22121 request.go:632] Waited for 193.335182ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-244475
	I0916 10:41:24.730344   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-244475
	I0916 10:41:24.730349   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:24.730357   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:24.730365   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:24.733832   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:24.930161   22121 request.go:632] Waited for 195.330427ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-244475
	I0916 10:41:24.930225   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475
	I0916 10:41:24.930241   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:24.930250   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:24.930259   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:24.933553   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:24.934318   22121 pod_ready.go:93] pod "kube-scheduler-ha-244475" in "kube-system" namespace has status "Ready":"True"
	I0916 10:41:24.934335   22121 pod_ready.go:82] duration metric: took 397.451613ms for pod "kube-scheduler-ha-244475" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:24.934344   22121 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-244475-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:25.129510   22121 request.go:632] Waited for 195.10302ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-244475-m02
	I0916 10:41:25.129579   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-244475-m02
	I0916 10:41:25.129587   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:25.129595   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:25.129600   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:25.133734   22121 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:41:25.329835   22121 request.go:632] Waited for 195.396951ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:41:25.329904   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m02
	I0916 10:41:25.329912   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:25.329922   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:25.329928   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:25.333482   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:25.334323   22121 pod_ready.go:93] pod "kube-scheduler-ha-244475-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:41:25.334342   22121 pod_ready.go:82] duration metric: took 399.990647ms for pod "kube-scheduler-ha-244475-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:25.334355   22121 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-244475-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:25.529377   22121 request.go:632] Waited for 194.946933ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-244475-m03
	I0916 10:41:25.529470   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-244475-m03
	I0916 10:41:25.529482   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:25.529493   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:25.529501   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:25.534845   22121 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0916 10:41:25.729925   22121 request.go:632] Waited for 194.359506ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:25.729987   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-244475-m03
	I0916 10:41:25.729993   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:25.730000   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:25.730005   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:25.733288   22121 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:41:25.734036   22121 pod_ready.go:93] pod "kube-scheduler-ha-244475-m03" in "kube-system" namespace has status "Ready":"True"
	I0916 10:41:25.734056   22121 pod_ready.go:82] duration metric: took 399.693479ms for pod "kube-scheduler-ha-244475-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:25.734069   22121 pod_ready.go:39] duration metric: took 5.201079342s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:41:25.734086   22121 api_server.go:52] waiting for apiserver process to appear ...
	I0916 10:41:25.734140   22121 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:41:25.749396   22121 api_server.go:72] duration metric: took 22.454672004s to wait for apiserver process to appear ...
	I0916 10:41:25.749425   22121 api_server.go:88] waiting for apiserver healthz status ...
	I0916 10:41:25.749447   22121 api_server.go:253] Checking apiserver healthz at https://192.168.39.19:8443/healthz ...
	I0916 10:41:25.753676   22121 api_server.go:279] https://192.168.39.19:8443/healthz returned 200:
	ok
	I0916 10:41:25.753738   22121 round_trippers.go:463] GET https://192.168.39.19:8443/version
	I0916 10:41:25.753749   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:25.753760   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:25.753769   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:25.755474   22121 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:41:25.755537   22121 api_server.go:141] control plane version: v1.31.1
	I0916 10:41:25.755552   22121 api_server.go:131] duration metric: took 6.119804ms to wait for apiserver health ...
	I0916 10:41:25.755561   22121 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 10:41:25.929957   22121 request.go:632] Waited for 174.326859ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I0916 10:41:25.930008   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I0916 10:41:25.930013   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:25.930020   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:25.930029   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:25.936785   22121 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0916 10:41:25.943643   22121 system_pods.go:59] 24 kube-system pods found
	I0916 10:41:25.943669   22121 system_pods.go:61] "coredns-7c65d6cfc9-lzrg2" [51962d07-f38a-4db3-86ee-af3d954dbec6] Running
	I0916 10:41:25.943674   22121 system_pods.go:61] "coredns-7c65d6cfc9-m8fd7" [fc549709-ddc0-4684-b377-46d33ef8f03d] Running
	I0916 10:41:25.943678   22121 system_pods.go:61] "etcd-ha-244475" [08595572-facf-419a-93e3-9b0ea1938f08] Running
	I0916 10:41:25.943682   22121 system_pods.go:61] "etcd-ha-244475-m02" [d58c0d1e-ef12-4e50-b4d8-86f60754b93d] Running
	I0916 10:41:25.943685   22121 system_pods.go:61] "etcd-ha-244475-m03" [e741d8c7-f12c-4fa1-b3cc-582043ca312d] Running
	I0916 10:41:25.943688   22121 system_pods.go:61] "kindnet-7v2cl" [764ade4d-cbcd-42b8-9d68-b4ed502de9eb] Running
	I0916 10:41:25.943691   22121 system_pods.go:61] "kindnet-rzwwj" [ffe109a7-d477-4b8a-ab62-4e4ceec1b4ed] Running
	I0916 10:41:25.943695   22121 system_pods.go:61] "kindnet-xvp82" [3140a3e7-ac3b-4882-b150-20a313e2f20c] Running
	I0916 10:41:25.943698   22121 system_pods.go:61] "kube-apiserver-ha-244475" [b0ea2226-42de-4488-b8fb-73a6828320fc] Running
	I0916 10:41:25.943701   22121 system_pods.go:61] "kube-apiserver-ha-244475-m02" [1e384f04-33c2-49f1-afc0-48807202a04c] Running
	I0916 10:41:25.943704   22121 system_pods.go:61] "kube-apiserver-ha-244475-m03" [469c5743-509f-4c1c-b46e-fa3e6e79a673] Running
	I0916 10:41:25.943707   22121 system_pods.go:61] "kube-controller-manager-ha-244475" [98883403-0a22-486c-aa3a-a3720a5cbfb7] Running
	I0916 10:41:25.943710   22121 system_pods.go:61] "kube-controller-manager-ha-244475-m02" [9e148533-4562-426b-9e8b-3aead772739b] Running
	I0916 10:41:25.943713   22121 system_pods.go:61] "kube-controller-manager-ha-244475-m03" [1054e7df-9598-41de-a412-f18d3ffff1cb] Running
	I0916 10:41:25.943716   22121 system_pods.go:61] "kube-proxy-crttt" [0c8cad04-2c64-42f9-85e2-5e4fbfe7961d] Running
	I0916 10:41:25.943719   22121 system_pods.go:61] "kube-proxy-g5v5l" [102f8d6f-4cb4-4c59-ae99-acccabb9fb8e] Running
	I0916 10:41:25.943723   22121 system_pods.go:61] "kube-proxy-t454b" [49b7dda6-9a09-4b7d-8adc-568f2fa10ad6] Running
	I0916 10:41:25.943726   22121 system_pods.go:61] "kube-scheduler-ha-244475" [c9527c08-f10b-4d85-9f72-0d0893297b14] Running
	I0916 10:41:25.943729   22121 system_pods.go:61] "kube-scheduler-ha-244475-m02" [bf332de1-6793-4485-9d93-38368d86c6a5] Running
	I0916 10:41:25.943731   22121 system_pods.go:61] "kube-scheduler-ha-244475-m03" [90b5bffb-165c-4620-b90a-e9f1d3f4c323] Running
	I0916 10:41:25.943734   22121 system_pods.go:61] "kube-vip-ha-244475" [94b4d383-a0e8-4686-b108-923c0235f371] Running
	I0916 10:41:25.943737   22121 system_pods.go:61] "kube-vip-ha-244475-m02" [6f0a6023-be76-458b-9344-ff51083a217e] Running
	I0916 10:41:25.943740   22121 system_pods.go:61] "kube-vip-ha-244475-m03" [b507cf83-f056-4ab3-b276-4f477ee77747] Running
	I0916 10:41:25.943743   22121 system_pods.go:61] "storage-provisioner" [2e1264f7-2197-4821-8238-82fac849b145] Running
	I0916 10:41:25.943748   22121 system_pods.go:74] duration metric: took 188.180661ms to wait for pod list to return data ...
	I0916 10:41:25.943758   22121 default_sa.go:34] waiting for default service account to be created ...
	I0916 10:41:26.130184   22121 request.go:632] Waited for 186.361022ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/default/serviceaccounts
	I0916 10:41:26.130240   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/default/serviceaccounts
	I0916 10:41:26.130247   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:26.130256   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:26.130263   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:26.136218   22121 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0916 10:41:26.136355   22121 default_sa.go:45] found service account: "default"
	I0916 10:41:26.136373   22121 default_sa.go:55] duration metric: took 192.608031ms for default service account to be created ...
	I0916 10:41:26.136384   22121 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 10:41:26.329960   22121 request.go:632] Waited for 193.503475ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I0916 10:41:26.330035   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I0916 10:41:26.330046   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:26.330056   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:26.330062   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:26.336265   22121 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0916 10:41:26.343431   22121 system_pods.go:86] 24 kube-system pods found
	I0916 10:41:26.343459   22121 system_pods.go:89] "coredns-7c65d6cfc9-lzrg2" [51962d07-f38a-4db3-86ee-af3d954dbec6] Running
	I0916 10:41:26.343464   22121 system_pods.go:89] "coredns-7c65d6cfc9-m8fd7" [fc549709-ddc0-4684-b377-46d33ef8f03d] Running
	I0916 10:41:26.343468   22121 system_pods.go:89] "etcd-ha-244475" [08595572-facf-419a-93e3-9b0ea1938f08] Running
	I0916 10:41:26.343471   22121 system_pods.go:89] "etcd-ha-244475-m02" [d58c0d1e-ef12-4e50-b4d8-86f60754b93d] Running
	I0916 10:41:26.343474   22121 system_pods.go:89] "etcd-ha-244475-m03" [e741d8c7-f12c-4fa1-b3cc-582043ca312d] Running
	I0916 10:41:26.343477   22121 system_pods.go:89] "kindnet-7v2cl" [764ade4d-cbcd-42b8-9d68-b4ed502de9eb] Running
	I0916 10:41:26.343481   22121 system_pods.go:89] "kindnet-rzwwj" [ffe109a7-d477-4b8a-ab62-4e4ceec1b4ed] Running
	I0916 10:41:26.343485   22121 system_pods.go:89] "kindnet-xvp82" [3140a3e7-ac3b-4882-b150-20a313e2f20c] Running
	I0916 10:41:26.343490   22121 system_pods.go:89] "kube-apiserver-ha-244475" [b0ea2226-42de-4488-b8fb-73a6828320fc] Running
	I0916 10:41:26.343495   22121 system_pods.go:89] "kube-apiserver-ha-244475-m02" [1e384f04-33c2-49f1-afc0-48807202a04c] Running
	I0916 10:41:26.343501   22121 system_pods.go:89] "kube-apiserver-ha-244475-m03" [469c5743-509f-4c1c-b46e-fa3e6e79a673] Running
	I0916 10:41:26.343509   22121 system_pods.go:89] "kube-controller-manager-ha-244475" [98883403-0a22-486c-aa3a-a3720a5cbfb7] Running
	I0916 10:41:26.343515   22121 system_pods.go:89] "kube-controller-manager-ha-244475-m02" [9e148533-4562-426b-9e8b-3aead772739b] Running
	I0916 10:41:26.343524   22121 system_pods.go:89] "kube-controller-manager-ha-244475-m03" [1054e7df-9598-41de-a412-f18d3ffff1cb] Running
	I0916 10:41:26.343530   22121 system_pods.go:89] "kube-proxy-crttt" [0c8cad04-2c64-42f9-85e2-5e4fbfe7961d] Running
	I0916 10:41:26.343536   22121 system_pods.go:89] "kube-proxy-g5v5l" [102f8d6f-4cb4-4c59-ae99-acccabb9fb8e] Running
	I0916 10:41:26.343548   22121 system_pods.go:89] "kube-proxy-t454b" [49b7dda6-9a09-4b7d-8adc-568f2fa10ad6] Running
	I0916 10:41:26.343554   22121 system_pods.go:89] "kube-scheduler-ha-244475" [c9527c08-f10b-4d85-9f72-0d0893297b14] Running
	I0916 10:41:26.343558   22121 system_pods.go:89] "kube-scheduler-ha-244475-m02" [bf332de1-6793-4485-9d93-38368d86c6a5] Running
	I0916 10:41:26.343563   22121 system_pods.go:89] "kube-scheduler-ha-244475-m03" [90b5bffb-165c-4620-b90a-e9f1d3f4c323] Running
	I0916 10:41:26.343567   22121 system_pods.go:89] "kube-vip-ha-244475" [94b4d383-a0e8-4686-b108-923c0235f371] Running
	I0916 10:41:26.343570   22121 system_pods.go:89] "kube-vip-ha-244475-m02" [6f0a6023-be76-458b-9344-ff51083a217e] Running
	I0916 10:41:26.343573   22121 system_pods.go:89] "kube-vip-ha-244475-m03" [b507cf83-f056-4ab3-b276-4f477ee77747] Running
	I0916 10:41:26.343578   22121 system_pods.go:89] "storage-provisioner" [2e1264f7-2197-4821-8238-82fac849b145] Running
	I0916 10:41:26.343589   22121 system_pods.go:126] duration metric: took 207.195971ms to wait for k8s-apps to be running ...
	I0916 10:41:26.343599   22121 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 10:41:26.343650   22121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:41:26.359495   22121 system_svc.go:56] duration metric: took 15.88709ms WaitForService to wait for kubelet
	I0916 10:41:26.359526   22121 kubeadm.go:582] duration metric: took 23.064804714s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:41:26.359547   22121 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:41:26.529951   22121 request.go:632] Waited for 170.330403ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes
	I0916 10:41:26.530026   22121 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes
	I0916 10:41:26.530033   22121 round_trippers.go:469] Request Headers:
	I0916 10:41:26.530043   22121 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:26.530050   22121 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:26.536030   22121 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0916 10:41:26.537495   22121 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0916 10:41:26.537520   22121 node_conditions.go:123] node cpu capacity is 2
	I0916 10:41:26.537534   22121 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0916 10:41:26.537539   22121 node_conditions.go:123] node cpu capacity is 2
	I0916 10:41:26.537545   22121 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0916 10:41:26.537549   22121 node_conditions.go:123] node cpu capacity is 2
	I0916 10:41:26.537554   22121 node_conditions.go:105] duration metric: took 178.001679ms to run NodePressure ...
	I0916 10:41:26.537572   22121 start.go:241] waiting for startup goroutines ...
	I0916 10:41:26.537599   22121 start.go:255] writing updated cluster config ...
	I0916 10:41:26.538305   22121 ssh_runner.go:195] Run: rm -f paused
	I0916 10:41:26.548959   22121 out.go:177] * Done! kubectl is now configured to use "ha-244475" cluster and "default" namespace by default
	E0916 10:41:26.550066   22121 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
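
	Note on the error above: "fork/exec /usr/local/bin/kubectl: exec format error" typically means the kubectl binary on the runner is not an executable the host kernel can run, most often because it was built for a different architecture. The following is a minimal diagnostic sketch, not part of the minikube test suite; the binary path is taken from the log line above, and the GOARCH-to-ELF mapping covers only the architectures assumed for this job.

	    package main

	    import (
	        "debug/elf"
	        "fmt"
	        "runtime"
	    )

	    func main() {
	        const kubectlPath = "/usr/local/bin/kubectl" // path reported by the failing exec above

	        f, err := elf.Open(kubectlPath)
	        if err != nil {
	            // Not a readable ELF file at all (truncated download, script, non-Linux binary).
	            fmt.Println("open:", err)
	            return
	        }
	        defer f.Close()

	        // ELF machine type we expect for this host's GOARCH (assumed mapping).
	        expected := map[string]elf.Machine{
	            "amd64": elf.EM_X86_64,
	            "386":   elf.EM_386,
	            "arm64": elf.EM_AARCH64,
	        }[runtime.GOARCH]

	        fmt.Printf("kubectl machine=%v, host GOARCH=%s\n", f.Machine, runtime.GOARCH)
	        if f.Machine != expected {
	            fmt.Println("architecture mismatch: replace kubectl with a build for", runtime.GOARCH)
	        }
	    }

	Running `file /usr/local/bin/kubectl` and `uname -m` on the runner would surface the same mismatch without compiling anything.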
	
	
	==> CRI-O <==
	Sep 16 10:46:00 ha-244475 crio[667]: time="2024-09-16 10:46:00.804615718Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f9eeaf62-6790-4ab0-a31f-6850b5f5fbc8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:46:00 ha-244475 crio[667]: time="2024-09-16 10:46:00.804827278Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5c701fcd74abafe9b3ac301067c34d7098459eff6a0dfbe35f14fb698beab51b,PodSandboxId:ed1838f7506b4591d4ce8db6e7b3e3f56562a08f00651212eea0bb2549e4392c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726483289055277109,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-d4m5s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c479ead-4e77-41ca-9e2e-5cd7dc781761,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:034030626ec0248410f35cc75f1a050ff672de87669017788a692eecfb974eb3,PodSandboxId:159730a21bea66d0374bcb65726757074bf5cb44a554b65743cbe055ce68f57d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726483151504105266,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m8fd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc549709-ddc0-4684-b377-46d33ef8f03d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f78c5e4a3a25c53eb3051665aed0aed847ccf5bc052d51eb080f44bbb956465,PodSandboxId:4d8c4f0a29bb7d9b84f11c3329413ddb9100b0afcf2bfdafe7d492249e8d29aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726483151498442305,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lzrg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
51962d07-f38a-4db3-86ee-af3d954dbec6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b16f64da09faea0d2ff3154541845e96ee1e6da2b018e77e618dcf0a3d246a99,PodSandboxId:66086953ec65ff443b277a25da98697cdab5664f13ce0f035b2961dd540a8f99,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726483149914383595,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e1264f7-2197-4821-8238-82fac849b145,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac63170bf5bb3e138ee2902ee9d3245bf86aad0f7cd971de1989c1c8d4b08913,PodSandboxId:9c8ab7a98f7497eec011967ba3c30c63e7f332a19b817633e32195e3f1c7958b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17264831
38080656744,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7v2cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 764ade4d-cbcd-42b8-9d68-b4ed502de9eb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e6d69b26d5c1088c047248a212bd53f5e13828d4e0a7c2156e0ddc88ac76ccf,PodSandboxId:3fbb7c8e9af7172c733fa210b0085f7297ed8632dadffa7ad36fcf99012b9a72,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726483137842379282,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-crttt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c8cad04-2c64-42f9-85e2-5e4fbfe7961d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62c031e0ed0a9dd545dae65ca505f6ee4aa741ac7ab305ffd396ddb0a1faa045,PodSandboxId:f76913fe7302a4fa8d7619af601b5246c7ab7fd3482731bf5f2128c885274602,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726483128784978351,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dcb42d1621bd2afde7f39a79dd541d5,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0223669288e248ff9be2473e59f09af3156f136f2627463151e0bc52191ddfb,PodSandboxId:42a76bc40dc3e2b105fd3eb70534ffb1703abed4e811e57f667c791cb8af94ce,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726483126505887348,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caad457f3675fcf5fa9c2e121ebd3a2a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13162d4bf94f734f5e68305ac96200c2f81bebc9b5ffbbfb3b0862980bc16fa1,PodSandboxId:ec0d4cf0dd9b785181c7ac24b3174a788202f97398df008bd80c06f6e612c16f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726483126417390372,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubern
etes.pod.name: kube-apiserver-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcc439ebdfb1c8eb0ac4d211479d24ca,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:308650af833f609d658eacf1b2b1c8baa29f17b904e878b9f27e62fe4fb823d3,PodSandboxId:693cfec22177da2ee98748363dd5cc765290555de3c0bcc51d452517bd92abaf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726483126350971239,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-244475,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 520edd0e46592c17928a302783a221a2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f16e87fb57b2b25fed520499a45d5fd8f1e01c96a747662b637d0b1cc2e56113,PodSandboxId:fad8ac85cdf54bd87da40cadbda9fd41ab84e1550361b91b5242a7ba9f4ba28b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726483126307755222,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-244475,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0485b752bb66b84c639fb8d5b648be4a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f9eeaf62-6790-4ab0-a31f-6850b5f5fbc8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:46:00 ha-244475 crio[667]: time="2024-09-16 10:46:00.842951585Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5eb60d2f-b4b9-45e0-a8aa-b217aa2214af name=/runtime.v1.RuntimeService/Version
	Sep 16 10:46:00 ha-244475 crio[667]: time="2024-09-16 10:46:00.843037876Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5eb60d2f-b4b9-45e0-a8aa-b217aa2214af name=/runtime.v1.RuntimeService/Version
	Sep 16 10:46:00 ha-244475 crio[667]: time="2024-09-16 10:46:00.844413120Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=17313288-770e-4c45-be1d-8efbed3bdab4 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:46:00 ha-244475 crio[667]: time="2024-09-16 10:46:00.844978906Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483560844954605,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=17313288-770e-4c45-be1d-8efbed3bdab4 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:46:00 ha-244475 crio[667]: time="2024-09-16 10:46:00.846477169Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=af0f56e2-21d2-4e1c-bbe7-e4a7b5b5387a name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:46:00 ha-244475 crio[667]: time="2024-09-16 10:46:00.846593014Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=af0f56e2-21d2-4e1c-bbe7-e4a7b5b5387a name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:46:00 ha-244475 crio[667]: time="2024-09-16 10:46:00.846826424Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5c701fcd74abafe9b3ac301067c34d7098459eff6a0dfbe35f14fb698beab51b,PodSandboxId:ed1838f7506b4591d4ce8db6e7b3e3f56562a08f00651212eea0bb2549e4392c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726483289055277109,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-d4m5s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c479ead-4e77-41ca-9e2e-5cd7dc781761,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:034030626ec0248410f35cc75f1a050ff672de87669017788a692eecfb974eb3,PodSandboxId:159730a21bea66d0374bcb65726757074bf5cb44a554b65743cbe055ce68f57d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726483151504105266,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m8fd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc549709-ddc0-4684-b377-46d33ef8f03d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f78c5e4a3a25c53eb3051665aed0aed847ccf5bc052d51eb080f44bbb956465,PodSandboxId:4d8c4f0a29bb7d9b84f11c3329413ddb9100b0afcf2bfdafe7d492249e8d29aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726483151498442305,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lzrg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
51962d07-f38a-4db3-86ee-af3d954dbec6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b16f64da09faea0d2ff3154541845e96ee1e6da2b018e77e618dcf0a3d246a99,PodSandboxId:66086953ec65ff443b277a25da98697cdab5664f13ce0f035b2961dd540a8f99,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726483149914383595,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e1264f7-2197-4821-8238-82fac849b145,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac63170bf5bb3e138ee2902ee9d3245bf86aad0f7cd971de1989c1c8d4b08913,PodSandboxId:9c8ab7a98f7497eec011967ba3c30c63e7f332a19b817633e32195e3f1c7958b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17264831
38080656744,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7v2cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 764ade4d-cbcd-42b8-9d68-b4ed502de9eb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e6d69b26d5c1088c047248a212bd53f5e13828d4e0a7c2156e0ddc88ac76ccf,PodSandboxId:3fbb7c8e9af7172c733fa210b0085f7297ed8632dadffa7ad36fcf99012b9a72,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726483137842379282,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-crttt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c8cad04-2c64-42f9-85e2-5e4fbfe7961d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62c031e0ed0a9dd545dae65ca505f6ee4aa741ac7ab305ffd396ddb0a1faa045,PodSandboxId:f76913fe7302a4fa8d7619af601b5246c7ab7fd3482731bf5f2128c885274602,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726483128784978351,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dcb42d1621bd2afde7f39a79dd541d5,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0223669288e248ff9be2473e59f09af3156f136f2627463151e0bc52191ddfb,PodSandboxId:42a76bc40dc3e2b105fd3eb70534ffb1703abed4e811e57f667c791cb8af94ce,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726483126505887348,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caad457f3675fcf5fa9c2e121ebd3a2a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13162d4bf94f734f5e68305ac96200c2f81bebc9b5ffbbfb3b0862980bc16fa1,PodSandboxId:ec0d4cf0dd9b785181c7ac24b3174a788202f97398df008bd80c06f6e612c16f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726483126417390372,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubern
etes.pod.name: kube-apiserver-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcc439ebdfb1c8eb0ac4d211479d24ca,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:308650af833f609d658eacf1b2b1c8baa29f17b904e878b9f27e62fe4fb823d3,PodSandboxId:693cfec22177da2ee98748363dd5cc765290555de3c0bcc51d452517bd92abaf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726483126350971239,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-244475,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 520edd0e46592c17928a302783a221a2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f16e87fb57b2b25fed520499a45d5fd8f1e01c96a747662b637d0b1cc2e56113,PodSandboxId:fad8ac85cdf54bd87da40cadbda9fd41ab84e1550361b91b5242a7ba9f4ba28b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726483126307755222,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-244475,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0485b752bb66b84c639fb8d5b648be4a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=af0f56e2-21d2-4e1c-bbe7-e4a7b5b5387a name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:46:00 ha-244475 crio[667]: time="2024-09-16 10:46:00.888808571Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=02b17458-61c8-4e42-ab4d-7bb5871c8c81 name=/runtime.v1.RuntimeService/Version
	Sep 16 10:46:00 ha-244475 crio[667]: time="2024-09-16 10:46:00.888879238Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=02b17458-61c8-4e42-ab4d-7bb5871c8c81 name=/runtime.v1.RuntimeService/Version
	Sep 16 10:46:00 ha-244475 crio[667]: time="2024-09-16 10:46:00.890691380Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=76d45c11-b28e-49d7-8ffa-9c34735b952b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:46:00 ha-244475 crio[667]: time="2024-09-16 10:46:00.891492820Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483560891466297,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=76d45c11-b28e-49d7-8ffa-9c34735b952b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:46:00 ha-244475 crio[667]: time="2024-09-16 10:46:00.892131611Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5aaaf9f1-ca61-4541-8d28-dfafde680eba name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:46:00 ha-244475 crio[667]: time="2024-09-16 10:46:00.892184782Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5aaaf9f1-ca61-4541-8d28-dfafde680eba name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:46:00 ha-244475 crio[667]: time="2024-09-16 10:46:00.892407106Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5c701fcd74abafe9b3ac301067c34d7098459eff6a0dfbe35f14fb698beab51b,PodSandboxId:ed1838f7506b4591d4ce8db6e7b3e3f56562a08f00651212eea0bb2549e4392c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726483289055277109,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-d4m5s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c479ead-4e77-41ca-9e2e-5cd7dc781761,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:034030626ec0248410f35cc75f1a050ff672de87669017788a692eecfb974eb3,PodSandboxId:159730a21bea66d0374bcb65726757074bf5cb44a554b65743cbe055ce68f57d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726483151504105266,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m8fd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc549709-ddc0-4684-b377-46d33ef8f03d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f78c5e4a3a25c53eb3051665aed0aed847ccf5bc052d51eb080f44bbb956465,PodSandboxId:4d8c4f0a29bb7d9b84f11c3329413ddb9100b0afcf2bfdafe7d492249e8d29aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726483151498442305,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lzrg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
51962d07-f38a-4db3-86ee-af3d954dbec6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b16f64da09faea0d2ff3154541845e96ee1e6da2b018e77e618dcf0a3d246a99,PodSandboxId:66086953ec65ff443b277a25da98697cdab5664f13ce0f035b2961dd540a8f99,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726483149914383595,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e1264f7-2197-4821-8238-82fac849b145,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac63170bf5bb3e138ee2902ee9d3245bf86aad0f7cd971de1989c1c8d4b08913,PodSandboxId:9c8ab7a98f7497eec011967ba3c30c63e7f332a19b817633e32195e3f1c7958b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17264831
38080656744,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7v2cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 764ade4d-cbcd-42b8-9d68-b4ed502de9eb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e6d69b26d5c1088c047248a212bd53f5e13828d4e0a7c2156e0ddc88ac76ccf,PodSandboxId:3fbb7c8e9af7172c733fa210b0085f7297ed8632dadffa7ad36fcf99012b9a72,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726483137842379282,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-crttt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c8cad04-2c64-42f9-85e2-5e4fbfe7961d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62c031e0ed0a9dd545dae65ca505f6ee4aa741ac7ab305ffd396ddb0a1faa045,PodSandboxId:f76913fe7302a4fa8d7619af601b5246c7ab7fd3482731bf5f2128c885274602,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726483128784978351,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dcb42d1621bd2afde7f39a79dd541d5,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0223669288e248ff9be2473e59f09af3156f136f2627463151e0bc52191ddfb,PodSandboxId:42a76bc40dc3e2b105fd3eb70534ffb1703abed4e811e57f667c791cb8af94ce,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726483126505887348,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caad457f3675fcf5fa9c2e121ebd3a2a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13162d4bf94f734f5e68305ac96200c2f81bebc9b5ffbbfb3b0862980bc16fa1,PodSandboxId:ec0d4cf0dd9b785181c7ac24b3174a788202f97398df008bd80c06f6e612c16f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726483126417390372,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubern
etes.pod.name: kube-apiserver-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcc439ebdfb1c8eb0ac4d211479d24ca,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:308650af833f609d658eacf1b2b1c8baa29f17b904e878b9f27e62fe4fb823d3,PodSandboxId:693cfec22177da2ee98748363dd5cc765290555de3c0bcc51d452517bd92abaf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726483126350971239,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-244475,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 520edd0e46592c17928a302783a221a2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f16e87fb57b2b25fed520499a45d5fd8f1e01c96a747662b637d0b1cc2e56113,PodSandboxId:fad8ac85cdf54bd87da40cadbda9fd41ab84e1550361b91b5242a7ba9f4ba28b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726483126307755222,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-244475,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0485b752bb66b84c639fb8d5b648be4a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5aaaf9f1-ca61-4541-8d28-dfafde680eba name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:46:00 ha-244475 crio[667]: time="2024-09-16 10:46:00.931276981Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=21ab9b4e-c3e0-483b-b3b3-5998db80607a name=/runtime.v1.RuntimeService/Version
	Sep 16 10:46:00 ha-244475 crio[667]: time="2024-09-16 10:46:00.931356242Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=21ab9b4e-c3e0-483b-b3b3-5998db80607a name=/runtime.v1.RuntimeService/Version
	Sep 16 10:46:00 ha-244475 crio[667]: time="2024-09-16 10:46:00.933396579Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=240fb195-c63b-4dd9-91c3-94982e1462fe name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:46:00 ha-244475 crio[667]: time="2024-09-16 10:46:00.933937325Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483560933913252,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=240fb195-c63b-4dd9-91c3-94982e1462fe name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:46:00 ha-244475 crio[667]: time="2024-09-16 10:46:00.934737388Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9913bac1-0dc5-4107-8b57-60a2403ffb29 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:46:00 ha-244475 crio[667]: time="2024-09-16 10:46:00.934789524Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9913bac1-0dc5-4107-8b57-60a2403ffb29 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:46:00 ha-244475 crio[667]: time="2024-09-16 10:46:00.935001011Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5c701fcd74abafe9b3ac301067c34d7098459eff6a0dfbe35f14fb698beab51b,PodSandboxId:ed1838f7506b4591d4ce8db6e7b3e3f56562a08f00651212eea0bb2549e4392c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726483289055277109,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-d4m5s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c479ead-4e77-41ca-9e2e-5cd7dc781761,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:034030626ec0248410f35cc75f1a050ff672de87669017788a692eecfb974eb3,PodSandboxId:159730a21bea66d0374bcb65726757074bf5cb44a554b65743cbe055ce68f57d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726483151504105266,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m8fd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc549709-ddc0-4684-b377-46d33ef8f03d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f78c5e4a3a25c53eb3051665aed0aed847ccf5bc052d51eb080f44bbb956465,PodSandboxId:4d8c4f0a29bb7d9b84f11c3329413ddb9100b0afcf2bfdafe7d492249e8d29aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726483151498442305,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lzrg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
51962d07-f38a-4db3-86ee-af3d954dbec6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b16f64da09faea0d2ff3154541845e96ee1e6da2b018e77e618dcf0a3d246a99,PodSandboxId:66086953ec65ff443b277a25da98697cdab5664f13ce0f035b2961dd540a8f99,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726483149914383595,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e1264f7-2197-4821-8238-82fac849b145,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac63170bf5bb3e138ee2902ee9d3245bf86aad0f7cd971de1989c1c8d4b08913,PodSandboxId:9c8ab7a98f7497eec011967ba3c30c63e7f332a19b817633e32195e3f1c7958b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17264831
38080656744,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7v2cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 764ade4d-cbcd-42b8-9d68-b4ed502de9eb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e6d69b26d5c1088c047248a212bd53f5e13828d4e0a7c2156e0ddc88ac76ccf,PodSandboxId:3fbb7c8e9af7172c733fa210b0085f7297ed8632dadffa7ad36fcf99012b9a72,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726483137842379282,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-crttt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c8cad04-2c64-42f9-85e2-5e4fbfe7961d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62c031e0ed0a9dd545dae65ca505f6ee4aa741ac7ab305ffd396ddb0a1faa045,PodSandboxId:f76913fe7302a4fa8d7619af601b5246c7ab7fd3482731bf5f2128c885274602,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726483128784978351,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dcb42d1621bd2afde7f39a79dd541d5,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0223669288e248ff9be2473e59f09af3156f136f2627463151e0bc52191ddfb,PodSandboxId:42a76bc40dc3e2b105fd3eb70534ffb1703abed4e811e57f667c791cb8af94ce,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726483126505887348,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caad457f3675fcf5fa9c2e121ebd3a2a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13162d4bf94f734f5e68305ac96200c2f81bebc9b5ffbbfb3b0862980bc16fa1,PodSandboxId:ec0d4cf0dd9b785181c7ac24b3174a788202f97398df008bd80c06f6e612c16f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726483126417390372,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubern
etes.pod.name: kube-apiserver-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcc439ebdfb1c8eb0ac4d211479d24ca,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:308650af833f609d658eacf1b2b1c8baa29f17b904e878b9f27e62fe4fb823d3,PodSandboxId:693cfec22177da2ee98748363dd5cc765290555de3c0bcc51d452517bd92abaf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726483126350971239,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-244475,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 520edd0e46592c17928a302783a221a2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f16e87fb57b2b25fed520499a45d5fd8f1e01c96a747662b637d0b1cc2e56113,PodSandboxId:fad8ac85cdf54bd87da40cadbda9fd41ab84e1550361b91b5242a7ba9f4ba28b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726483126307755222,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-244475,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0485b752bb66b84c639fb8d5b648be4a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9913bac1-0dc5-4107-8b57-60a2403ffb29 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:46:00 ha-244475 crio[667]: time="2024-09-16 10:46:00.948193880Z" level=debug msg="Request: &VersionRequest{Version:0.1.0,}" file="otel-collector/interceptors.go:62" id=4596e294-0634-4872-bb01-20800bce5796 name=/runtime.v1.RuntimeService/Version
	Sep 16 10:46:00 ha-244475 crio[667]: time="2024-09-16 10:46:00.948302941Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4596e294-0634-4872-bb01-20800bce5796 name=/runtime.v1.RuntimeService/Version
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5c701fcd74aba       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   ed1838f7506b4       busybox-7dff88458-d4m5s
	034030626ec02       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   159730a21bea6       coredns-7c65d6cfc9-m8fd7
	7f78c5e4a3a25       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   4d8c4f0a29bb7       coredns-7c65d6cfc9-lzrg2
	b16f64da09fae       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   66086953ec65f       storage-provisioner
	ac63170bf5bb3       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      7 minutes ago       Running             kindnet-cni               0                   9c8ab7a98f749       kindnet-7v2cl
	6e6d69b26d5c1       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      7 minutes ago       Running             kube-proxy                0                   3fbb7c8e9af71       kube-proxy-crttt
	62c031e0ed0a9       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Running             kube-vip                  0                   f76913fe7302a       kube-vip-ha-244475
	a0223669288e2       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      7 minutes ago       Running             kube-scheduler            0                   42a76bc40dc3e       kube-scheduler-ha-244475
	13162d4bf94f7       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      7 minutes ago       Running             kube-apiserver            0                   ec0d4cf0dd9b7       kube-apiserver-ha-244475
	308650af833f6       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      7 minutes ago       Running             etcd                      0                   693cfec22177d       etcd-ha-244475
	f16e87fb57b2b       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      7 minutes ago       Running             kube-controller-manager   0                   fad8ac85cdf54       kube-controller-manager-ha-244475
	
	
	==> coredns [034030626ec0248410f35cc75f1a050ff672de87669017788a692eecfb974eb3] <==
	[INFO] 10.244.2.2:43047 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.055244509s
	[INFO] 10.244.2.2:43779 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000285925s
	[INFO] 10.244.2.2:49571 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000283044s
	[INFO] 10.244.2.2:57761 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004222785s
	[INFO] 10.244.2.2:42931 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000200783s
	[INFO] 10.244.0.4:33694 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014309s
	[INFO] 10.244.0.4:35532 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000107639s
	[INFO] 10.244.0.4:53168 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00009525s
	[INFO] 10.244.0.4:50253 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001250965s
	[INFO] 10.244.0.4:40357 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000089492s
	[INFO] 10.244.1.2:49152 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001985919s
	[INFO] 10.244.1.2:50396 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000132748s
	[INFO] 10.244.2.2:38313 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000951s
	[INFO] 10.244.0.4:43336 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000168268s
	[INFO] 10.244.0.4:44949 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000123895s
	[INFO] 10.244.0.4:52348 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000107748s
	[INFO] 10.244.1.2:36649 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000286063s
	[INFO] 10.244.1.2:42747 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000082265s
	[INFO] 10.244.2.2:45891 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00018425s
	[INFO] 10.244.2.2:53625 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000126302s
	[INFO] 10.244.2.2:44397 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000109098s
	[INFO] 10.244.0.4:39956 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013935s
	[INFO] 10.244.0.4:39139 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00008694s
	[INFO] 10.244.0.4:38933 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000060589s
	[INFO] 10.244.1.2:36849 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000146451s
	
	
	==> coredns [7f78c5e4a3a25c53eb3051665aed0aed847ccf5bc052d51eb080f44bbb956465] <==
	[INFO] 10.244.0.4:51676 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000096142s
	[INFO] 10.244.1.2:33245 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001877876s
	[INFO] 10.244.2.2:52615 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000191836s
	[INFO] 10.244.2.2:49834 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000166519s
	[INFO] 10.244.2.2:39495 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000127494s
	[INFO] 10.244.0.4:37394 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001694487s
	[INFO] 10.244.0.4:36178 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000091958s
	[INFO] 10.244.0.4:33247 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000160731s
	[INFO] 10.244.1.2:52512 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150889s
	[INFO] 10.244.1.2:43450 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000182534s
	[INFO] 10.244.1.2:56403 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000150359s
	[INFO] 10.244.1.2:51246 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001230547s
	[INFO] 10.244.1.2:39220 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090721s
	[INFO] 10.244.1.2:41766 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000155057s
	[INFO] 10.244.2.2:38017 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000153103s
	[INFO] 10.244.2.2:44469 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000099361s
	[INFO] 10.244.2.2:52465 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000086382s
	[INFO] 10.244.0.4:36474 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000117775s
	[INFO] 10.244.1.2:32790 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142151s
	[INFO] 10.244.1.2:39272 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000113629s
	[INFO] 10.244.2.2:43223 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000141566s
	[INFO] 10.244.0.4:36502 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000282073s
	[INFO] 10.244.1.2:60302 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000207499s
	[INFO] 10.244.1.2:49950 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000184993s
	[INFO] 10.244.1.2:54052 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000094371s
	
	
	==> describe nodes <==
	Name:               ha-244475
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-244475
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=ha-244475
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_38_53_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:38:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-244475
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:45:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:41:56 +0000   Mon, 16 Sep 2024 10:38:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:41:56 +0000   Mon, 16 Sep 2024 10:38:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:41:56 +0000   Mon, 16 Sep 2024 10:38:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:41:56 +0000   Mon, 16 Sep 2024 10:39:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.19
	  Hostname:    ha-244475
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8707c2bcd2ba47818dfac2382d400cf1
	  System UUID:                8707c2bc-d2ba-4781-8dfa-c2382d400cf1
	  Boot ID:                    174ade31-14cd-4b32-9050-92f81ba6b3e1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-d4m5s              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m34s
	  kube-system                 coredns-7c65d6cfc9-lzrg2             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m4s
	  kube-system                 coredns-7c65d6cfc9-m8fd7             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m4s
	  kube-system                 etcd-ha-244475                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m9s
	  kube-system                 kindnet-7v2cl                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m4s
	  kube-system                 kube-apiserver-ha-244475             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m11s
	  kube-system                 kube-controller-manager-ha-244475    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m9s
	  kube-system                 kube-proxy-crttt                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m4s
	  kube-system                 kube-scheduler-ha-244475             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m9s
	  kube-system                 kube-vip-ha-244475                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m11s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m2s                   kube-proxy       
	  Normal  NodeHasSufficientPID     7m16s (x7 over 7m16s)  kubelet          Node ha-244475 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 7m16s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m16s (x8 over 7m16s)  kubelet          Node ha-244475 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m16s (x8 over 7m16s)  kubelet          Node ha-244475 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 7m9s                   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m9s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m9s                   kubelet          Node ha-244475 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m9s                   kubelet          Node ha-244475 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m9s                   kubelet          Node ha-244475 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m5s                   node-controller  Node ha-244475 event: Registered Node ha-244475 in Controller
	  Normal  NodeReady                6m52s                  kubelet          Node ha-244475 status is now: NodeReady
	  Normal  RegisteredNode           6m8s                   node-controller  Node ha-244475 event: Registered Node ha-244475 in Controller
	  Normal  RegisteredNode           4m54s                  node-controller  Node ha-244475 event: Registered Node ha-244475 in Controller
	
	
	Name:               ha-244475-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-244475-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=ha-244475
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T10_39_47_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:39:44 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-244475-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:42:39 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 16 Sep 2024 10:41:47 +0000   Mon, 16 Sep 2024 10:43:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 16 Sep 2024 10:41:47 +0000   Mon, 16 Sep 2024 10:43:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 16 Sep 2024 10:41:47 +0000   Mon, 16 Sep 2024 10:43:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 16 Sep 2024 10:41:47 +0000   Mon, 16 Sep 2024 10:43:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.222
	  Hostname:    ha-244475-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 bfb45c96351d4aafade2443c380b5343
	  System UUID:                bfb45c96-351d-4aaf-ade2-443c380b5343
	  Boot ID:                    d827e65a-7fd8-4399-b348-231b704c25ea
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-t6fmb                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m34s
	  kube-system                 etcd-ha-244475-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m15s
	  kube-system                 kindnet-xvp82                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m17s
	  kube-system                 kube-apiserver-ha-244475-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 kube-controller-manager-ha-244475-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 kube-proxy-t454b                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m17s
	  kube-system                 kube-scheduler-ha-244475-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m9s
	  kube-system                 kube-vip-ha-244475-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m12s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  6m17s (x8 over 6m17s)  kubelet          Node ha-244475-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m17s (x8 over 6m17s)  kubelet          Node ha-244475-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m17s (x7 over 6m17s)  kubelet          Node ha-244475-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m15s                  node-controller  Node ha-244475-m02 event: Registered Node ha-244475-m02 in Controller
	  Normal  RegisteredNode           6m8s                   node-controller  Node ha-244475-m02 event: Registered Node ha-244475-m02 in Controller
	  Normal  RegisteredNode           4m54s                  node-controller  Node ha-244475-m02 event: Registered Node ha-244475-m02 in Controller
	  Normal  NodeNotReady             2m40s                  node-controller  Node ha-244475-m02 status is now: NodeNotReady
	
	
	Name:               ha-244475-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-244475-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=ha-244475
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T10_41_02_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:40:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-244475-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:45:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:41:29 +0000   Mon, 16 Sep 2024 10:40:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:41:29 +0000   Mon, 16 Sep 2024 10:40:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:41:29 +0000   Mon, 16 Sep 2024 10:40:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:41:29 +0000   Mon, 16 Sep 2024 10:41:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.127
	  Hostname:    ha-244475-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d01912e060494092a8b6a2df64a0a30c
	  System UUID:                d01912e0-6049-4092-a8b6-a2df64a0a30c
	  Boot ID:                    1fb9da41-3fb9-4db3-bca0-b0c15d7a9875
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-7bhqg                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m34s
	  kube-system                 etcd-ha-244475-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m
	  kube-system                 kindnet-rzwwj                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m2s
	  kube-system                 kube-apiserver-ha-244475-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 kube-controller-manager-ha-244475-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 kube-proxy-g5v5l                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 kube-scheduler-ha-244475-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m
	  kube-system                 kube-vip-ha-244475-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m58s                kube-proxy       
	  Normal  NodeHasSufficientMemory  5m2s (x8 over 5m2s)  kubelet          Node ha-244475-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m2s (x8 over 5m2s)  kubelet          Node ha-244475-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m2s (x7 over 5m2s)  kubelet          Node ha-244475-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m                   node-controller  Node ha-244475-m03 event: Registered Node ha-244475-m03 in Controller
	  Normal  RegisteredNode           4m58s                node-controller  Node ha-244475-m03 event: Registered Node ha-244475-m03 in Controller
	  Normal  RegisteredNode           4m54s                node-controller  Node ha-244475-m03 event: Registered Node ha-244475-m03 in Controller
	
	
	Name:               ha-244475-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-244475-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=ha-244475
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T10_42_00_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:41:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-244475-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:45:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:42:30 +0000   Mon, 16 Sep 2024 10:41:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:42:30 +0000   Mon, 16 Sep 2024 10:41:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:42:30 +0000   Mon, 16 Sep 2024 10:41:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:42:30 +0000   Mon, 16 Sep 2024 10:42:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.110
	  Hostname:    ha-244475-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 42083a2d4bb24e16b292c8834cbe5824
	  System UUID:                42083a2d-4bb2-4e16-b292-c8834cbe5824
	  Boot ID:                    4513a05d-6164-4c3b-91e3-07f7c103c2f9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-dflt4       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m2s
	  kube-system                 kube-proxy-kp7hv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m56s                kube-proxy       
	  Normal  NodeHasSufficientMemory  4m2s (x2 over 4m2s)  kubelet          Node ha-244475-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m2s (x2 over 4m2s)  kubelet          Node ha-244475-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m2s (x2 over 4m2s)  kubelet          Node ha-244475-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m                   node-controller  Node ha-244475-m04 event: Registered Node ha-244475-m04 in Controller
	  Normal  RegisteredNode           3m58s                node-controller  Node ha-244475-m04 event: Registered Node ha-244475-m04 in Controller
	  Normal  RegisteredNode           3m58s                node-controller  Node ha-244475-m04 event: Registered Node ha-244475-m04 in Controller
	  Normal  NodeReady                3m42s                kubelet          Node ha-244475-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep16 10:38] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050568] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040051] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.803306] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.430603] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.601752] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.139824] systemd-fstab-generator[589]: Ignoring "noauto" option for root device
	[  +0.054792] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058211] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.173707] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +0.144769] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +0.277555] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +3.915448] systemd-fstab-generator[751]: Ignoring "noauto" option for root device
	[  +3.568561] systemd-fstab-generator[882]: Ignoring "noauto" option for root device
	[  +0.067639] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.970048] systemd-fstab-generator[1302]: Ignoring "noauto" option for root device
	[  +0.087420] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.371465] kauditd_printk_skb: 21 callbacks suppressed
	[Sep16 10:39] kauditd_printk_skb: 38 callbacks suppressed
	[ +41.620280] kauditd_printk_skb: 28 callbacks suppressed
	
	
	==> etcd [308650af833f609d658eacf1b2b1c8baa29f17b904e878b9f27e62fe4fb823d3] <==
	{"level":"warn","ts":"2024-09-16T10:46:01.195987Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"f0e3021c7d1d789a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T10:46:01.200447Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"f0e3021c7d1d789a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T10:46:01.214979Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"f0e3021c7d1d789a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T10:46:01.222779Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"f0e3021c7d1d789a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T10:46:01.230955Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"f0e3021c7d1d789a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T10:46:01.234571Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"f0e3021c7d1d789a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T10:46:01.237790Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"f0e3021c7d1d789a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T10:46:01.243329Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"f0e3021c7d1d789a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T10:46:01.250083Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"f0e3021c7d1d789a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T10:46:01.275803Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"f0e3021c7d1d789a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T10:46:01.281681Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"f0e3021c7d1d789a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T10:46:01.290299Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"f0e3021c7d1d789a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T10:46:01.291605Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"f0e3021c7d1d789a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T10:46:01.305925Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"f0e3021c7d1d789a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T10:46:01.327047Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"f0e3021c7d1d789a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T10:46:01.342213Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"f0e3021c7d1d789a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T10:46:01.349015Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"f0e3021c7d1d789a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T10:46:01.353783Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"f0e3021c7d1d789a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T10:46:01.357396Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"f0e3021c7d1d789a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T10:46:01.361856Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"f0e3021c7d1d789a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T10:46:01.368200Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"f0e3021c7d1d789a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T10:46:01.374973Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"f0e3021c7d1d789a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T10:46:01.387770Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"f0e3021c7d1d789a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T10:46:01.389877Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"f0e3021c7d1d789a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T10:46:01.406678Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"f0e3021c7d1d789a","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 10:46:01 up 7 min,  0 users,  load average: 0.43, 0.30, 0.14
	Linux ha-244475 5.10.207 #1 SMP Sun Sep 15 20:39:46 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [ac63170bf5bb3e138ee2902ee9d3245bf86aad0f7cd971de1989c1c8d4b08913] <==
	I0916 10:45:29.306731       1 main.go:322] Node ha-244475-m04 has CIDR [10.244.3.0/24] 
	I0916 10:45:39.306851       1 main.go:295] Handling node with IPs: map[192.168.39.19:{}]
	I0916 10:45:39.306973       1 main.go:299] handling current node
	I0916 10:45:39.307005       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0916 10:45:39.307023       1 main.go:322] Node ha-244475-m02 has CIDR [10.244.1.0/24] 
	I0916 10:45:39.307191       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0916 10:45:39.307213       1 main.go:322] Node ha-244475-m03 has CIDR [10.244.2.0/24] 
	I0916 10:45:39.307280       1 main.go:295] Handling node with IPs: map[192.168.39.110:{}]
	I0916 10:45:39.307299       1 main.go:322] Node ha-244475-m04 has CIDR [10.244.3.0/24] 
	I0916 10:45:49.306315       1 main.go:295] Handling node with IPs: map[192.168.39.19:{}]
	I0916 10:45:49.306369       1 main.go:299] handling current node
	I0916 10:45:49.306386       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0916 10:45:49.306391       1 main.go:322] Node ha-244475-m02 has CIDR [10.244.1.0/24] 
	I0916 10:45:49.306575       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0916 10:45:49.306600       1 main.go:322] Node ha-244475-m03 has CIDR [10.244.2.0/24] 
	I0916 10:45:49.306662       1 main.go:295] Handling node with IPs: map[192.168.39.110:{}]
	I0916 10:45:49.306683       1 main.go:322] Node ha-244475-m04 has CIDR [10.244.3.0/24] 
	I0916 10:45:59.301480       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0916 10:45:59.301576       1 main.go:322] Node ha-244475-m03 has CIDR [10.244.2.0/24] 
	I0916 10:45:59.301736       1 main.go:295] Handling node with IPs: map[192.168.39.110:{}]
	I0916 10:45:59.301762       1 main.go:322] Node ha-244475-m04 has CIDR [10.244.3.0/24] 
	I0916 10:45:59.301834       1 main.go:295] Handling node with IPs: map[192.168.39.19:{}]
	I0916 10:45:59.301858       1 main.go:299] handling current node
	I0916 10:45:59.301871       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0916 10:45:59.301876       1 main.go:322] Node ha-244475-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [13162d4bf94f734f5e68305ac96200c2f81bebc9b5ffbbfb3b0862980bc16fa1] <==
	W0916 10:38:51.442192       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.19]
	I0916 10:38:51.443345       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 10:38:51.448673       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 10:38:51.657156       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 10:38:52.610073       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 10:38:52.629898       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0916 10:38:52.640941       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 10:38:57.207096       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0916 10:38:57.359795       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	W0916 10:39:51.439268       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.19 192.168.39.222]
	E0916 10:41:30.050430       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60486: use of closed network connection
	E0916 10:41:30.242968       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60496: use of closed network connection
	E0916 10:41:30.422776       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60516: use of closed network connection
	E0916 10:41:30.667331       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60540: use of closed network connection
	E0916 10:41:30.849977       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60570: use of closed network connection
	E0916 10:41:31.026403       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60598: use of closed network connection
	E0916 10:41:31.216159       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60626: use of closed network connection
	E0916 10:41:31.408973       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60648: use of closed network connection
	E0916 10:41:31.595323       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60664: use of closed network connection
	E0916 10:41:31.892210       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33810: use of closed network connection
	E0916 10:41:32.120845       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33824: use of closed network connection
	E0916 10:41:32.318310       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33836: use of closed network connection
	E0916 10:41:32.517544       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33856: use of closed network connection
	E0916 10:41:32.715949       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33878: use of closed network connection
	E0916 10:41:32.888744       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33890: use of closed network connection
	
	
	==> kube-controller-manager [f16e87fb57b2b25fed520499a45d5fd8f1e01c96a747662b637d0b1cc2e56113] <==
	I0916 10:41:59.913033       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-244475-m04" podCIDRs=["10.244.3.0/24"]
	I0916 10:41:59.913138       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m04"
	I0916 10:41:59.913216       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m04"
	I0916 10:41:59.930942       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m04"
	I0916 10:42:00.175642       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m04"
	I0916 10:42:00.590484       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m04"
	I0916 10:42:01.490254       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-244475-m04"
	I0916 10:42:01.528827       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m04"
	I0916 10:42:03.011238       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m04"
	I0916 10:42:03.079872       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m04"
	I0916 10:42:03.261410       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m04"
	I0916 10:42:03.376315       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m04"
	I0916 10:42:10.010776       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m04"
	I0916 10:42:19.018320       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m04"
	I0916 10:42:19.018457       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-244475-m04"
	I0916 10:42:19.032789       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m04"
	I0916 10:42:21.506056       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m04"
	I0916 10:42:30.158122       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m04"
	I0916 10:43:21.535925       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m02"
	I0916 10:43:21.536431       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-244475-m04"
	I0916 10:43:21.581714       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m02"
	I0916 10:43:21.707782       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="75.322573ms"
	I0916 10:43:21.708000       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="116.003µs"
	I0916 10:43:23.093063       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m02"
	I0916 10:43:26.726408       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m02"
	
	
	==> kube-proxy [6e6d69b26d5c1088c047248a212bd53f5e13828d4e0a7c2156e0ddc88ac76ccf] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0916 10:38:58.381104       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0916 10:38:58.405774       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.19"]
	E0916 10:38:58.405958       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:38:58.486128       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0916 10:38:58.486191       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0916 10:38:58.486214       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:38:58.488718       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:38:58.489862       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:38:58.489894       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:38:58.500489       1 config.go:199] "Starting service config controller"
	I0916 10:38:58.500804       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:38:58.501030       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:38:58.501051       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:38:58.502033       1 config.go:328] "Starting node config controller"
	I0916 10:38:58.502063       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:38:58.601173       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:38:58.601274       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:38:58.602581       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [a0223669288e248ff9be2473e59f09af3156f136f2627463151e0bc52191ddfb] <==
	E0916 10:38:50.527717       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:38:50.585028       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 10:38:50.585078       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:38:50.611653       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 10:38:50.611726       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0916 10:38:50.650971       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 10:38:50.651023       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:38:50.696031       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 10:38:50.696092       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:38:50.761221       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 10:38:50.761274       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:38:50.985092       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 10:38:50.985144       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:38:50.991955       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 10:38:50.992011       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:38:51.039856       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 10:38:51.039907       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:38:51.293677       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 10:38:51.293783       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0916 10:38:53.269920       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 10:41:27.446213       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="8e6b78c3-ae2c-4cff-b2cf-fd0f08d53fa5" pod="default/busybox-7dff88458-7bhqg" assumedNode="ha-244475-m03" currentNode="ha-244475-m02"
	E0916 10:41:27.456948       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-7bhqg\": pod busybox-7dff88458-7bhqg is already assigned to node \"ha-244475-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-7bhqg" node="ha-244475-m02"
	E0916 10:41:27.457071       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 8e6b78c3-ae2c-4cff-b2cf-fd0f08d53fa5(default/busybox-7dff88458-7bhqg) was assumed on ha-244475-m02 but assigned to ha-244475-m03" pod="default/busybox-7dff88458-7bhqg"
	E0916 10:41:27.457108       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-7bhqg\": pod busybox-7dff88458-7bhqg is already assigned to node \"ha-244475-m03\"" pod="default/busybox-7dff88458-7bhqg"
	I0916 10:41:27.457173       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-7bhqg" node="ha-244475-m03"
	
	
	==> kubelet <==
	Sep 16 10:44:42 ha-244475 kubelet[1309]: E0916 10:44:42.689565    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483482688660947,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:44:52 ha-244475 kubelet[1309]: E0916 10:44:52.621462    1309 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 16 10:44:52 ha-244475 kubelet[1309]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 16 10:44:52 ha-244475 kubelet[1309]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 16 10:44:52 ha-244475 kubelet[1309]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 16 10:44:52 ha-244475 kubelet[1309]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 16 10:44:52 ha-244475 kubelet[1309]: E0916 10:44:52.692628    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483492691856334,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:44:52 ha-244475 kubelet[1309]: E0916 10:44:52.692654    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483492691856334,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:45:02 ha-244475 kubelet[1309]: E0916 10:45:02.694249    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483502693420181,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:45:02 ha-244475 kubelet[1309]: E0916 10:45:02.694279    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483502693420181,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:45:12 ha-244475 kubelet[1309]: E0916 10:45:12.696846    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483512696217097,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:45:12 ha-244475 kubelet[1309]: E0916 10:45:12.696888    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483512696217097,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:45:22 ha-244475 kubelet[1309]: E0916 10:45:22.698441    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483522697899590,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:45:22 ha-244475 kubelet[1309]: E0916 10:45:22.698938    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483522697899590,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:45:32 ha-244475 kubelet[1309]: E0916 10:45:32.701547    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483532701178051,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:45:32 ha-244475 kubelet[1309]: E0916 10:45:32.701752    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483532701178051,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:45:42 ha-244475 kubelet[1309]: E0916 10:45:42.705642    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483542704490888,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:45:42 ha-244475 kubelet[1309]: E0916 10:45:42.706453    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483542704490888,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:45:52 ha-244475 kubelet[1309]: E0916 10:45:52.621800    1309 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 16 10:45:52 ha-244475 kubelet[1309]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 16 10:45:52 ha-244475 kubelet[1309]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 16 10:45:52 ha-244475 kubelet[1309]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 16 10:45:52 ha-244475 kubelet[1309]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 16 10:45:52 ha-244475 kubelet[1309]: E0916 10:45:52.709969    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483552709386375,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:45:52 ha-244475 kubelet[1309]: E0916 10:45:52.709997    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483552709386375,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-244475 -n ha-244475
helpers_test.go:261: (dbg) Run:  kubectl --context ha-244475 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context ha-244475 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (476.929µs)
helpers_test.go:263: kubectl --context ha-244475 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (55.52s)
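Editor's note: the `fork/exec /usr/local/bin/kubectl: exec format error` in the post-mortem above typically means the kubectl binary on the runner was built for a different CPU architecture than the host, so the kernel refuses to execute it. A minimal Go sketch, assuming only the path shown in the failing command, that inspects the binary's ELF machine type and compares it with the host architecture (it only reports the mismatch, it does not fix anything):

```go
package main

import (
	"debug/elf"
	"fmt"
	"os"
	"runtime"
)

func main() {
	// Path copied from the failing kubectl invocation in the log above.
	f, err := elf.Open("/usr/local/bin/kubectl")
	if err != nil {
		fmt.Fprintln(os.Stderr, "open:", err)
		os.Exit(1)
	}
	defer f.Close()
	// Compare the binary's target machine (e.g. EM_X86_64 vs EM_AARCH64)
	// against the architecture this program was built for.
	fmt.Printf("kubectl ELF machine: %v, host GOARCH: %s\n", f.Machine, runtime.GOARCH)
}
```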

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (402.9s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-244475 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-244475 -v=7 --alsologtostderr
E0916 10:46:28.278598   11203 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:46:55.979510   11203 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-244475 -v=7 --alsologtostderr: exit status 82 (2m1.840846449s)

                                                
                                                
-- stdout --
	* Stopping node "ha-244475-m04"  ...
	* Stopping node "ha-244475-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 10:46:02.784631   27916 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:46:02.784734   27916 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:46:02.784742   27916 out.go:358] Setting ErrFile to fd 2...
	I0916 10:46:02.784746   27916 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:46:02.784928   27916 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3851/.minikube/bin
	I0916 10:46:02.785159   27916 out.go:352] Setting JSON to false
	I0916 10:46:02.785239   27916 mustload.go:65] Loading cluster: ha-244475
	I0916 10:46:02.785639   27916 config.go:182] Loaded profile config "ha-244475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:46:02.785729   27916 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/config.json ...
	I0916 10:46:02.785910   27916 mustload.go:65] Loading cluster: ha-244475
	I0916 10:46:02.786036   27916 config.go:182] Loaded profile config "ha-244475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:46:02.786059   27916 stop.go:39] StopHost: ha-244475-m04
	I0916 10:46:02.786429   27916 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:46:02.786467   27916 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:46:02.801375   27916 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46201
	I0916 10:46:02.801883   27916 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:46:02.802410   27916 main.go:141] libmachine: Using API Version  1
	I0916 10:46:02.802434   27916 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:46:02.802748   27916 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:46:02.805298   27916 out.go:177] * Stopping node "ha-244475-m04"  ...
	I0916 10:46:02.806824   27916 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0916 10:46:02.806858   27916 main.go:141] libmachine: (ha-244475-m04) Calling .DriverName
	I0916 10:46:02.807104   27916 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0916 10:46:02.807141   27916 main.go:141] libmachine: (ha-244475-m04) Calling .GetSSHHostname
	I0916 10:46:02.809796   27916 main.go:141] libmachine: (ha-244475-m04) DBG | domain ha-244475-m04 has defined MAC address 52:54:00:d1:b8:75 in network mk-ha-244475
	I0916 10:46:02.810204   27916 main.go:141] libmachine: (ha-244475-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:b8:75", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:41:48 +0000 UTC Type:0 Mac:52:54:00:d1:b8:75 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-244475-m04 Clientid:01:52:54:00:d1:b8:75}
	I0916 10:46:02.810234   27916 main.go:141] libmachine: (ha-244475-m04) DBG | domain ha-244475-m04 has defined IP address 192.168.39.110 and MAC address 52:54:00:d1:b8:75 in network mk-ha-244475
	I0916 10:46:02.810373   27916 main.go:141] libmachine: (ha-244475-m04) Calling .GetSSHPort
	I0916 10:46:02.810513   27916 main.go:141] libmachine: (ha-244475-m04) Calling .GetSSHKeyPath
	I0916 10:46:02.810656   27916 main.go:141] libmachine: (ha-244475-m04) Calling .GetSSHUsername
	I0916 10:46:02.810772   27916 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m04/id_rsa Username:docker}
	I0916 10:46:02.896140   27916 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0916 10:46:02.949467   27916 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0916 10:46:03.004361   27916 main.go:141] libmachine: Stopping "ha-244475-m04"...
	I0916 10:46:03.004411   27916 main.go:141] libmachine: (ha-244475-m04) Calling .GetState
	I0916 10:46:03.005797   27916 main.go:141] libmachine: (ha-244475-m04) Calling .Stop
	I0916 10:46:03.008818   27916 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 0/120
	I0916 10:46:04.175889   27916 main.go:141] libmachine: (ha-244475-m04) Calling .GetState
	I0916 10:46:04.177296   27916 main.go:141] libmachine: Machine "ha-244475-m04" was stopped.
	I0916 10:46:04.177314   27916 stop.go:75] duration metric: took 1.370491083s to stop
	I0916 10:46:04.177335   27916 stop.go:39] StopHost: ha-244475-m03
	I0916 10:46:04.177634   27916 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:46:04.177689   27916 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:46:04.193205   27916 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37185
	I0916 10:46:04.193766   27916 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:46:04.194306   27916 main.go:141] libmachine: Using API Version  1
	I0916 10:46:04.194326   27916 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:46:04.194655   27916 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:46:04.196811   27916 out.go:177] * Stopping node "ha-244475-m03"  ...
	I0916 10:46:04.198239   27916 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0916 10:46:04.198263   27916 main.go:141] libmachine: (ha-244475-m03) Calling .DriverName
	I0916 10:46:04.198478   27916 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0916 10:46:04.198498   27916 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHHostname
	I0916 10:46:04.201547   27916 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:46:04.201984   27916 main.go:141] libmachine: (ha-244475-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:15:60", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:40:24 +0000 UTC Type:0 Mac:52:54:00:e0:15:60 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-244475-m03 Clientid:01:52:54:00:e0:15:60}
	I0916 10:46:04.202016   27916 main.go:141] libmachine: (ha-244475-m03) DBG | domain ha-244475-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:e0:15:60 in network mk-ha-244475
	I0916 10:46:04.202178   27916 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHPort
	I0916 10:46:04.202324   27916 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHKeyPath
	I0916 10:46:04.202446   27916 main.go:141] libmachine: (ha-244475-m03) Calling .GetSSHUsername
	I0916 10:46:04.202586   27916 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m03/id_rsa Username:docker}
	I0916 10:46:04.284042   27916 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0916 10:46:04.339184   27916 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0916 10:46:04.392943   27916 main.go:141] libmachine: Stopping "ha-244475-m03"...
	I0916 10:46:04.392976   27916 main.go:141] libmachine: (ha-244475-m03) Calling .GetState
	I0916 10:46:04.394478   27916 main.go:141] libmachine: (ha-244475-m03) Calling .Stop
	I0916 10:46:04.397991   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 0/120
	I0916 10:46:05.399397   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 1/120
	I0916 10:46:06.400702   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 2/120
	I0916 10:46:07.402002   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 3/120
	I0916 10:46:08.403542   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 4/120
	I0916 10:46:09.405616   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 5/120
	I0916 10:46:10.407171   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 6/120
	I0916 10:46:11.408597   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 7/120
	I0916 10:46:12.409968   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 8/120
	I0916 10:46:13.411553   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 9/120
	I0916 10:46:14.413556   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 10/120
	I0916 10:46:15.415190   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 11/120
	I0916 10:46:16.416638   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 12/120
	I0916 10:46:17.418237   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 13/120
	I0916 10:46:18.419638   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 14/120
	I0916 10:46:19.421494   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 15/120
	I0916 10:46:20.422815   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 16/120
	I0916 10:46:21.424112   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 17/120
	I0916 10:46:22.425433   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 18/120
	I0916 10:46:23.426731   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 19/120
	I0916 10:46:24.428103   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 20/120
	I0916 10:46:25.429423   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 21/120
	I0916 10:46:26.431053   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 22/120
	I0916 10:46:27.432396   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 23/120
	I0916 10:46:28.434102   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 24/120
	I0916 10:46:29.436168   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 25/120
	I0916 10:46:30.438429   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 26/120
	I0916 10:46:31.439828   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 27/120
	I0916 10:46:32.441344   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 28/120
	I0916 10:46:33.443658   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 29/120
	I0916 10:46:34.445183   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 30/120
	I0916 10:46:35.446757   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 31/120
	I0916 10:46:36.448280   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 32/120
	I0916 10:46:37.449796   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 33/120
	I0916 10:46:38.451228   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 34/120
	I0916 10:46:39.452974   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 35/120
	I0916 10:46:40.454359   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 36/120
	I0916 10:46:41.455608   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 37/120
	I0916 10:46:42.456914   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 38/120
	I0916 10:46:43.458090   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 39/120
	I0916 10:46:44.459715   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 40/120
	I0916 10:46:45.461018   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 41/120
	I0916 10:46:46.462232   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 42/120
	I0916 10:46:47.463599   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 43/120
	I0916 10:46:48.464852   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 44/120
	I0916 10:46:49.467111   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 45/120
	I0916 10:46:50.468423   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 46/120
	I0916 10:46:51.469914   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 47/120
	I0916 10:46:52.471319   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 48/120
	I0916 10:46:53.472432   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 49/120
	I0916 10:46:54.474070   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 50/120
	I0916 10:46:55.475548   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 51/120
	I0916 10:46:56.477004   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 52/120
	I0916 10:46:57.478454   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 53/120
	I0916 10:46:58.479814   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 54/120
	I0916 10:46:59.481627   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 55/120
	I0916 10:47:00.482978   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 56/120
	I0916 10:47:01.484385   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 57/120
	I0916 10:47:02.485602   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 58/120
	I0916 10:47:03.487082   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 59/120
	I0916 10:47:04.488948   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 60/120
	I0916 10:47:05.490368   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 61/120
	I0916 10:47:06.491570   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 62/120
	I0916 10:47:07.492936   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 63/120
	I0916 10:47:08.494238   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 64/120
	I0916 10:47:09.496042   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 65/120
	I0916 10:47:10.497289   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 66/120
	I0916 10:47:11.498545   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 67/120
	I0916 10:47:12.499767   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 68/120
	I0916 10:47:13.501163   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 69/120
	I0916 10:47:14.502721   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 70/120
	I0916 10:47:15.504058   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 71/120
	I0916 10:47:16.505469   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 72/120
	I0916 10:47:17.507572   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 73/120
	I0916 10:47:18.508828   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 74/120
	I0916 10:47:19.510618   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 75/120
	I0916 10:47:20.512674   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 76/120
	I0916 10:47:21.513997   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 77/120
	I0916 10:47:22.515379   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 78/120
	I0916 10:47:23.516682   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 79/120
	I0916 10:47:24.518318   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 80/120
	I0916 10:47:25.519525   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 81/120
	I0916 10:47:26.520705   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 82/120
	I0916 10:47:27.522092   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 83/120
	I0916 10:47:28.523348   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 84/120
	I0916 10:47:29.524788   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 85/120
	I0916 10:47:30.526197   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 86/120
	I0916 10:47:31.527455   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 87/120
	I0916 10:47:32.528713   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 88/120
	I0916 10:47:33.530061   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 89/120
	I0916 10:47:34.531455   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 90/120
	I0916 10:47:35.532871   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 91/120
	I0916 10:47:36.534248   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 92/120
	I0916 10:47:37.535639   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 93/120
	I0916 10:47:38.536952   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 94/120
	I0916 10:47:39.538841   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 95/120
	I0916 10:47:40.540202   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 96/120
	I0916 10:47:41.541526   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 97/120
	I0916 10:47:42.543665   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 98/120
	I0916 10:47:43.545730   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 99/120
	I0916 10:47:44.547546   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 100/120
	I0916 10:47:45.548838   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 101/120
	I0916 10:47:46.550744   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 102/120
	I0916 10:47:47.552157   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 103/120
	I0916 10:47:48.553651   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 104/120
	I0916 10:47:49.555351   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 105/120
	I0916 10:47:50.556689   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 106/120
	I0916 10:47:51.558141   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 107/120
	I0916 10:47:52.559626   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 108/120
	I0916 10:47:53.561111   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 109/120
	I0916 10:47:54.562802   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 110/120
	I0916 10:47:55.564430   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 111/120
	I0916 10:47:56.565813   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 112/120
	I0916 10:47:57.567312   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 113/120
	I0916 10:47:58.568713   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 114/120
	I0916 10:47:59.570217   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 115/120
	I0916 10:48:00.571598   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 116/120
	I0916 10:48:01.572945   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 117/120
	I0916 10:48:02.574300   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 118/120
	I0916 10:48:03.575793   27916 main.go:141] libmachine: (ha-244475-m03) Waiting for machine to stop 119/120
	I0916 10:48:04.576900   27916 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0916 10:48:04.576947   27916 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0916 10:48:04.578956   27916 out.go:201] 
	W0916 10:48:04.580413   27916 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0916 10:48:04.580436   27916 out.go:270] * 
	* 
	W0916 10:48:04.582702   27916 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 10:48:04.584320   27916 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-244475 -v=7 --alsologtostderr" : exit status 82
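Editor's note: exit status 82 corresponds to the GUEST_STOP_TIMEOUT shown in the stderr above: the stop command polled "ha-244475-m03" roughly once per second for 120 attempts (the "Waiting for machine to stop N/120" lines) and the VM never left the "Running" state. A minimal sketch of that polling pattern, with a hypothetical getState helper standing in for the driver call and a shortened attempt count for illustration:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// getState is a hypothetical stand-in for the libmachine driver call that
// reports the current VM state ("Running", "Stopped", ...).
func getState(name string) string { return "Running" }

// waitForStop mirrors the loop in the log: poll once per second and give up
// after maxAttempts, reporting the state the machine was stuck in.
func waitForStop(name string, maxAttempts int) error {
	for i := 0; i < maxAttempts; i++ {
		if getState(name) == "Stopped" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
		time.Sleep(time.Second)
	}
	return errors.New("unable to stop vm, current state " + getState(name))
}

func main() {
	// The real loop in the log uses 120 attempts; 3 keeps this demo short.
	if err := waitForStop("ha-244475-m03", 3); err != nil {
		fmt.Println("stop err:", err)
	}
}
```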
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-244475 --wait=true -v=7 --alsologtostderr
E0916 10:50:08.820595   11203 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:51:28.277968   11203 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:51:31.885317   11203 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-244475 --wait=true -v=7 --alsologtostderr: (4m38.569738999s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-244475
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-244475 -n ha-244475
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-244475 logs -n 25: (1.783266687s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-244475 cp ha-244475-m03:/home/docker/cp-test.txt                              | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475-m02:/home/docker/cp-test_ha-244475-m03_ha-244475-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-244475 ssh -n                                                                 | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-244475 ssh -n ha-244475-m02 sudo cat                                          | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | /home/docker/cp-test_ha-244475-m03_ha-244475-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-244475 cp ha-244475-m03:/home/docker/cp-test.txt                              | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475-m04:/home/docker/cp-test_ha-244475-m03_ha-244475-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-244475 ssh -n                                                                 | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-244475 ssh -n ha-244475-m04 sudo cat                                          | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | /home/docker/cp-test_ha-244475-m03_ha-244475-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-244475 cp testdata/cp-test.txt                                                | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-244475 ssh -n                                                                 | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-244475 cp ha-244475-m04:/home/docker/cp-test.txt                              | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1630339340/001/cp-test_ha-244475-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-244475 ssh -n                                                                 | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-244475 cp ha-244475-m04:/home/docker/cp-test.txt                              | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475:/home/docker/cp-test_ha-244475-m04_ha-244475.txt                       |           |         |         |                     |                     |
	| ssh     | ha-244475 ssh -n                                                                 | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-244475 ssh -n ha-244475 sudo cat                                              | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | /home/docker/cp-test_ha-244475-m04_ha-244475.txt                                 |           |         |         |                     |                     |
	| cp      | ha-244475 cp ha-244475-m04:/home/docker/cp-test.txt                              | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475-m02:/home/docker/cp-test_ha-244475-m04_ha-244475-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-244475 ssh -n                                                                 | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-244475 ssh -n ha-244475-m02 sudo cat                                          | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | /home/docker/cp-test_ha-244475-m04_ha-244475-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-244475 cp ha-244475-m04:/home/docker/cp-test.txt                              | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475-m03:/home/docker/cp-test_ha-244475-m04_ha-244475-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-244475 ssh -n                                                                 | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-244475 ssh -n ha-244475-m03 sudo cat                                          | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | /home/docker/cp-test_ha-244475-m04_ha-244475-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-244475 node stop m02 -v=7                                                     | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-244475 node start m02 -v=7                                                    | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:45 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-244475 -v=7                                                           | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-244475 -v=7                                                                | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-244475 --wait=true -v=7                                                    | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:48 UTC | 16 Sep 24 10:52 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-244475                                                                | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:52 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:48:04
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:48:04.629611   28382 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:48:04.629751   28382 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:48:04.629762   28382 out.go:358] Setting ErrFile to fd 2...
	I0916 10:48:04.629769   28382 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:48:04.629972   28382 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3851/.minikube/bin
	I0916 10:48:04.630523   28382 out.go:352] Setting JSON to false
	I0916 10:48:04.631433   28382 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1835,"bootTime":1726481850,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:48:04.631527   28382 start.go:139] virtualization: kvm guest
	I0916 10:48:04.633814   28382 out.go:177] * [ha-244475] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:48:04.635027   28382 notify.go:220] Checking for updates...
	I0916 10:48:04.635032   28382 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:48:04.636319   28382 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:48:04.637618   28382 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3851/kubeconfig
	I0916 10:48:04.638937   28382 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 10:48:04.640222   28382 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:48:04.641463   28382 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:48:04.643097   28382 config.go:182] Loaded profile config "ha-244475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:48:04.643194   28382 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:48:04.643664   28382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:48:04.643720   28382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:48:04.660057   28382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33021
	I0916 10:48:04.660593   28382 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:48:04.661160   28382 main.go:141] libmachine: Using API Version  1
	I0916 10:48:04.661198   28382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:48:04.661616   28382 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:48:04.661813   28382 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:48:04.697772   28382 out.go:177] * Using the kvm2 driver based on existing profile
	I0916 10:48:04.699530   28382 start.go:297] selected driver: kvm2
	I0916 10:48:04.699547   28382 start.go:901] validating driver "kvm2" against &{Name:ha-244475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-244475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.110 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:48:04.699689   28382 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:48:04.700019   28382 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:48:04.700102   28382 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19651-3851/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0916 10:48:04.715527   28382 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0916 10:48:04.716227   28382 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:48:04.716263   28382 cni.go:84] Creating CNI manager for ""
	I0916 10:48:04.716312   28382 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0916 10:48:04.716367   28382 start.go:340] cluster config:
	{Name:ha-244475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-244475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.110 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:48:04.716493   28382 iso.go:125] acquiring lock: {Name:mk8165b793e44378487d96b0de120a258f46e187 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:48:04.718937   28382 out.go:177] * Starting "ha-244475" primary control-plane node in "ha-244475" cluster
	I0916 10:48:04.720335   28382 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:48:04.720368   28382 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0916 10:48:04.720379   28382 cache.go:56] Caching tarball of preloaded images
	I0916 10:48:04.720467   28382 preload.go:172] Found /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 10:48:04.720479   28382 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 10:48:04.720587   28382 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/config.json ...
	I0916 10:48:04.720801   28382 start.go:360] acquireMachinesLock for ha-244475: {Name:mk413037138532b69f77e412c3960796c09316f1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:48:04.720863   28382 start.go:364] duration metric: took 41.906µs to acquireMachinesLock for "ha-244475"
	I0916 10:48:04.720882   28382 start.go:96] Skipping create...Using existing machine configuration
	I0916 10:48:04.720887   28382 fix.go:54] fixHost starting: 
	I0916 10:48:04.721282   28382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:48:04.721314   28382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:48:04.735751   28382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44525
	I0916 10:48:04.736248   28382 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:48:04.736739   28382 main.go:141] libmachine: Using API Version  1
	I0916 10:48:04.736771   28382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:48:04.737094   28382 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:48:04.737279   28382 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:48:04.737431   28382 main.go:141] libmachine: (ha-244475) Calling .GetState
	I0916 10:48:04.738886   28382 fix.go:112] recreateIfNeeded on ha-244475: state=Running err=<nil>
	W0916 10:48:04.738909   28382 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 10:48:04.740961   28382 out.go:177] * Updating the running kvm2 "ha-244475" VM ...
	I0916 10:48:04.742320   28382 machine.go:93] provisionDockerMachine start ...
	I0916 10:48:04.742348   28382 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:48:04.742548   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:48:04.744733   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:48:04.745067   28382 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:48:04.745093   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:48:04.745218   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:48:04.745382   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:48:04.745523   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:48:04.745653   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:48:04.745797   28382 main.go:141] libmachine: Using SSH client type: native
	I0916 10:48:04.745999   28382 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0916 10:48:04.746012   28382 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 10:48:04.866195   28382 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-244475
	
	I0916 10:48:04.866247   28382 main.go:141] libmachine: (ha-244475) Calling .GetMachineName
	I0916 10:48:04.866489   28382 buildroot.go:166] provisioning hostname "ha-244475"
	I0916 10:48:04.866520   28382 main.go:141] libmachine: (ha-244475) Calling .GetMachineName
	I0916 10:48:04.866739   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:48:04.869344   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:48:04.869776   28382 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:48:04.869798   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:48:04.869969   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:48:04.870127   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:48:04.870289   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:48:04.870419   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:48:04.870579   28382 main.go:141] libmachine: Using SSH client type: native
	I0916 10:48:04.870744   28382 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0916 10:48:04.870756   28382 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-244475 && echo "ha-244475" | sudo tee /etc/hostname
	I0916 10:48:05.005091   28382 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-244475
	
	I0916 10:48:05.005118   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:48:05.007741   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:48:05.008168   28382 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:48:05.008192   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:48:05.008399   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:48:05.008580   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:48:05.008720   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:48:05.008818   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:48:05.008958   28382 main.go:141] libmachine: Using SSH client type: native
	I0916 10:48:05.009165   28382 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0916 10:48:05.009182   28382 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-244475' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-244475/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-244475' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:48:05.126206   28382 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 10:48:05.126232   28382 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3851/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3851/.minikube}
	I0916 10:48:05.126289   28382 buildroot.go:174] setting up certificates
	I0916 10:48:05.126297   28382 provision.go:84] configureAuth start
	I0916 10:48:05.126306   28382 main.go:141] libmachine: (ha-244475) Calling .GetMachineName
	I0916 10:48:05.126557   28382 main.go:141] libmachine: (ha-244475) Calling .GetIP
	I0916 10:48:05.128973   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:48:05.129406   28382 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:48:05.129434   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:48:05.129547   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:48:05.131762   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:48:05.132175   28382 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:48:05.132198   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:48:05.132394   28382 provision.go:143] copyHostCerts
	I0916 10:48:05.132459   28382 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem
	I0916 10:48:05.132520   28382 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem, removing ...
	I0916 10:48:05.132531   28382 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem
	I0916 10:48:05.132608   28382 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem (1082 bytes)
	I0916 10:48:05.132692   28382 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem
	I0916 10:48:05.132709   28382 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem, removing ...
	I0916 10:48:05.132716   28382 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem
	I0916 10:48:05.132739   28382 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem (1123 bytes)
	I0916 10:48:05.132778   28382 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem
	I0916 10:48:05.132795   28382 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem, removing ...
	I0916 10:48:05.132803   28382 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem
	I0916 10:48:05.132824   28382 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem (1679 bytes)
	I0916 10:48:05.132867   28382 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem org=jenkins.ha-244475 san=[127.0.0.1 192.168.39.19 ha-244475 localhost minikube]
	I0916 10:48:05.230030   28382 provision.go:177] copyRemoteCerts
	I0916 10:48:05.230090   28382 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:48:05.230124   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:48:05.232727   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:48:05.232996   28382 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:48:05.233021   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:48:05.233228   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:48:05.233411   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:48:05.233854   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:48:05.233994   28382 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/id_rsa Username:docker}
	I0916 10:48:05.321368   28382 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 10:48:05.321442   28382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 10:48:05.348483   28382 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 10:48:05.348579   28382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0916 10:48:05.376610   28382 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 10:48:05.376680   28382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 10:48:05.404845   28382 provision.go:87] duration metric: took 278.532484ms to configureAuth
	I0916 10:48:05.404874   28382 buildroot.go:189] setting minikube options for container-runtime
	I0916 10:48:05.405088   28382 config.go:182] Loaded profile config "ha-244475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:48:05.405170   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:48:05.407821   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:48:05.408170   28382 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:48:05.408200   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:48:05.408395   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:48:05.408568   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:48:05.408725   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:48:05.408860   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:48:05.409024   28382 main.go:141] libmachine: Using SSH client type: native
	I0916 10:48:05.409256   28382 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0916 10:48:05.409278   28382 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 10:49:36.136821   28382 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 10:49:36.136864   28382 machine.go:96] duration metric: took 1m31.394528146s to provisionDockerMachine
	I0916 10:49:36.136875   28382 start.go:293] postStartSetup for "ha-244475" (driver="kvm2")
	I0916 10:49:36.136885   28382 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:49:36.136901   28382 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:49:36.137195   28382 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:49:36.137226   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:49:36.140151   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:49:36.140600   28382 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:49:36.140633   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:49:36.140776   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:49:36.140974   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:49:36.141162   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:49:36.141297   28382 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/id_rsa Username:docker}
	I0916 10:49:36.229105   28382 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:49:36.233446   28382 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 10:49:36.233468   28382 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3851/.minikube/addons for local assets ...
	I0916 10:49:36.233521   28382 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3851/.minikube/files for local assets ...
	I0916 10:49:36.233595   28382 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem -> 112032.pem in /etc/ssl/certs
	I0916 10:49:36.233605   28382 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem -> /etc/ssl/certs/112032.pem
	I0916 10:49:36.233712   28382 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 10:49:36.243379   28382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem --> /etc/ssl/certs/112032.pem (1708 bytes)
	I0916 10:49:36.268390   28382 start.go:296] duration metric: took 131.49973ms for postStartSetup
	I0916 10:49:36.268431   28382 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:49:36.268704   28382 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0916 10:49:36.268740   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:49:36.271523   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:49:36.272009   28382 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:49:36.272032   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:49:36.272177   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:49:36.272383   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:49:36.272533   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:49:36.272679   28382 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/id_rsa Username:docker}
	W0916 10:49:36.359589   28382 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0916 10:49:36.359614   28382 fix.go:56] duration metric: took 1m31.638727744s for fixHost
	I0916 10:49:36.359635   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:49:36.362024   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:49:36.362345   28382 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:49:36.362379   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:49:36.362437   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:49:36.362603   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:49:36.362772   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:49:36.362934   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:49:36.363065   28382 main.go:141] libmachine: Using SSH client type: native
	I0916 10:49:36.363232   28382 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0916 10:49:36.363242   28382 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 10:49:36.478148   28382 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726483776.445441321
	
	I0916 10:49:36.478178   28382 fix.go:216] guest clock: 1726483776.445441321
	I0916 10:49:36.478185   28382 fix.go:229] Guest: 2024-09-16 10:49:36.445441321 +0000 UTC Remote: 2024-09-16 10:49:36.359621457 +0000 UTC m=+91.765044121 (delta=85.819864ms)
	I0916 10:49:36.478209   28382 fix.go:200] guest clock delta is within tolerance: 85.819864ms
	I0916 10:49:36.478215   28382 start.go:83] releasing machines lock for "ha-244475", held for 1m31.757340687s
	I0916 10:49:36.478246   28382 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:49:36.478464   28382 main.go:141] libmachine: (ha-244475) Calling .GetIP
	I0916 10:49:36.480946   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:49:36.481304   28382 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:49:36.481330   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:49:36.481512   28382 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:49:36.481984   28382 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:49:36.482250   28382 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:49:36.482367   28382 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:49:36.482411   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:49:36.482451   28382 ssh_runner.go:195] Run: cat /version.json
	I0916 10:49:36.482475   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:49:36.485017   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:49:36.485084   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:49:36.485349   28382 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:49:36.485372   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:49:36.485438   28382 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:49:36.485457   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:49:36.485482   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:49:36.485617   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:49:36.485706   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:49:36.485783   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:49:36.485830   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:49:36.485895   28382 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/id_rsa Username:docker}
	I0916 10:49:36.485941   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:49:36.486045   28382 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/id_rsa Username:docker}
	I0916 10:49:36.566130   28382 ssh_runner.go:195] Run: systemctl --version
	I0916 10:49:36.595210   28382 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 10:49:36.759288   28382 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0916 10:49:36.765378   28382 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 10:49:36.765456   28382 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:49:36.775556   28382 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0916 10:49:36.775578   28382 start.go:495] detecting cgroup driver to use...
	I0916 10:49:36.775647   28382 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 10:49:36.791549   28382 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 10:49:36.805408   28382 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:49:36.805456   28382 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:49:36.819777   28382 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:49:36.834041   28382 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:49:37.006927   28382 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:49:37.154158   28382 docker.go:233] disabling docker service ...
	I0916 10:49:37.154233   28382 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:49:37.172237   28382 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:49:37.187140   28382 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:49:37.335249   28382 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:49:37.485651   28382 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:49:37.500949   28382 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:49:37.520699   28382 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 10:49:37.520778   28382 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:49:37.532711   28382 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 10:49:37.532779   28382 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:49:37.545325   28382 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:49:37.557100   28382 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:49:37.568745   28382 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:49:37.580983   28382 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:49:37.592790   28382 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:49:37.604166   28382 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:49:37.615655   28382 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:49:37.625740   28382 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:49:37.636174   28382 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:49:37.785177   28382 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 10:49:42.995342   28382 ssh_runner.go:235] Completed: sudo systemctl restart crio: (5.210133239s)
	I0916 10:49:42.995373   28382 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 10:49:42.995414   28382 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 10:49:43.001465   28382 start.go:563] Will wait 60s for crictl version
	I0916 10:49:43.001535   28382 ssh_runner.go:195] Run: which crictl
	I0916 10:49:43.005982   28382 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:49:43.050539   28382 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0916 10:49:43.050628   28382 ssh_runner.go:195] Run: crio --version
	I0916 10:49:43.079811   28382 ssh_runner.go:195] Run: crio --version
	I0916 10:49:43.111377   28382 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0916 10:49:43.112594   28382 main.go:141] libmachine: (ha-244475) Calling .GetIP
	I0916 10:49:43.115110   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:49:43.115409   28382 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:49:43.115437   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:49:43.115643   28382 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0916 10:49:43.120664   28382 kubeadm.go:883] updating cluster {Name:ha-244475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-244475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.110 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 10:49:43.120799   28382 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:49:43.120843   28382 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:49:43.174107   28382 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 10:49:43.174132   28382 crio.go:433] Images already preloaded, skipping extraction
	I0916 10:49:43.174191   28382 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:49:43.209963   28382 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 10:49:43.209985   28382 cache_images.go:84] Images are preloaded, skipping loading
	I0916 10:49:43.209995   28382 kubeadm.go:934] updating node { 192.168.39.19 8443 v1.31.1 crio true true} ...
	I0916 10:49:43.210109   28382 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-244475 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.19
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-244475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 10:49:43.210169   28382 ssh_runner.go:195] Run: crio config
	I0916 10:49:43.257466   28382 cni.go:84] Creating CNI manager for ""
	I0916 10:49:43.257492   28382 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0916 10:49:43.257503   28382 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 10:49:43.257526   28382 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.19 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-244475 NodeName:ha-244475 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.19"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.19 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 10:49:43.257697   28382 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.19
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-244475"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.19
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.19"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
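The kubeadm config dumped above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that minikube later stages on the node as /var/tmp/minikube/kubeadm.yaml.new (see the scp step further down in this log). The following is a minimal Go sketch, not minikube code, that decodes each document in such a stream and prints its kind; it assumes the gopkg.in/yaml.v3 module is available.

	package main

	import (
		"fmt"
		"io"
		"log"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		// Staging path used by minikube for the generated config (see the scp step below).
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for {
			// Only the identifying fields are decoded; the rest of each document is ignored.
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := dec.Decode(&doc); err != nil {
				if err == io.EOF {
					break
				}
				log.Fatal(err)
			}
			fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
		}
	}

Run against the config above, this would list the four kinds in order, which is a quick way to confirm the stream was written intact.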
	
	I0916 10:49:43.257719   28382 kube-vip.go:115] generating kube-vip config ...
	I0916 10:49:43.257765   28382 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0916 10:49:43.269960   28382 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0916 10:49:43.270094   28382 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
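kube-vip advertises the control-plane VIP 192.168.39.254 and, with lb_enable/lb_port set, load-balances port 8443 across the control-plane nodes (both values appear in the manifest above). A hypothetical reachability check in Go, not something the test itself runs, could look like this; the address and port are taken from the manifest, everything else is illustrative.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net"
		"time"
	)

	func main() {
		// VIP and port taken from the kube-vip manifest above.
		addr := net.JoinHostPort("192.168.39.254", "8443")

		// TCP-level check: does anything answer on the VIP at all?
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err != nil {
			fmt.Println("VIP unreachable:", err)
			return
		}
		conn.Close()

		// TLS-level check: the API server behind the VIP should complete a handshake.
		// InsecureSkipVerify is used because this sketch does not load the cluster CA.
		tlsConn, err := tls.DialWithDialer(&net.Dialer{Timeout: 3 * time.Second}, "tcp", addr,
			&tls.Config{InsecureSkipVerify: true})
		if err != nil {
			fmt.Println("TLS handshake failed:", err)
			return
		}
		defer tlsConn.Close()
		fmt.Println("apiserver responding behind VIP", addr)
	}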
	I0916 10:49:43.270162   28382 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:49:43.280474   28382 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:49:43.280563   28382 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0916 10:49:43.290395   28382 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0916 10:49:43.307234   28382 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:49:43.324085   28382 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0916 10:49:43.340586   28382 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0916 10:49:43.357729   28382 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0916 10:49:43.363278   28382 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:49:43.510012   28382 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:49:43.525689   28382 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475 for IP: 192.168.39.19
	I0916 10:49:43.525721   28382 certs.go:194] generating shared ca certs ...
	I0916 10:49:43.525742   28382 certs.go:226] acquiring lock for ca certs: {Name:mkc5eb18b90c7d501da61b378999fd65b785fd5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:49:43.525902   28382 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key
	I0916 10:49:43.525940   28382 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key
	I0916 10:49:43.525952   28382 certs.go:256] generating profile certs ...
	I0916 10:49:43.526054   28382 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/client.key
	I0916 10:49:43.526107   28382 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key.3a628471
	I0916 10:49:43.526130   28382 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt.3a628471 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.19 192.168.39.222 192.168.39.127 192.168.39.254]
	I0916 10:49:43.615058   28382 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt.3a628471 ...
	I0916 10:49:43.615087   28382 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt.3a628471: {Name:mkdc1b4f93c1d0cf9ed7c134427449b54c119ec7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:49:43.615252   28382 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key.3a628471 ...
	I0916 10:49:43.615262   28382 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key.3a628471: {Name:mk44f6b8e3053318a7781a0ded64dfd0c38e8870 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:49:43.615328   28382 certs.go:381] copying /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt.3a628471 -> /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt
	I0916 10:49:43.615496   28382 certs.go:385] copying /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key.3a628471 -> /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key
	I0916 10:49:43.615629   28382 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.key
	I0916 10:49:43.615643   28382 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 10:49:43.615655   28382 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 10:49:43.615668   28382 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:49:43.615681   28382 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:49:43.615693   28382 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 10:49:43.615707   28382 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 10:49:43.615722   28382 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 10:49:43.615734   28382 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 10:49:43.615788   28382 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem (1338 bytes)
	W0916 10:49:43.615821   28382 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203_empty.pem, impossibly tiny 0 bytes
	I0916 10:49:43.615830   28382 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:49:43.615855   28382 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem (1082 bytes)
	I0916 10:49:43.615876   28382 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:49:43.615897   28382 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem (1679 bytes)
	I0916 10:49:43.615932   28382 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem (1708 bytes)
	I0916 10:49:43.615961   28382 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:49:43.615976   28382 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem -> /usr/share/ca-certificates/11203.pem
	I0916 10:49:43.615988   28382 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem -> /usr/share/ca-certificates/112032.pem
	I0916 10:49:43.616545   28382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:49:43.642550   28382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:49:43.666588   28382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:49:43.690999   28382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:49:43.715060   28382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0916 10:49:43.738836   28382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 10:49:43.762339   28382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:49:43.785649   28382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 10:49:43.809948   28382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:49:43.833383   28382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem --> /usr/share/ca-certificates/11203.pem (1338 bytes)
	I0916 10:49:43.856725   28382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem --> /usr/share/ca-certificates/112032.pem (1708 bytes)
	I0916 10:49:43.879989   28382 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 10:49:43.897035   28382 ssh_runner.go:195] Run: openssl version
	I0916 10:49:43.902840   28382 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:49:43.914400   28382 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:49:43.919013   28382 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:49:43.919075   28382 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:49:43.925137   28382 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 10:49:43.935417   28382 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11203.pem && ln -fs /usr/share/ca-certificates/11203.pem /etc/ssl/certs/11203.pem"
	I0916 10:49:43.946645   28382 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11203.pem
	I0916 10:49:43.951098   28382 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11203.pem
	I0916 10:49:43.951143   28382 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11203.pem
	I0916 10:49:43.956794   28382 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11203.pem /etc/ssl/certs/51391683.0"
	I0916 10:49:43.966620   28382 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112032.pem && ln -fs /usr/share/ca-certificates/112032.pem /etc/ssl/certs/112032.pem"
	I0916 10:49:43.977946   28382 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112032.pem
	I0916 10:49:43.982493   28382 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112032.pem
	I0916 10:49:43.982550   28382 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112032.pem
	I0916 10:49:43.988245   28382 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112032.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 10:49:43.998642   28382 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:49:44.002978   28382 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0916 10:49:44.008612   28382 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0916 10:49:44.014304   28382 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0916 10:49:44.019867   28382 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0916 10:49:44.025979   28382 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0916 10:49:44.032073   28382 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
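The `openssl x509 -checkend 86400` runs above ask whether each certificate will still be valid 24 hours from now. The same check can be sketched with Go's standard library; this is illustrative only, not what minikube executes, and the path is one of the certificates probed in the log above.

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	func main() {
		// One of the certificates probed in the log above (reading it requires root on the node).
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}

		// Equivalent of `openssl x509 -checkend 86400`: fail if the cert
		// expires within the next 24 hours.
		deadline := time.Now().Add(24 * time.Hour)
		if cert.NotAfter.Before(deadline) {
			fmt.Println("certificate expires within 24h:", cert.NotAfter)
			os.Exit(1)
		}
		fmt.Println("certificate valid until", cert.NotAfter)
	}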
	I0916 10:49:44.037852   28382 kubeadm.go:392] StartCluster: {Name:ha-244475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-244475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.110 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:49:44.037973   28382 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 10:49:44.038017   28382 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 10:49:44.076228   28382 cri.go:89] found id: "acb3a9815a7d7d96bd398b1d8222524d573639530c35a82d60c88262c7f2a589"
	I0916 10:49:44.076248   28382 cri.go:89] found id: "539537ea4f2684d0513678c23e52eda87a874c01787a81c1ca77e0451fdb5b36"
	I0916 10:49:44.076252   28382 cri.go:89] found id: "996c12a7b1565febe9557aad65d9754e33c44d4a64678026aef5b63f3d99f1e0"
	I0916 10:49:44.076255   28382 cri.go:89] found id: "034030626ec0248410f35cc75f1a050ff672de87669017788a692eecfb974eb3"
	I0916 10:49:44.076257   28382 cri.go:89] found id: "7f78c5e4a3a25c53eb3051665aed0aed847ccf5bc052d51eb080f44bbb956465"
	I0916 10:49:44.076260   28382 cri.go:89] found id: "b16f64da09faea0d2ff3154541845e96ee1e6da2b018e77e618dcf0a3d246a99"
	I0916 10:49:44.076263   28382 cri.go:89] found id: "ac63170bf5bb3e138ee2902ee9d3245bf86aad0f7cd971de1989c1c8d4b08913"
	I0916 10:49:44.076265   28382 cri.go:89] found id: "6e6d69b26d5c1088c047248a212bd53f5e13828d4e0a7c2156e0ddc88ac76ccf"
	I0916 10:49:44.076267   28382 cri.go:89] found id: "62c031e0ed0a9dd545dae65ca505f6ee4aa741ac7ab305ffd396ddb0a1faa045"
	I0916 10:49:44.076272   28382 cri.go:89] found id: "a0223669288e248ff9be2473e59f09af3156f136f2627463151e0bc52191ddfb"
	I0916 10:49:44.076275   28382 cri.go:89] found id: "13162d4bf94f734f5e68305ac96200c2f81bebc9b5ffbbfb3b0862980bc16fa1"
	I0916 10:49:44.076289   28382 cri.go:89] found id: "308650af833f609d658eacf1b2b1c8baa29f17b904e878b9f27e62fe4fb823d3"
	I0916 10:49:44.076292   28382 cri.go:89] found id: "f16e87fb57b2b25fed520499a45d5fd8f1e01c96a747662b637d0b1cc2e56113"
	I0916 10:49:44.076295   28382 cri.go:89] found id: ""
	I0916 10:49:44.076334   28382 ssh_runner.go:195] Run: sudo runc list -f json
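The container IDs listed above come from the `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` invocation shown earlier in this log. A small Go sketch that shells out to the same command and collects the IDs follows; it assumes crictl and sudo are available locally, whereas minikube actually issues the command over SSH via ssh_runner.

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		// Same invocation minikube issues over SSH earlier in this log.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			log.Fatal(err)
		}
		ids := strings.Fields(string(out))
		fmt.Printf("found %d kube-system containers\n", len(ids))
		for _, id := range ids {
			fmt.Println(id)
		}
	}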
	
	
	==> CRI-O <==
	Sep 16 10:52:43 ha-244475 crio[3700]: time="2024-09-16 10:52:43.836201738Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483963836180570,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=72dcc98a-3fa5-4c71-bb0c-fac26a290b36 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:52:43 ha-244475 crio[3700]: time="2024-09-16 10:52:43.836949599Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a0b3bfa0-a059-451b-b500-b19d93b9cfd3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:52:43 ha-244475 crio[3700]: time="2024-09-16 10:52:43.837065296Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a0b3bfa0-a059-451b-b500-b19d93b9cfd3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:52:43 ha-244475 crio[3700]: time="2024-09-16 10:52:43.838266912Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:392523616ed483d70489493ff810c7c00e2c3679d91ed33cace5c42354ac85af,PodSandboxId:0e78d323319d6c4d2718135672cccb5ed02b63104781d8d532f3c876fc39b96a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726483854603167187,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0485b752bb66b84c639fb8d5b648be4a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91491eba4d33b0a26fd738ad6f63f6deaf5b4c730f037eaf6dc4908905fde9f9,PodSandboxId:3e70bdcf959532b2a9ea007abd776b8630e6092c8c5ae86c83bdd614c0a55f3c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726483838590275528,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e1264f7-2197-4821-8238-82fac849b145,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39bee169a2affe6d509924bc94b0ee90196af6c589d0fe7eb3f18c48559c446d,PodSandboxId:35ef4979f7d506a9e9a9e9502fd7f40f644938e6f49a81ab8ad566cd13238560,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726483830592623652,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcc439ebdfb1c8eb0ac4d211479d24ca,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f00a03475d073179332fbe79f2ed10d286e0c6bedf1861a8230d9919cde4a27,PodSandboxId:1eaacb088bf941e279ad42af3f72b19bf2790643fcde4457d0b5ff745a4806e5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726483823856894157,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-d4m5s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c479ead-4e77-41ca-9e2e-5cd7dc781761,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7904b48af0d517bf275eac462d37e6f93a48771be3452312f485bacfe1b3d19,PodSandboxId:0e78d323319d6c4d2718135672cccb5ed02b63104781d8d532f3c876fc39b96a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726483823027066020,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0485b752bb66b84c639fb8d5b648be4a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io
.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eff3d4b6ef1bb809c70e9089cf36c79f9cbea99823ed2b5055995b0e0abd6b23,PodSandboxId:6203d6a2f83f4a08dbecdceab914bea8099f1c8fa17a9e052e272e18cde86fb1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726483805692389671,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d14d8f4abb76f867ab3a64246ef25cb,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1da90c534a1eee0538fdeec3079b247aafe60da09cbf760ae3433480a66cc95a,PodSandboxId:3e70bdcf959532b2a9ea007abd776b8630e6092c8c5ae86c83bdd614c0a55f3c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726483790620182717,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e1264f7-2197-4821-8238-82fac849b145,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6dd41088c822947b73a621121e25131324d1b60468f8da2736a5e5abd945b74f,PodSandboxId:9ec606e5b45f08386cc4365f26f74fdcf01f39702536a53491cddf4af70506d6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726483790773221140,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7v2cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 764ade4d-cbcd-42b8-9d68-b4ed502de9eb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrac
ePeriod: 30,},},&Container{Id:ba907061155c7d07bb2ece7dd40deebec6a7b385674f721aa5faa111ad73a577,PodSandboxId:8cacdb30939e87fda706d3726485b9fd18466765801925d01faf0c5093af92aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726483790780262687,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lzrg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51962d07-f38a-4db3-86ee-af3d954dbec6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io
.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:268d2527b9c98518a83f871d1bd34bfb7b48d7a8bc4732dac191726f6e049791,PodSandboxId:194b56870a94accad5384a0c2f540bdec2d56f4cb27e5d45522ec884b22763ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726483790578380077,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 520edd0e46592c17928a302783a221a2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes
.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ef7bc6ba17087872de21708e4087c86e92125885155c397d03bed2863bb52ed,PodSandboxId:c308ac1286c4c4d42beec7706420c33bee377b409e802753a17cd320bfcd7339,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726483790538342801,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-crttt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c8cad04-2c64-42f9-85e2-5e4fbfe7961d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a6f1aac71418bcb033aee0c5b867eeb6bf5323876a35c8db9ce1c1f13caf83e,PodSandboxId:2305599c1317db9a583219dd7c98f87d6b8e759777a72ad229b6e60545f20bb2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726483790688990513,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m8fd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc549709-ddc0-4684-b377-46d33ef8f03d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"
containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c692c6a18e99d2673e412ffd521c76f2a0da04c77d38fec5ed2cb8cb05c3e3f6,PodSandboxId:35ef4979f7d506a9e9a9e9502fd7f40f644938e6f49a81ab8ad566cd13238560,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726483790509242730,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcc439ebdfb1c8eb0
ac4d211479d24ca,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c0110ceab6a6b07ad42ced3733baf8bab2d1cafffcbb49d7f182984734ea704,PodSandboxId:bd9f73d3e8d5591b4d001266e01c13cbae68b0ecd62f9cea242eeff6615c6274,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726483790366687052,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caad457f3675fcf5fa9c2e121ebd3a2a,},Ann
otations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c701fcd74abafe9b3ac301067c34d7098459eff6a0dfbe35f14fb698beab51b,PodSandboxId:ed1838f7506b4591d4ce8db6e7b3e3f56562a08f00651212eea0bb2549e4392c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726483289055421398,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-d4m5s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c479ead-4e77-41ca-9e2e-5cd7dc781761,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:034030626ec0248410f35cc75f1a050ff672de87669017788a692eecfb974eb3,PodSandboxId:159730a21bea66d0374bcb65726757074bf5cb44a554b65743cbe055ce68f57d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726483151504308598,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m8fd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc549709-ddc0-4684-b377-46d33ef8f03d,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f78c5e4a3a25c53eb3051665aed0aed847ccf5bc052d51eb080f44bbb956465,PodSandboxId:4d8c4f0a29bb7d9b84f11c3329413ddb9100b0afcf2bfdafe7d492249e8d29aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726483151498590571,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lzrg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51962d07-f38a-4db3-86ee-af3d954dbec6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac63170bf5bb3e138ee2902ee9d3245bf86aad0f7cd971de1989c1c8d4b08913,PodSandboxId:9c8ab7a98f7497eec011967ba3c30c63e7f332a19b817633e32195e3f1c7958b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726483138080721834,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7v2cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 764ade4d-cbcd-42b8-9d68-b4ed502de9eb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e6d69b26d5c1088c047248a212bd53f5e13828d4e0a7c2156e0ddc88ac76ccf,PodSandboxId:3fbb7c8e9af7172c733fa210b0085f7297ed8632dadffa7ad36fcf99012b9a72,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726483137842389587,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-crttt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c8cad04-2c64-42f9-85e2-5e4fbfe7961d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0223669288e248ff9be2473e59f09af3156f136f2627463151e0bc52191ddfb,PodSandboxId:42a76bc40dc3e2b105fd3eb70534ffb1703abed4e811e57f667c791cb8af94ce,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726483126505949950,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caad457f3675fcf5fa9c2e121ebd3a2a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:308650af833f609d658eacf1b2b1c8baa29f17b904e878b9f27e62fe4fb823d3,PodSandboxId:693cfec22177da2ee98748363dd5cc765290555de3c0bcc51d452517bd92abaf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726483126351029784,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 520edd0e46592c17928a302783a221a2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a0b3bfa0-a059-451b-b500-b19d93b9cfd3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:52:43 ha-244475 crio[3700]: time="2024-09-16 10:52:43.885081978Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9189f334-af45-4546-a153-524adc402764 name=/runtime.v1.RuntimeService/Version
	Sep 16 10:52:43 ha-244475 crio[3700]: time="2024-09-16 10:52:43.885161549Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9189f334-af45-4546-a153-524adc402764 name=/runtime.v1.RuntimeService/Version
	Sep 16 10:52:43 ha-244475 crio[3700]: time="2024-09-16 10:52:43.886949665Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1be07b43-bb11-45ae-a32e-c9dbc42d48cb name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:52:43 ha-244475 crio[3700]: time="2024-09-16 10:52:43.887664685Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483963887328514,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1be07b43-bb11-45ae-a32e-c9dbc42d48cb name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:52:43 ha-244475 crio[3700]: time="2024-09-16 10:52:43.888141787Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=943a21e4-0e8c-43e1-b1e0-be6eaaec61ed name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:52:43 ha-244475 crio[3700]: time="2024-09-16 10:52:43.888199327Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=943a21e4-0e8c-43e1-b1e0-be6eaaec61ed name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:52:43 ha-244475 crio[3700]: time="2024-09-16 10:52:43.889119820Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:392523616ed483d70489493ff810c7c00e2c3679d91ed33cace5c42354ac85af,PodSandboxId:0e78d323319d6c4d2718135672cccb5ed02b63104781d8d532f3c876fc39b96a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726483854603167187,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0485b752bb66b84c639fb8d5b648be4a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91491eba4d33b0a26fd738ad6f63f6deaf5b4c730f037eaf6dc4908905fde9f9,PodSandboxId:3e70bdcf959532b2a9ea007abd776b8630e6092c8c5ae86c83bdd614c0a55f3c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726483838590275528,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e1264f7-2197-4821-8238-82fac849b145,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39bee169a2affe6d509924bc94b0ee90196af6c589d0fe7eb3f18c48559c446d,PodSandboxId:35ef4979f7d506a9e9a9e9502fd7f40f644938e6f49a81ab8ad566cd13238560,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726483830592623652,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcc439ebdfb1c8eb0ac4d211479d24ca,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f00a03475d073179332fbe79f2ed10d286e0c6bedf1861a8230d9919cde4a27,PodSandboxId:1eaacb088bf941e279ad42af3f72b19bf2790643fcde4457d0b5ff745a4806e5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726483823856894157,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-d4m5s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c479ead-4e77-41ca-9e2e-5cd7dc781761,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7904b48af0d517bf275eac462d37e6f93a48771be3452312f485bacfe1b3d19,PodSandboxId:0e78d323319d6c4d2718135672cccb5ed02b63104781d8d532f3c876fc39b96a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726483823027066020,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0485b752bb66b84c639fb8d5b648be4a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io
.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eff3d4b6ef1bb809c70e9089cf36c79f9cbea99823ed2b5055995b0e0abd6b23,PodSandboxId:6203d6a2f83f4a08dbecdceab914bea8099f1c8fa17a9e052e272e18cde86fb1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726483805692389671,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d14d8f4abb76f867ab3a64246ef25cb,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1da90c534a1eee0538fdeec3079b247aafe60da09cbf760ae3433480a66cc95a,PodSandboxId:3e70bdcf959532b2a9ea007abd776b8630e6092c8c5ae86c83bdd614c0a55f3c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726483790620182717,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e1264f7-2197-4821-8238-82fac849b145,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6dd41088c822947b73a621121e25131324d1b60468f8da2736a5e5abd945b74f,PodSandboxId:9ec606e5b45f08386cc4365f26f74fdcf01f39702536a53491cddf4af70506d6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726483790773221140,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7v2cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 764ade4d-cbcd-42b8-9d68-b4ed502de9eb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrac
ePeriod: 30,},},&Container{Id:ba907061155c7d07bb2ece7dd40deebec6a7b385674f721aa5faa111ad73a577,PodSandboxId:8cacdb30939e87fda706d3726485b9fd18466765801925d01faf0c5093af92aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726483790780262687,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lzrg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51962d07-f38a-4db3-86ee-af3d954dbec6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io
.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:268d2527b9c98518a83f871d1bd34bfb7b48d7a8bc4732dac191726f6e049791,PodSandboxId:194b56870a94accad5384a0c2f540bdec2d56f4cb27e5d45522ec884b22763ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726483790578380077,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 520edd0e46592c17928a302783a221a2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes
.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ef7bc6ba17087872de21708e4087c86e92125885155c397d03bed2863bb52ed,PodSandboxId:c308ac1286c4c4d42beec7706420c33bee377b409e802753a17cd320bfcd7339,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726483790538342801,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-crttt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c8cad04-2c64-42f9-85e2-5e4fbfe7961d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a6f1aac71418bcb033aee0c5b867eeb6bf5323876a35c8db9ce1c1f13caf83e,PodSandboxId:2305599c1317db9a583219dd7c98f87d6b8e759777a72ad229b6e60545f20bb2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726483790688990513,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m8fd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc549709-ddc0-4684-b377-46d33ef8f03d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"
containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c692c6a18e99d2673e412ffd521c76f2a0da04c77d38fec5ed2cb8cb05c3e3f6,PodSandboxId:35ef4979f7d506a9e9a9e9502fd7f40f644938e6f49a81ab8ad566cd13238560,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726483790509242730,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcc439ebdfb1c8eb0
ac4d211479d24ca,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c0110ceab6a6b07ad42ced3733baf8bab2d1cafffcbb49d7f182984734ea704,PodSandboxId:bd9f73d3e8d5591b4d001266e01c13cbae68b0ecd62f9cea242eeff6615c6274,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726483790366687052,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caad457f3675fcf5fa9c2e121ebd3a2a,},Ann
otations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c701fcd74abafe9b3ac301067c34d7098459eff6a0dfbe35f14fb698beab51b,PodSandboxId:ed1838f7506b4591d4ce8db6e7b3e3f56562a08f00651212eea0bb2549e4392c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726483289055421398,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-d4m5s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c479ead-4e77-41ca-9e2e-5cd7dc781761,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:034030626ec0248410f35cc75f1a050ff672de87669017788a692eecfb974eb3,PodSandboxId:159730a21bea66d0374bcb65726757074bf5cb44a554b65743cbe055ce68f57d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726483151504308598,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m8fd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc549709-ddc0-4684-b377-46d33ef8f03d,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f78c5e4a3a25c53eb3051665aed0aed847ccf5bc052d51eb080f44bbb956465,PodSandboxId:4d8c4f0a29bb7d9b84f11c3329413ddb9100b0afcf2bfdafe7d492249e8d29aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726483151498590571,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lzrg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51962d07-f38a-4db3-86ee-af3d954dbec6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac63170bf5bb3e138ee2902ee9d3245bf86aad0f7cd971de1989c1c8d4b08913,PodSandboxId:9c8ab7a98f7497eec011967ba3c30c63e7f332a19b817633e32195e3f1c7958b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726483138080721834,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7v2cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 764ade4d-cbcd-42b8-9d68-b4ed502de9eb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e6d69b26d5c1088c047248a212bd53f5e13828d4e0a7c2156e0ddc88ac76ccf,PodSandboxId:3fbb7c8e9af7172c733fa210b0085f7297ed8632dadffa7ad36fcf99012b9a72,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726483137842389587,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-crttt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c8cad04-2c64-42f9-85e2-5e4fbfe7961d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0223669288e248ff9be2473e59f09af3156f136f2627463151e0bc52191ddfb,PodSandboxId:42a76bc40dc3e2b105fd3eb70534ffb1703abed4e811e57f667c791cb8af94ce,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726483126505949950,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caad457f3675fcf5fa9c2e121ebd3a2a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:308650af833f609d658eacf1b2b1c8baa29f17b904e878b9f27e62fe4fb823d3,PodSandboxId:693cfec22177da2ee98748363dd5cc765290555de3c0bcc51d452517bd92abaf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726483126351029784,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 520edd0e46592c17928a302783a221a2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=943a21e4-0e8c-43e1-b1e0-be6eaaec61ed name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:52:43 ha-244475 crio[3700]: time="2024-09-16 10:52:43.932948961Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4b85550c-ed4a-431c-b386-0f6be1b2fd15 name=/runtime.v1.RuntimeService/Version
	Sep 16 10:52:43 ha-244475 crio[3700]: time="2024-09-16 10:52:43.933070884Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4b85550c-ed4a-431c-b386-0f6be1b2fd15 name=/runtime.v1.RuntimeService/Version
	Sep 16 10:52:43 ha-244475 crio[3700]: time="2024-09-16 10:52:43.934331571Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5f1cb272-4546-4291-806c-cfdb93f40fb8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:52:43 ha-244475 crio[3700]: time="2024-09-16 10:52:43.935104689Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483963935078042,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5f1cb272-4546-4291-806c-cfdb93f40fb8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:52:43 ha-244475 crio[3700]: time="2024-09-16 10:52:43.935618986Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9278e26e-a84b-4b4d-8117-89115497646a name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:52:43 ha-244475 crio[3700]: time="2024-09-16 10:52:43.935721450Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9278e26e-a84b-4b4d-8117-89115497646a name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:52:43 ha-244475 crio[3700]: time="2024-09-16 10:52:43.936214503Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:392523616ed483d70489493ff810c7c00e2c3679d91ed33cace5c42354ac85af,PodSandboxId:0e78d323319d6c4d2718135672cccb5ed02b63104781d8d532f3c876fc39b96a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726483854603167187,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0485b752bb66b84c639fb8d5b648be4a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91491eba4d33b0a26fd738ad6f63f6deaf5b4c730f037eaf6dc4908905fde9f9,PodSandboxId:3e70bdcf959532b2a9ea007abd776b8630e6092c8c5ae86c83bdd614c0a55f3c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726483838590275528,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e1264f7-2197-4821-8238-82fac849b145,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39bee169a2affe6d509924bc94b0ee90196af6c589d0fe7eb3f18c48559c446d,PodSandboxId:35ef4979f7d506a9e9a9e9502fd7f40f644938e6f49a81ab8ad566cd13238560,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726483830592623652,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcc439ebdfb1c8eb0ac4d211479d24ca,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f00a03475d073179332fbe79f2ed10d286e0c6bedf1861a8230d9919cde4a27,PodSandboxId:1eaacb088bf941e279ad42af3f72b19bf2790643fcde4457d0b5ff745a4806e5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726483823856894157,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-d4m5s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c479ead-4e77-41ca-9e2e-5cd7dc781761,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7904b48af0d517bf275eac462d37e6f93a48771be3452312f485bacfe1b3d19,PodSandboxId:0e78d323319d6c4d2718135672cccb5ed02b63104781d8d532f3c876fc39b96a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726483823027066020,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0485b752bb66b84c639fb8d5b648be4a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io
.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eff3d4b6ef1bb809c70e9089cf36c79f9cbea99823ed2b5055995b0e0abd6b23,PodSandboxId:6203d6a2f83f4a08dbecdceab914bea8099f1c8fa17a9e052e272e18cde86fb1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726483805692389671,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d14d8f4abb76f867ab3a64246ef25cb,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1da90c534a1eee0538fdeec3079b247aafe60da09cbf760ae3433480a66cc95a,PodSandboxId:3e70bdcf959532b2a9ea007abd776b8630e6092c8c5ae86c83bdd614c0a55f3c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726483790620182717,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e1264f7-2197-4821-8238-82fac849b145,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6dd41088c822947b73a621121e25131324d1b60468f8da2736a5e5abd945b74f,PodSandboxId:9ec606e5b45f08386cc4365f26f74fdcf01f39702536a53491cddf4af70506d6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726483790773221140,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7v2cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 764ade4d-cbcd-42b8-9d68-b4ed502de9eb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrac
ePeriod: 30,},},&Container{Id:ba907061155c7d07bb2ece7dd40deebec6a7b385674f721aa5faa111ad73a577,PodSandboxId:8cacdb30939e87fda706d3726485b9fd18466765801925d01faf0c5093af92aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726483790780262687,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lzrg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51962d07-f38a-4db3-86ee-af3d954dbec6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io
.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:268d2527b9c98518a83f871d1bd34bfb7b48d7a8bc4732dac191726f6e049791,PodSandboxId:194b56870a94accad5384a0c2f540bdec2d56f4cb27e5d45522ec884b22763ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726483790578380077,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 520edd0e46592c17928a302783a221a2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes
.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ef7bc6ba17087872de21708e4087c86e92125885155c397d03bed2863bb52ed,PodSandboxId:c308ac1286c4c4d42beec7706420c33bee377b409e802753a17cd320bfcd7339,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726483790538342801,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-crttt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c8cad04-2c64-42f9-85e2-5e4fbfe7961d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a6f1aac71418bcb033aee0c5b867eeb6bf5323876a35c8db9ce1c1f13caf83e,PodSandboxId:2305599c1317db9a583219dd7c98f87d6b8e759777a72ad229b6e60545f20bb2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726483790688990513,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m8fd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc549709-ddc0-4684-b377-46d33ef8f03d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"
containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c692c6a18e99d2673e412ffd521c76f2a0da04c77d38fec5ed2cb8cb05c3e3f6,PodSandboxId:35ef4979f7d506a9e9a9e9502fd7f40f644938e6f49a81ab8ad566cd13238560,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726483790509242730,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcc439ebdfb1c8eb0
ac4d211479d24ca,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c0110ceab6a6b07ad42ced3733baf8bab2d1cafffcbb49d7f182984734ea704,PodSandboxId:bd9f73d3e8d5591b4d001266e01c13cbae68b0ecd62f9cea242eeff6615c6274,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726483790366687052,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caad457f3675fcf5fa9c2e121ebd3a2a,},Ann
otations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c701fcd74abafe9b3ac301067c34d7098459eff6a0dfbe35f14fb698beab51b,PodSandboxId:ed1838f7506b4591d4ce8db6e7b3e3f56562a08f00651212eea0bb2549e4392c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726483289055421398,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-d4m5s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c479ead-4e77-41ca-9e2e-5cd7dc781761,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:034030626ec0248410f35cc75f1a050ff672de87669017788a692eecfb974eb3,PodSandboxId:159730a21bea66d0374bcb65726757074bf5cb44a554b65743cbe055ce68f57d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726483151504308598,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m8fd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc549709-ddc0-4684-b377-46d33ef8f03d,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f78c5e4a3a25c53eb3051665aed0aed847ccf5bc052d51eb080f44bbb956465,PodSandboxId:4d8c4f0a29bb7d9b84f11c3329413ddb9100b0afcf2bfdafe7d492249e8d29aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726483151498590571,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lzrg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51962d07-f38a-4db3-86ee-af3d954dbec6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac63170bf5bb3e138ee2902ee9d3245bf86aad0f7cd971de1989c1c8d4b08913,PodSandboxId:9c8ab7a98f7497eec011967ba3c30c63e7f332a19b817633e32195e3f1c7958b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726483138080721834,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7v2cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 764ade4d-cbcd-42b8-9d68-b4ed502de9eb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e6d69b26d5c1088c047248a212bd53f5e13828d4e0a7c2156e0ddc88ac76ccf,PodSandboxId:3fbb7c8e9af7172c733fa210b0085f7297ed8632dadffa7ad36fcf99012b9a72,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726483137842389587,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-crttt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c8cad04-2c64-42f9-85e2-5e4fbfe7961d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0223669288e248ff9be2473e59f09af3156f136f2627463151e0bc52191ddfb,PodSandboxId:42a76bc40dc3e2b105fd3eb70534ffb1703abed4e811e57f667c791cb8af94ce,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726483126505949950,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caad457f3675fcf5fa9c2e121ebd3a2a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:308650af833f609d658eacf1b2b1c8baa29f17b904e878b9f27e62fe4fb823d3,PodSandboxId:693cfec22177da2ee98748363dd5cc765290555de3c0bcc51d452517bd92abaf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726483126351029784,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 520edd0e46592c17928a302783a221a2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9278e26e-a84b-4b4d-8117-89115497646a name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:52:44 ha-244475 crio[3700]: time="2024-09-16 10:52:44.014623616Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=18e6fec8-bfe0-4c59-834e-2fdc3cd76408 name=/runtime.v1.RuntimeService/Version
	Sep 16 10:52:44 ha-244475 crio[3700]: time="2024-09-16 10:52:44.014741854Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=18e6fec8-bfe0-4c59-834e-2fdc3cd76408 name=/runtime.v1.RuntimeService/Version
	Sep 16 10:52:44 ha-244475 crio[3700]: time="2024-09-16 10:52:44.020321170Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c3a80feb-33f5-469a-be26-d623ba5471c0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:52:44 ha-244475 crio[3700]: time="2024-09-16 10:52:44.021086904Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483964021059948,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c3a80feb-33f5-469a-be26-d623ba5471c0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:52:44 ha-244475 crio[3700]: time="2024-09-16 10:52:44.022605502Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6432cbef-896c-4c5d-b2c9-e320b1302bf5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:52:44 ha-244475 crio[3700]: time="2024-09-16 10:52:44.022683871Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6432cbef-896c-4c5d-b2c9-e320b1302bf5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:52:44 ha-244475 crio[3700]: time="2024-09-16 10:52:44.023149869Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:392523616ed483d70489493ff810c7c00e2c3679d91ed33cace5c42354ac85af,PodSandboxId:0e78d323319d6c4d2718135672cccb5ed02b63104781d8d532f3c876fc39b96a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726483854603167187,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0485b752bb66b84c639fb8d5b648be4a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91491eba4d33b0a26fd738ad6f63f6deaf5b4c730f037eaf6dc4908905fde9f9,PodSandboxId:3e70bdcf959532b2a9ea007abd776b8630e6092c8c5ae86c83bdd614c0a55f3c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726483838590275528,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e1264f7-2197-4821-8238-82fac849b145,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39bee169a2affe6d509924bc94b0ee90196af6c589d0fe7eb3f18c48559c446d,PodSandboxId:35ef4979f7d506a9e9a9e9502fd7f40f644938e6f49a81ab8ad566cd13238560,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726483830592623652,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcc439ebdfb1c8eb0ac4d211479d24ca,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f00a03475d073179332fbe79f2ed10d286e0c6bedf1861a8230d9919cde4a27,PodSandboxId:1eaacb088bf941e279ad42af3f72b19bf2790643fcde4457d0b5ff745a4806e5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726483823856894157,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-d4m5s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c479ead-4e77-41ca-9e2e-5cd7dc781761,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7904b48af0d517bf275eac462d37e6f93a48771be3452312f485bacfe1b3d19,PodSandboxId:0e78d323319d6c4d2718135672cccb5ed02b63104781d8d532f3c876fc39b96a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726483823027066020,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0485b752bb66b84c639fb8d5b648be4a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io
.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eff3d4b6ef1bb809c70e9089cf36c79f9cbea99823ed2b5055995b0e0abd6b23,PodSandboxId:6203d6a2f83f4a08dbecdceab914bea8099f1c8fa17a9e052e272e18cde86fb1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726483805692389671,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d14d8f4abb76f867ab3a64246ef25cb,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1da90c534a1eee0538fdeec3079b247aafe60da09cbf760ae3433480a66cc95a,PodSandboxId:3e70bdcf959532b2a9ea007abd776b8630e6092c8c5ae86c83bdd614c0a55f3c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726483790620182717,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e1264f7-2197-4821-8238-82fac849b145,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6dd41088c822947b73a621121e25131324d1b60468f8da2736a5e5abd945b74f,PodSandboxId:9ec606e5b45f08386cc4365f26f74fdcf01f39702536a53491cddf4af70506d6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726483790773221140,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7v2cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 764ade4d-cbcd-42b8-9d68-b4ed502de9eb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrac
ePeriod: 30,},},&Container{Id:ba907061155c7d07bb2ece7dd40deebec6a7b385674f721aa5faa111ad73a577,PodSandboxId:8cacdb30939e87fda706d3726485b9fd18466765801925d01faf0c5093af92aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726483790780262687,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lzrg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51962d07-f38a-4db3-86ee-af3d954dbec6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io
.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:268d2527b9c98518a83f871d1bd34bfb7b48d7a8bc4732dac191726f6e049791,PodSandboxId:194b56870a94accad5384a0c2f540bdec2d56f4cb27e5d45522ec884b22763ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726483790578380077,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 520edd0e46592c17928a302783a221a2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes
.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ef7bc6ba17087872de21708e4087c86e92125885155c397d03bed2863bb52ed,PodSandboxId:c308ac1286c4c4d42beec7706420c33bee377b409e802753a17cd320bfcd7339,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726483790538342801,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-crttt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c8cad04-2c64-42f9-85e2-5e4fbfe7961d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a6f1aac71418bcb033aee0c5b867eeb6bf5323876a35c8db9ce1c1f13caf83e,PodSandboxId:2305599c1317db9a583219dd7c98f87d6b8e759777a72ad229b6e60545f20bb2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726483790688990513,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m8fd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc549709-ddc0-4684-b377-46d33ef8f03d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"
containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c692c6a18e99d2673e412ffd521c76f2a0da04c77d38fec5ed2cb8cb05c3e3f6,PodSandboxId:35ef4979f7d506a9e9a9e9502fd7f40f644938e6f49a81ab8ad566cd13238560,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726483790509242730,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcc439ebdfb1c8eb0
ac4d211479d24ca,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c0110ceab6a6b07ad42ced3733baf8bab2d1cafffcbb49d7f182984734ea704,PodSandboxId:bd9f73d3e8d5591b4d001266e01c13cbae68b0ecd62f9cea242eeff6615c6274,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726483790366687052,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caad457f3675fcf5fa9c2e121ebd3a2a,},Ann
otations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c701fcd74abafe9b3ac301067c34d7098459eff6a0dfbe35f14fb698beab51b,PodSandboxId:ed1838f7506b4591d4ce8db6e7b3e3f56562a08f00651212eea0bb2549e4392c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726483289055421398,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-d4m5s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c479ead-4e77-41ca-9e2e-5cd7dc781761,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:034030626ec0248410f35cc75f1a050ff672de87669017788a692eecfb974eb3,PodSandboxId:159730a21bea66d0374bcb65726757074bf5cb44a554b65743cbe055ce68f57d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726483151504308598,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m8fd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc549709-ddc0-4684-b377-46d33ef8f03d,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f78c5e4a3a25c53eb3051665aed0aed847ccf5bc052d51eb080f44bbb956465,PodSandboxId:4d8c4f0a29bb7d9b84f11c3329413ddb9100b0afcf2bfdafe7d492249e8d29aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726483151498590571,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lzrg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51962d07-f38a-4db3-86ee-af3d954dbec6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac63170bf5bb3e138ee2902ee9d3245bf86aad0f7cd971de1989c1c8d4b08913,PodSandboxId:9c8ab7a98f7497eec011967ba3c30c63e7f332a19b817633e32195e3f1c7958b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726483138080721834,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7v2cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 764ade4d-cbcd-42b8-9d68-b4ed502de9eb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e6d69b26d5c1088c047248a212bd53f5e13828d4e0a7c2156e0ddc88ac76ccf,PodSandboxId:3fbb7c8e9af7172c733fa210b0085f7297ed8632dadffa7ad36fcf99012b9a72,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726483137842389587,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-crttt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c8cad04-2c64-42f9-85e2-5e4fbfe7961d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0223669288e248ff9be2473e59f09af3156f136f2627463151e0bc52191ddfb,PodSandboxId:42a76bc40dc3e2b105fd3eb70534ffb1703abed4e811e57f667c791cb8af94ce,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726483126505949950,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caad457f3675fcf5fa9c2e121ebd3a2a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:308650af833f609d658eacf1b2b1c8baa29f17b904e878b9f27e62fe4fb823d3,PodSandboxId:693cfec22177da2ee98748363dd5cc765290555de3c0bcc51d452517bd92abaf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726483126351029784,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 520edd0e46592c17928a302783a221a2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6432cbef-896c-4c5d-b2c9-e320b1302bf5 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	392523616ed48       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      About a minute ago   Running             kube-controller-manager   3                   0e78d323319d6       kube-controller-manager-ha-244475
	91491eba4d33b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Running             storage-provisioner       3                   3e70bdcf95953       storage-provisioner
	39bee169a2aff       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      2 minutes ago        Running             kube-apiserver            3                   35ef4979f7d50       kube-apiserver-ha-244475
	2f00a03475d07       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      2 minutes ago        Running             busybox                   1                   1eaacb088bf94       busybox-7dff88458-d4m5s
	c7904b48af0d5       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      2 minutes ago        Exited              kube-controller-manager   2                   0e78d323319d6       kube-controller-manager-ha-244475
	eff3d4b6ef1bb       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  0                   6203d6a2f83f4       kube-vip-ha-244475
	ba907061155c7       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      2 minutes ago        Running             coredns                   1                   8cacdb30939e8       coredns-7c65d6cfc9-lzrg2
	6dd41088c8229       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      2 minutes ago        Running             kindnet-cni               1                   9ec606e5b45f0       kindnet-7v2cl
	3a6f1aac71418       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      2 minutes ago        Running             coredns                   1                   2305599c1317d       coredns-7c65d6cfc9-m8fd7
	1da90c534a1ee       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Exited              storage-provisioner       2                   3e70bdcf95953       storage-provisioner
	268d2527b9c98       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      2 minutes ago        Running             etcd                      1                   194b56870a94a       etcd-ha-244475
	2ef7bc6ba1708       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      2 minutes ago        Running             kube-proxy                1                   c308ac1286c4c       kube-proxy-crttt
	c692c6a18e99d       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      2 minutes ago        Exited              kube-apiserver            2                   35ef4979f7d50       kube-apiserver-ha-244475
	6c0110ceab6a6       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      2 minutes ago        Running             kube-scheduler            1                   bd9f73d3e8d55       kube-scheduler-ha-244475
	5c701fcd74aba       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   11 minutes ago       Exited              busybox                   0                   ed1838f7506b4       busybox-7dff88458-d4m5s
	034030626ec02       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      13 minutes ago       Exited              coredns                   0                   159730a21bea6       coredns-7c65d6cfc9-m8fd7
	7f78c5e4a3a25       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      13 minutes ago       Exited              coredns                   0                   4d8c4f0a29bb7       coredns-7c65d6cfc9-lzrg2
	ac63170bf5bb3       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      13 minutes ago       Exited              kindnet-cni               0                   9c8ab7a98f749       kindnet-7v2cl
	6e6d69b26d5c1       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      13 minutes ago       Exited              kube-proxy                0                   3fbb7c8e9af71       kube-proxy-crttt
	a0223669288e2       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      13 minutes ago       Exited              kube-scheduler            0                   42a76bc40dc3e       kube-scheduler-ha-244475
	308650af833f6       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago       Exited              etcd                      0                   693cfec22177d       etcd-ha-244475
	
	
	==> coredns [034030626ec0248410f35cc75f1a050ff672de87669017788a692eecfb974eb3] <==
	[INFO] 10.244.2.2:42931 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000200783s
	[INFO] 10.244.0.4:33694 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014309s
	[INFO] 10.244.0.4:35532 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000107639s
	[INFO] 10.244.0.4:53168 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00009525s
	[INFO] 10.244.0.4:50253 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001250965s
	[INFO] 10.244.0.4:40357 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000089492s
	[INFO] 10.244.1.2:49152 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001985919s
	[INFO] 10.244.1.2:50396 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000132748s
	[INFO] 10.244.2.2:38313 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000951s
	[INFO] 10.244.0.4:43336 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000168268s
	[INFO] 10.244.0.4:44949 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000123895s
	[INFO] 10.244.0.4:52348 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000107748s
	[INFO] 10.244.1.2:36649 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000286063s
	[INFO] 10.244.1.2:42747 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000082265s
	[INFO] 10.244.2.2:45891 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00018425s
	[INFO] 10.244.2.2:53625 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000126302s
	[INFO] 10.244.2.2:44397 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000109098s
	[INFO] 10.244.0.4:39956 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013935s
	[INFO] 10.244.0.4:39139 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00008694s
	[INFO] 10.244.0.4:38933 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000060589s
	[INFO] 10.244.1.2:36849 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000146451s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services)
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [3a6f1aac71418bcb033aee0c5b867eeb6bf5323876a35c8db9ce1c1f13caf83e] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:48952->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:48952->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [7f78c5e4a3a25c53eb3051665aed0aed847ccf5bc052d51eb080f44bbb956465] <==
	[INFO] 10.244.2.2:52615 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000191836s
	[INFO] 10.244.2.2:49834 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000166519s
	[INFO] 10.244.2.2:39495 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000127494s
	[INFO] 10.244.0.4:37394 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001694487s
	[INFO] 10.244.0.4:36178 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000091958s
	[INFO] 10.244.0.4:33247 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000160731s
	[INFO] 10.244.1.2:52512 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150889s
	[INFO] 10.244.1.2:43450 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000182534s
	[INFO] 10.244.1.2:56403 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000150359s
	[INFO] 10.244.1.2:51246 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001230547s
	[INFO] 10.244.1.2:39220 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090721s
	[INFO] 10.244.1.2:41766 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000155057s
	[INFO] 10.244.2.2:38017 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000153103s
	[INFO] 10.244.2.2:44469 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000099361s
	[INFO] 10.244.2.2:52465 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000086382s
	[INFO] 10.244.0.4:36474 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000117775s
	[INFO] 10.244.1.2:32790 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142151s
	[INFO] 10.244.1.2:39272 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000113629s
	[INFO] 10.244.2.2:43223 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000141566s
	[INFO] 10.244.0.4:36502 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000282073s
	[INFO] 10.244.1.2:60302 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000207499s
	[INFO] 10.244.1.2:49950 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000184993s
	[INFO] 10.244.1.2:54052 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000094371s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [ba907061155c7d07bb2ece7dd40deebec6a7b385674f721aa5faa111ad73a577] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:57916->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:57916->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:34986->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:34986->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-244475
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-244475
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=ha-244475
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_38_53_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:38:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-244475
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:52:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:50:43 +0000   Mon, 16 Sep 2024 10:38:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:50:43 +0000   Mon, 16 Sep 2024 10:38:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:50:43 +0000   Mon, 16 Sep 2024 10:38:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:50:43 +0000   Mon, 16 Sep 2024 10:39:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.19
	  Hostname:    ha-244475
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8707c2bcd2ba47818dfac2382d400cf1
	  System UUID:                8707c2bc-d2ba-4781-8dfa-c2382d400cf1
	  Boot ID:                    174ade31-14cd-4b32-9050-92f81ba6b3e1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-d4m5s              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7c65d6cfc9-lzrg2             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-7c65d6cfc9-m8fd7             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-244475                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-7v2cl                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-244475             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-244475    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-crttt                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-244475             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-244475                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                   From             Message
	  ----     ------                   ----                  ----             -------
	  Normal   Starting                 2m11s                 kube-proxy       
	  Normal   Starting                 13m                   kube-proxy       
	  Normal   NodeHasSufficientMemory  13m (x8 over 13m)     kubelet          Node ha-244475 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  13m                   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     13m (x7 over 13m)     kubelet          Node ha-244475 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    13m (x8 over 13m)     kubelet          Node ha-244475 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 13m                   kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     13m                   kubelet          Node ha-244475 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    13m                   kubelet          Node ha-244475 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  13m                   kubelet          Node ha-244475 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  13m                   kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 13m                   kubelet          Starting kubelet.
	  Normal   RegisteredNode           13m                   node-controller  Node ha-244475 event: Registered Node ha-244475 in Controller
	  Normal   NodeReady                13m                   kubelet          Node ha-244475 status is now: NodeReady
	  Normal   RegisteredNode           12m                   node-controller  Node ha-244475 event: Registered Node ha-244475 in Controller
	  Normal   RegisteredNode           11m                   node-controller  Node ha-244475 event: Registered Node ha-244475 in Controller
	  Warning  ContainerGCFailed        3m52s                 kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             3m15s (x3 over 4m4s)  kubelet          Node ha-244475 status is now: NodeNotReady
	  Normal   RegisteredNode           2m12s                 node-controller  Node ha-244475 event: Registered Node ha-244475 in Controller
	  Normal   RegisteredNode           108s                  node-controller  Node ha-244475 event: Registered Node ha-244475 in Controller
	  Normal   RegisteredNode           39s                   node-controller  Node ha-244475 event: Registered Node ha-244475 in Controller
	
	
	Name:               ha-244475-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-244475-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=ha-244475
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T10_39_47_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:39:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-244475-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:52:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:51:17 +0000   Mon, 16 Sep 2024 10:50:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:51:17 +0000   Mon, 16 Sep 2024 10:50:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:51:17 +0000   Mon, 16 Sep 2024 10:50:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:51:17 +0000   Mon, 16 Sep 2024 10:50:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.222
	  Hostname:    ha-244475-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 bfb45c96351d4aafade2443c380b5343
	  System UUID:                bfb45c96-351d-4aaf-ade2-443c380b5343
	  Boot ID:                    d493ff2b-8d16-4f12-976a-cc277283240e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-t6fmb                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-244475-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-xvp82                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-244475-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-244475-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-t454b                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-244475-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-244475-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 95s                    kube-proxy       
	  Normal  Starting                 12m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)      kubelet          Node ha-244475-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)      kubelet          Node ha-244475-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)      kubelet          Node ha-244475-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                    node-controller  Node ha-244475-m02 event: Registered Node ha-244475-m02 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-244475-m02 event: Registered Node ha-244475-m02 in Controller
	  Normal  RegisteredNode           11m                    node-controller  Node ha-244475-m02 event: Registered Node ha-244475-m02 in Controller
	  Normal  NodeNotReady             9m23s                  node-controller  Node ha-244475-m02 status is now: NodeNotReady
	  Normal  Starting                 2m36s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m36s (x8 over 2m36s)  kubelet          Node ha-244475-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m36s (x8 over 2m36s)  kubelet          Node ha-244475-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m36s (x7 over 2m36s)  kubelet          Node ha-244475-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m36s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m12s                  node-controller  Node ha-244475-m02 event: Registered Node ha-244475-m02 in Controller
	  Normal  RegisteredNode           108s                   node-controller  Node ha-244475-m02 event: Registered Node ha-244475-m02 in Controller
	  Normal  RegisteredNode           39s                    node-controller  Node ha-244475-m02 event: Registered Node ha-244475-m02 in Controller
	
	
	Name:               ha-244475-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-244475-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=ha-244475
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T10_41_02_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:40:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-244475-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:52:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:52:17 +0000   Mon, 16 Sep 2024 10:51:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:52:17 +0000   Mon, 16 Sep 2024 10:51:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:52:17 +0000   Mon, 16 Sep 2024 10:51:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:52:17 +0000   Mon, 16 Sep 2024 10:51:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.127
	  Hostname:    ha-244475-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d01912e060494092a8b6a2df64a0a30c
	  System UUID:                d01912e0-6049-4092-a8b6-a2df64a0a30c
	  Boot ID:                    afeeae58-74c1-4457-8732-be1f3382e3c5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-7bhqg                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-244475-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-rzwwj                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-244475-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-244475-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-g5v5l                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-244475-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-244475-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 42s                kube-proxy       
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-244475-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-244475-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-244475-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                node-controller  Node ha-244475-m03 event: Registered Node ha-244475-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-244475-m03 event: Registered Node ha-244475-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-244475-m03 event: Registered Node ha-244475-m03 in Controller
	  Normal   RegisteredNode           2m12s              node-controller  Node ha-244475-m03 event: Registered Node ha-244475-m03 in Controller
	  Normal   RegisteredNode           108s               node-controller  Node ha-244475-m03 event: Registered Node ha-244475-m03 in Controller
	  Normal   NodeNotReady             92s                node-controller  Node ha-244475-m03 status is now: NodeNotReady
	  Normal   Starting                 59s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  58s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 58s                kubelet          Node ha-244475-m03 has been rebooted, boot id: afeeae58-74c1-4457-8732-be1f3382e3c5
	  Normal   NodeHasSufficientMemory  58s (x2 over 58s)  kubelet          Node ha-244475-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    58s (x2 over 58s)  kubelet          Node ha-244475-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     58s (x2 over 58s)  kubelet          Node ha-244475-m03 status is now: NodeHasSufficientPID
	  Normal   NodeReady                58s                kubelet          Node ha-244475-m03 status is now: NodeReady
	  Normal   RegisteredNode           39s                node-controller  Node ha-244475-m03 event: Registered Node ha-244475-m03 in Controller
	
	
	Name:               ha-244475-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-244475-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=ha-244475
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T10_42_00_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:41:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-244475-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:52:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:52:36 +0000   Mon, 16 Sep 2024 10:52:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:52:36 +0000   Mon, 16 Sep 2024 10:52:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:52:36 +0000   Mon, 16 Sep 2024 10:52:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:52:36 +0000   Mon, 16 Sep 2024 10:52:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.110
	  Hostname:    ha-244475-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 42083a2d4bb24e16b292c8834cbe5824
	  System UUID:                42083a2d-4bb2-4e16-b292-c8834cbe5824
	  Boot ID:                    17ea4c88-a812-44b1-a1ac-94e19366fcfe
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-dflt4       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-kp7hv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-244475-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-244475-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-244475-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-244475-m04 event: Registered Node ha-244475-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-244475-m04 event: Registered Node ha-244475-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-244475-m04 event: Registered Node ha-244475-m04 in Controller
	  Normal   NodeReady                10m                kubelet          Node ha-244475-m04 status is now: NodeReady
	  Normal   RegisteredNode           2m12s              node-controller  Node ha-244475-m04 event: Registered Node ha-244475-m04 in Controller
	  Normal   RegisteredNode           108s               node-controller  Node ha-244475-m04 event: Registered Node ha-244475-m04 in Controller
	  Normal   NodeNotReady             92s                node-controller  Node ha-244475-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           39s                node-controller  Node ha-244475-m04 event: Registered Node ha-244475-m04 in Controller
	  Normal   Starting                 8s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  8s                 kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 8s                 kubelet          Node ha-244475-m04 has been rebooted, boot id: 17ea4c88-a812-44b1-a1ac-94e19366fcfe
	  Normal   NodeHasSufficientMemory  8s (x2 over 8s)    kubelet          Node ha-244475-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s (x2 over 8s)    kubelet          Node ha-244475-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s (x2 over 8s)    kubelet          Node ha-244475-m04 status is now: NodeHasSufficientPID
	  Normal   NodeReady                8s                 kubelet          Node ha-244475-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +9.139824] systemd-fstab-generator[589]: Ignoring "noauto" option for root device
	[  +0.054792] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058211] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.173707] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +0.144769] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +0.277555] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +3.915448] systemd-fstab-generator[751]: Ignoring "noauto" option for root device
	[  +3.568561] systemd-fstab-generator[882]: Ignoring "noauto" option for root device
	[  +0.067639] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.970048] systemd-fstab-generator[1302]: Ignoring "noauto" option for root device
	[  +0.087420] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.371465] kauditd_printk_skb: 21 callbacks suppressed
	[Sep16 10:39] kauditd_printk_skb: 38 callbacks suppressed
	[ +41.620280] kauditd_printk_skb: 28 callbacks suppressed
	[Sep16 10:49] systemd-fstab-generator[3624]: Ignoring "noauto" option for root device
	[  +0.157093] systemd-fstab-generator[3636]: Ignoring "noauto" option for root device
	[  +0.177936] systemd-fstab-generator[3650]: Ignoring "noauto" option for root device
	[  +0.142086] systemd-fstab-generator[3662]: Ignoring "noauto" option for root device
	[  +0.308892] systemd-fstab-generator[3690]: Ignoring "noauto" option for root device
	[  +5.722075] systemd-fstab-generator[3786]: Ignoring "noauto" option for root device
	[  +0.089630] kauditd_printk_skb: 100 callbacks suppressed
	[  +6.518650] kauditd_printk_skb: 12 callbacks suppressed
	[Sep16 10:50] kauditd_printk_skb: 85 callbacks suppressed
	[  +6.619080] kauditd_printk_skb: 1 callbacks suppressed
	[ +18.373360] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [268d2527b9c98518a83f871d1bd34bfb7b48d7a8bc4732dac191726f6e049791] <==
	{"level":"warn","ts":"2024-09-16T10:51:41.636338Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"e16a89b9eb3a3bb1","error":"Get \"https://192.168.39.127:2380/version\": dial tcp 192.168.39.127:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-16T10:51:45.638162Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.127:2380/version","remote-member-id":"e16a89b9eb3a3bb1","error":"Get \"https://192.168.39.127:2380/version\": dial tcp 192.168.39.127:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-16T10:51:45.638339Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"e16a89b9eb3a3bb1","error":"Get \"https://192.168.39.127:2380/version\": dial tcp 192.168.39.127:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-16T10:51:46.613580Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"e16a89b9eb3a3bb1","rtt":"0s","error":"dial tcp 192.168.39.127:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-16T10:51:46.613728Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"e16a89b9eb3a3bb1","rtt":"0s","error":"dial tcp 192.168.39.127:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-16T10:51:49.315774Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"122.307484ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3540834236917226590 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/ha-244475\" mod_revision:2402 > success:<request_put:<key:\"/registry/leases/kube-node-lease/ha-244475\" value_size:474 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/ha-244475\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-09-16T10:51:49.316321Z","caller":"traceutil/trace.go:171","msg":"trace[2076708770] transaction","detail":"{read_only:false; response_revision:2456; number_of_response:1; }","duration":"159.801791ms","start":"2024-09-16T10:51:49.156481Z","end":"2024-09-16T10:51:49.316283Z","steps":["trace[2076708770] 'process raft request'  (duration: 36.236805ms)","trace[2076708770] 'compare'  (duration: 122.165617ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-16T10:51:49.641314Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.127:2380/version","remote-member-id":"e16a89b9eb3a3bb1","error":"Get \"https://192.168.39.127:2380/version\": dial tcp 192.168.39.127:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-16T10:51:49.641447Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"e16a89b9eb3a3bb1","error":"Get \"https://192.168.39.127:2380/version\": dial tcp 192.168.39.127:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-16T10:51:51.614336Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"e16a89b9eb3a3bb1","rtt":"0s","error":"dial tcp 192.168.39.127:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-16T10:51:51.614434Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"e16a89b9eb3a3bb1","rtt":"0s","error":"dial tcp 192.168.39.127:2380: connect: connection refused"}
	{"level":"info","ts":"2024-09-16T10:51:53.341553Z","caller":"traceutil/trace.go:171","msg":"trace[342019392] transaction","detail":"{read_only:false; response_revision:2472; number_of_response:1; }","duration":"127.752922ms","start":"2024-09-16T10:51:53.213723Z","end":"2024-09-16T10:51:53.341476Z","steps":["trace[342019392] 'process raft request'  (duration: 127.602714ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:51:53.643843Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.127:2380/version","remote-member-id":"e16a89b9eb3a3bb1","error":"Get \"https://192.168.39.127:2380/version\": dial tcp 192.168.39.127:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-16T10:51:53.643990Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"e16a89b9eb3a3bb1","error":"Get \"https://192.168.39.127:2380/version\": dial tcp 192.168.39.127:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-16T10:51:56.614542Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"e16a89b9eb3a3bb1","rtt":"0s","error":"dial tcp 192.168.39.127:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-16T10:51:56.614619Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"e16a89b9eb3a3bb1","rtt":"0s","error":"dial tcp 192.168.39.127:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-16T10:51:56.892143Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"f0e3021c7d1d789a","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"130.389066ms"}
	{"level":"warn","ts":"2024-09-16T10:51:56.892275Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"e16a89b9eb3a3bb1","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"130.527225ms"}
	{"level":"info","ts":"2024-09-16T10:51:57.355255Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"e16a89b9eb3a3bb1"}
	{"level":"info","ts":"2024-09-16T10:51:57.355389Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"683e1d26ac7e3123","remote-peer-id":"e16a89b9eb3a3bb1"}
	{"level":"info","ts":"2024-09-16T10:51:57.355480Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"683e1d26ac7e3123","remote-peer-id":"e16a89b9eb3a3bb1"}
	{"level":"info","ts":"2024-09-16T10:51:57.374066Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"683e1d26ac7e3123","to":"e16a89b9eb3a3bb1","stream-type":"stream Message"}
	{"level":"info","ts":"2024-09-16T10:51:57.374564Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"683e1d26ac7e3123","remote-peer-id":"e16a89b9eb3a3bb1"}
	{"level":"info","ts":"2024-09-16T10:51:57.374470Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"683e1d26ac7e3123","to":"e16a89b9eb3a3bb1","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-09-16T10:51:57.374795Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"683e1d26ac7e3123","remote-peer-id":"e16a89b9eb3a3bb1"}
	
	
	==> etcd [308650af833f609d658eacf1b2b1c8baa29f17b904e878b9f27e62fe4fb823d3] <==
	2024/09/16 10:48:05 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/09/16 10:48:05 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-16T10:48:05.622322Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.19:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:48:05.622416Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.19:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-16T10:48:05.622669Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"683e1d26ac7e3123","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-09-16T10:48:05.622908Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"f0e3021c7d1d789a"}
	{"level":"info","ts":"2024-09-16T10:48:05.622994Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"f0e3021c7d1d789a"}
	{"level":"info","ts":"2024-09-16T10:48:05.623020Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"f0e3021c7d1d789a"}
	{"level":"info","ts":"2024-09-16T10:48:05.623337Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"683e1d26ac7e3123","remote-peer-id":"f0e3021c7d1d789a"}
	{"level":"info","ts":"2024-09-16T10:48:05.623408Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"683e1d26ac7e3123","remote-peer-id":"f0e3021c7d1d789a"}
	{"level":"info","ts":"2024-09-16T10:48:05.623563Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"683e1d26ac7e3123","remote-peer-id":"f0e3021c7d1d789a"}
	{"level":"info","ts":"2024-09-16T10:48:05.623672Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"f0e3021c7d1d789a"}
	{"level":"info","ts":"2024-09-16T10:48:05.623695Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"e16a89b9eb3a3bb1"}
	{"level":"info","ts":"2024-09-16T10:48:05.623706Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"e16a89b9eb3a3bb1"}
	{"level":"info","ts":"2024-09-16T10:48:05.623788Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"e16a89b9eb3a3bb1"}
	{"level":"info","ts":"2024-09-16T10:48:05.623941Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"683e1d26ac7e3123","remote-peer-id":"e16a89b9eb3a3bb1"}
	{"level":"info","ts":"2024-09-16T10:48:05.624005Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"683e1d26ac7e3123","remote-peer-id":"e16a89b9eb3a3bb1"}
	{"level":"info","ts":"2024-09-16T10:48:05.624132Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"683e1d26ac7e3123","remote-peer-id":"e16a89b9eb3a3bb1"}
	{"level":"info","ts":"2024-09-16T10:48:05.624228Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"e16a89b9eb3a3bb1"}
	{"level":"info","ts":"2024-09-16T10:48:05.627878Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.19:2380"}
	{"level":"warn","ts":"2024-09-16T10:48:05.627901Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"8.872293306s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-09-16T10:48:05.627999Z","caller":"traceutil/trace.go:171","msg":"trace[183528881] range","detail":"{range_begin:; range_end:; }","duration":"8.872408831s","start":"2024-09-16T10:47:56.755582Z","end":"2024-09-16T10:48:05.627991Z","steps":["trace[183528881] 'agreement among raft nodes before linearized reading'  (duration: 8.872291909s)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:48:05.628057Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.19:2380"}
	{"level":"info","ts":"2024-09-16T10:48:05.628086Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-244475","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.19:2380"],"advertise-client-urls":["https://192.168.39.19:2379"]}
	{"level":"error","ts":"2024-09-16T10:48:05.628066Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[-]linearizable_read failed: etcdserver: server stopped\n[+]data_corruption ok\n[+]serializable_read ok\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	
	
	==> kernel <==
	 10:52:44 up 14 min,  0 users,  load average: 0.31, 0.52, 0.32
	Linux ha-244475 5.10.207 #1 SMP Sun Sep 15 20:39:46 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [6dd41088c822947b73a621121e25131324d1b60468f8da2736a5e5abd945b74f] <==
	I0916 10:52:12.115718       1 main.go:322] Node ha-244475-m02 has CIDR [10.244.1.0/24] 
	I0916 10:52:22.116349       1 main.go:295] Handling node with IPs: map[192.168.39.19:{}]
	I0916 10:52:22.116392       1 main.go:299] handling current node
	I0916 10:52:22.116427       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0916 10:52:22.116436       1 main.go:322] Node ha-244475-m02 has CIDR [10.244.1.0/24] 
	I0916 10:52:22.116653       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0916 10:52:22.116677       1 main.go:322] Node ha-244475-m03 has CIDR [10.244.2.0/24] 
	I0916 10:52:22.116782       1 main.go:295] Handling node with IPs: map[192.168.39.110:{}]
	I0916 10:52:22.116803       1 main.go:322] Node ha-244475-m04 has CIDR [10.244.3.0/24] 
	I0916 10:52:32.121274       1 main.go:295] Handling node with IPs: map[192.168.39.19:{}]
	I0916 10:52:32.121333       1 main.go:299] handling current node
	I0916 10:52:32.121354       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0916 10:52:32.121362       1 main.go:322] Node ha-244475-m02 has CIDR [10.244.1.0/24] 
	I0916 10:52:32.121703       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0916 10:52:32.121737       1 main.go:322] Node ha-244475-m03 has CIDR [10.244.2.0/24] 
	I0916 10:52:32.121815       1 main.go:295] Handling node with IPs: map[192.168.39.110:{}]
	I0916 10:52:32.121824       1 main.go:322] Node ha-244475-m04 has CIDR [10.244.3.0/24] 
	I0916 10:52:42.116655       1 main.go:295] Handling node with IPs: map[192.168.39.110:{}]
	I0916 10:52:42.116744       1 main.go:322] Node ha-244475-m04 has CIDR [10.244.3.0/24] 
	I0916 10:52:42.116965       1 main.go:295] Handling node with IPs: map[192.168.39.19:{}]
	I0916 10:52:42.117022       1 main.go:299] handling current node
	I0916 10:52:42.117039       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0916 10:52:42.117047       1 main.go:322] Node ha-244475-m02 has CIDR [10.244.1.0/24] 
	I0916 10:52:42.117151       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0916 10:52:42.117179       1 main.go:322] Node ha-244475-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [ac63170bf5bb3e138ee2902ee9d3245bf86aad0f7cd971de1989c1c8d4b08913] <==
	I0916 10:47:29.301243       1 main.go:322] Node ha-244475-m02 has CIDR [10.244.1.0/24] 
	I0916 10:47:39.301433       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0916 10:47:39.301612       1 main.go:322] Node ha-244475-m03 has CIDR [10.244.2.0/24] 
	I0916 10:47:39.301782       1 main.go:295] Handling node with IPs: map[192.168.39.110:{}]
	I0916 10:47:39.301808       1 main.go:322] Node ha-244475-m04 has CIDR [10.244.3.0/24] 
	I0916 10:47:39.301866       1 main.go:295] Handling node with IPs: map[192.168.39.19:{}]
	I0916 10:47:39.301885       1 main.go:299] handling current node
	I0916 10:47:39.301906       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0916 10:47:39.301922       1 main.go:322] Node ha-244475-m02 has CIDR [10.244.1.0/24] 
	I0916 10:47:49.306310       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0916 10:47:49.306426       1 main.go:322] Node ha-244475-m02 has CIDR [10.244.1.0/24] 
	I0916 10:47:49.306666       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0916 10:47:49.306700       1 main.go:322] Node ha-244475-m03 has CIDR [10.244.2.0/24] 
	I0916 10:47:49.306797       1 main.go:295] Handling node with IPs: map[192.168.39.110:{}]
	I0916 10:47:49.306818       1 main.go:322] Node ha-244475-m04 has CIDR [10.244.3.0/24] 
	I0916 10:47:49.306872       1 main.go:295] Handling node with IPs: map[192.168.39.19:{}]
	I0916 10:47:49.306891       1 main.go:299] handling current node
	I0916 10:47:59.300973       1 main.go:295] Handling node with IPs: map[192.168.39.19:{}]
	I0916 10:47:59.301025       1 main.go:299] handling current node
	I0916 10:47:59.301052       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0916 10:47:59.301057       1 main.go:322] Node ha-244475-m02 has CIDR [10.244.1.0/24] 
	I0916 10:47:59.301226       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0916 10:47:59.301291       1 main.go:322] Node ha-244475-m03 has CIDR [10.244.2.0/24] 
	I0916 10:47:59.301343       1 main.go:295] Handling node with IPs: map[192.168.39.110:{}]
	I0916 10:47:59.301365       1 main.go:322] Node ha-244475-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [39bee169a2affe6d509924bc94b0ee90196af6c589d0fe7eb3f18c48559c446d] <==
	I0916 10:50:32.786625       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0916 10:50:32.786726       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0916 10:50:32.870064       1 shared_informer.go:320] Caches are synced for configmaps
	I0916 10:50:32.874061       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0916 10:50:32.874378       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0916 10:50:32.874471       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0916 10:50:32.878579       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0916 10:50:32.878785       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0916 10:50:32.880011       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0916 10:50:32.880175       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0916 10:50:32.880401       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0916 10:50:32.880639       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0916 10:50:32.881359       1 aggregator.go:171] initial CRD sync complete...
	I0916 10:50:32.881448       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 10:50:32.881854       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 10:50:32.883414       1 cache.go:39] Caches are synced for autoregister controller
	I0916 10:50:32.885333       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0916 10:50:32.885366       1 policy_source.go:224] refreshing policies
	W0916 10:50:32.891110       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.222]
	I0916 10:50:32.892579       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 10:50:32.900075       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0916 10:50:32.909150       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0916 10:50:32.968716       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 10:50:33.778275       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0916 10:50:34.130805       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.19 192.168.39.222]
	
	
	==> kube-apiserver [c692c6a18e99d2673e412ffd521c76f2a0da04c77d38fec5ed2cb8cb05c3e3f6] <==
	I0916 10:49:51.303646       1 options.go:228] external host was not specified, using 192.168.39.19
	I0916 10:49:51.307873       1 server.go:142] Version: v1.31.1
	I0916 10:49:51.309597       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:49:51.809274       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0916 10:49:51.821629       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0916 10:49:51.821673       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0916 10:49:51.821966       1 instance.go:232] Using reconciler: lease
	I0916 10:49:51.822581       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	W0916 10:50:11.808376       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0916 10:50:11.808683       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0916 10:50:11.823458       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0916 10:50:11.823720       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [392523616ed483d70489493ff810c7c00e2c3679d91ed33cace5c42354ac85af] <==
	I0916 10:51:11.436154       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="110.582376ms"
	I0916 10:51:11.436402       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="112.888µs"
	I0916 10:51:12.328704       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m04"
	I0916 10:51:12.329197       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m03"
	I0916 10:51:12.362302       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m03"
	I0916 10:51:12.364485       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m04"
	I0916 10:51:12.400805       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="17.321932ms"
	I0916 10:51:12.401146       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="155.042µs"
	I0916 10:51:16.986325       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m03"
	I0916 10:51:17.578067       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m03"
	I0916 10:51:17.665906       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m02"
	I0916 10:51:27.064698       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m04"
	I0916 10:51:46.077327       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m03"
	I0916 10:51:46.113442       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m03"
	I0916 10:51:46.967874       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m03"
	I0916 10:51:47.097040       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="56.433µs"
	I0916 10:52:03.328305       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="13.896482ms"
	I0916 10:52:03.328573       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="115.724µs"
	I0916 10:52:05.280761       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m04"
	I0916 10:52:05.392448       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m04"
	I0916 10:52:17.083217       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m03"
	I0916 10:52:36.563117       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-244475-m04"
	I0916 10:52:36.563752       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m04"
	I0916 10:52:36.585332       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m04"
	I0916 10:52:37.001827       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m04"
	
	
	==> kube-controller-manager [c7904b48af0d517bf275eac462d37e6f93a48771be3452312f485bacfe1b3d19] <==
	I0916 10:50:23.787937       1 serving.go:386] Generated self-signed cert in-memory
	I0916 10:50:24.055825       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0916 10:50:24.055866       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:50:24.057300       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0916 10:50:24.057381       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0916 10:50:24.057621       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0916 10:50:24.057818       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0916 10:50:34.061987       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-proxy [2ef7bc6ba17087872de21708e4087c86e92125885155c397d03bed2863bb52ed] <==
	E0916 10:50:33.213866       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-244475\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0916 10:50:33.214328       1 server.go:646] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	E0916 10:50:33.214618       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:50:33.254822       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0916 10:50:33.254898       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0916 10:50:33.254936       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:50:33.257890       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:50:33.258306       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:50:33.258342       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:50:33.259973       1 config.go:199] "Starting service config controller"
	I0916 10:50:33.260036       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:50:33.260076       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:50:33.260102       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:50:33.260908       1 config.go:328] "Starting node config controller"
	I0916 10:50:33.260937       1 shared_informer.go:313] Waiting for caches to sync for node config
	E0916 10:50:36.287039       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0916 10:50:36.287174       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 10:50:36.287297       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 10:50:36.286395       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-244475&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 10:50:36.287954       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-244475&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 10:50:36.287411       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 10:50:36.288233       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	I0916 10:50:37.161128       1 shared_informer.go:320] Caches are synced for node config
	I0916 10:50:37.161227       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:50:37.560852       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [6e6d69b26d5c1088c047248a212bd53f5e13828d4e0a7c2156e0ddc88ac76ccf] <==
	E0916 10:46:51.645690       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-244475&resourceVersion=1888\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 10:46:51.645901       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1889": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 10:46:51.645976       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1889\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 10:46:51.646054       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1835": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 10:46:51.646084       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1835\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 10:46:58.174791       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-244475&resourceVersion=1888": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 10:46:58.175000       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-244475&resourceVersion=1888\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 10:46:58.175108       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1835": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 10:46:58.175150       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1835\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 10:46:58.174878       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1889": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 10:46:58.175188       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1889\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 10:47:07.389661       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1889": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 10:47:07.390930       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1889\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 10:47:07.391344       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-244475&resourceVersion=1888": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 10:47:07.391805       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-244475&resourceVersion=1888\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 10:47:10.463790       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1835": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 10:47:10.464149       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1835\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 10:47:25.822162       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1889": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 10:47:25.822276       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1889\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 10:47:31.966129       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-244475&resourceVersion=1888": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 10:47:31.966261       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-244475&resourceVersion=1888\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 10:47:38.109734       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1835": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 10:47:38.109808       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1835\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 10:47:59.614733       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-244475&resourceVersion=1888": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 10:47:59.615035       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-244475&resourceVersion=1888\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-scheduler [6c0110ceab6a6b07ad42ced3733baf8bab2d1cafffcbb49d7f182984734ea704] <==
	W0916 10:50:23.226443       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.19:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.19:8443: connect: connection refused
	E0916 10:50:23.226750       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.19:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.19:8443: connect: connection refused" logger="UnhandledError"
	W0916 10:50:23.304343       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.19:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.19:8443: connect: connection refused
	E0916 10:50:23.313719       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.39.19:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.19:8443: connect: connection refused" logger="UnhandledError"
	W0916 10:50:27.106134       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.19:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.19:8443: connect: connection refused
	E0916 10:50:27.106286       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.19:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.19:8443: connect: connection refused" logger="UnhandledError"
	W0916 10:50:27.917399       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.19:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.19:8443: connect: connection refused
	E0916 10:50:27.917565       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.39.19:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.19:8443: connect: connection refused" logger="UnhandledError"
	W0916 10:50:29.353853       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.19:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.19:8443: connect: connection refused
	E0916 10:50:29.353900       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.19:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.19:8443: connect: connection refused" logger="UnhandledError"
	W0916 10:50:29.362689       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.19:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.19:8443: connect: connection refused
	E0916 10:50:29.362727       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.19:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.19:8443: connect: connection refused" logger="UnhandledError"
	W0916 10:50:29.539820       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.19:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.19:8443: connect: connection refused
	E0916 10:50:29.539945       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.39.19:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.19:8443: connect: connection refused" logger="UnhandledError"
	W0916 10:50:30.172233       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.19:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.19:8443: connect: connection refused
	E0916 10:50:30.172367       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.19:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.19:8443: connect: connection refused" logger="UnhandledError"
	W0916 10:50:30.247772       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.19:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.19:8443: connect: connection refused
	E0916 10:50:30.247816       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.19:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.19:8443: connect: connection refused" logger="UnhandledError"
	W0916 10:50:32.800369       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 10:50:32.801683       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:50:32.801573       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 10:50:32.801914       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:50:32.801624       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 10:50:32.802040       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0916 10:50:43.636980       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [a0223669288e248ff9be2473e59f09af3156f136f2627463151e0bc52191ddfb] <==
	E0916 10:38:50.992011       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:38:51.039856       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 10:38:51.039907       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:38:51.293677       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 10:38:51.293783       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0916 10:38:53.269920       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 10:41:27.446213       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="8e6b78c3-ae2c-4cff-b2cf-fd0f08d53fa5" pod="default/busybox-7dff88458-7bhqg" assumedNode="ha-244475-m03" currentNode="ha-244475-m02"
	E0916 10:41:27.456948       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-7bhqg\": pod busybox-7dff88458-7bhqg is already assigned to node \"ha-244475-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-7bhqg" node="ha-244475-m02"
	E0916 10:41:27.457071       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 8e6b78c3-ae2c-4cff-b2cf-fd0f08d53fa5(default/busybox-7dff88458-7bhqg) was assumed on ha-244475-m02 but assigned to ha-244475-m03" pod="default/busybox-7dff88458-7bhqg"
	E0916 10:41:27.457108       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-7bhqg\": pod busybox-7dff88458-7bhqg is already assigned to node \"ha-244475-m03\"" pod="default/busybox-7dff88458-7bhqg"
	I0916 10:41:27.457173       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-7bhqg" node="ha-244475-m03"
	E0916 10:47:54.234292       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0916 10:47:55.101205       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0916 10:47:55.243248       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0916 10:47:56.250917       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0916 10:47:56.495628       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0916 10:47:57.140623       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0916 10:47:57.973671       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0916 10:47:58.028997       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0916 10:48:01.831431       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0916 10:48:02.285792       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0916 10:48:02.396636       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E0916 10:48:02.676356       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0916 10:48:02.796464       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0916 10:48:05.532040       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 16 10:51:22 ha-244475 kubelet[1309]: E0916 10:51:22.784454    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483882783813890,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:51:26 ha-244475 kubelet[1309]: I0916 10:51:26.581478    1309 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-vip-ha-244475" podUID="94b4d383-a0e8-4686-b108-923c0235f371"
	Sep 16 10:51:26 ha-244475 kubelet[1309]: I0916 10:51:26.606575    1309 kubelet.go:1900] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-244475"
	Sep 16 10:51:27 ha-244475 kubelet[1309]: I0916 10:51:27.416702    1309 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-vip-ha-244475" podUID="94b4d383-a0e8-4686-b108-923c0235f371"
	Sep 16 10:51:32 ha-244475 kubelet[1309]: E0916 10:51:32.786068    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483892785787724,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:51:32 ha-244475 kubelet[1309]: E0916 10:51:32.786157    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483892785787724,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:51:42 ha-244475 kubelet[1309]: E0916 10:51:42.790791    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483902787962251,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:51:42 ha-244475 kubelet[1309]: E0916 10:51:42.790828    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483902787962251,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:51:52 ha-244475 kubelet[1309]: E0916 10:51:52.622857    1309 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 16 10:51:52 ha-244475 kubelet[1309]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 16 10:51:52 ha-244475 kubelet[1309]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 16 10:51:52 ha-244475 kubelet[1309]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 16 10:51:52 ha-244475 kubelet[1309]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 16 10:51:52 ha-244475 kubelet[1309]: E0916 10:51:52.794964    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483912792713573,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:51:52 ha-244475 kubelet[1309]: E0916 10:51:52.795001    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483912792713573,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:52:02 ha-244475 kubelet[1309]: E0916 10:52:02.798106    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483922797600676,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:52:02 ha-244475 kubelet[1309]: E0916 10:52:02.798162    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483922797600676,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:52:12 ha-244475 kubelet[1309]: E0916 10:52:12.801213    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483932800328990,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:52:12 ha-244475 kubelet[1309]: E0916 10:52:12.801259    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483932800328990,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:52:22 ha-244475 kubelet[1309]: E0916 10:52:22.803704    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483942803252630,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:52:22 ha-244475 kubelet[1309]: E0916 10:52:22.804115    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483942803252630,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:52:32 ha-244475 kubelet[1309]: E0916 10:52:32.807821    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483952807181003,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:52:32 ha-244475 kubelet[1309]: E0916 10:52:32.807869    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483952807181003,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:52:42 ha-244475 kubelet[1309]: E0916 10:52:42.814819    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483962809980508,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:52:42 ha-244475 kubelet[1309]: E0916 10:52:42.814886    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483962809980508,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0916 10:52:43.534034   29846 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19651-3851/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
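Note on the stderr line above: "bufio.Scanner: token too long" is the Go standard library's default per-token limit (bufio.MaxScanTokenSize, 64 KiB) being exceeded while reading lastStart.txt line by line. The following is a minimal, self-contained sketch of that behaviour using synthetic input; it is illustrative only and is not the harness's actual log reader.

    package main

    import (
    	"bufio"
    	"fmt"
    	"strings"
    )

    func main() {
    	// A single line longer than bufio.MaxScanTokenSize (64 KiB) makes Scan
    	// stop with bufio.ErrTooLong, the "token too long" error reported above.
    	longLine := strings.Repeat("x", 128*1024)

    	s := bufio.NewScanner(strings.NewReader(longLine))
    	for s.Scan() {
    	}
    	fmt.Println("default limit:", s.Err()) // bufio.Scanner: token too long

    	// Giving the scanner a larger buffer lets the same line scan cleanly.
    	s = bufio.NewScanner(strings.NewReader(longLine))
    	s.Buffer(make([]byte, 0, 256*1024), 256*1024)
    	for s.Scan() {
    	}
    	fmt.Println("raised limit:", s.Err()) // <nil>
    }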
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-244475 -n ha-244475
helpers_test.go:261: (dbg) Run:  kubectl --context ha-244475 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context ha-244475 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (437.554µs)
helpers_test.go:263: kubectl --context ha-244475 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (402.90s)
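The recurring failure mode in this group, "fork/exec /usr/local/bin/kubectl: exec format error", is what the kernel returns when the file at that path is not an executable the host can run, most commonly a binary built for a different CPU architecture than the amd64 agent. A minimal Go sketch of one way to check this follows; it assumes only the path quoted in the error message, and everything else in it is illustrative rather than part of the test harness.

    package main

    import (
    	"debug/elf"
    	"fmt"
    	"runtime"
    )

    func main() {
    	// Path taken from the failure message above; the check itself is generic.
    	const path = "/usr/local/bin/kubectl"

    	f, err := elf.Open(path)
    	if err != nil {
    		// A truncated download or a non-binary file at this path also
    		// produces "exec format error" when the kernel tries to run it.
    		fmt.Println("not a readable ELF binary:", err)
    		return
    	}
    	defer f.Close()

    	fmt.Printf("binary machine: %v, host GOARCH: %s\n", f.Machine, runtime.GOARCH)
    	if runtime.GOARCH == "amd64" && f.Machine != elf.EM_X86_64 {
    		fmt.Println("architecture mismatch: execve would fail with ENOEXEC (exec format error)")
    	}
    }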

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (19.03s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-244475 node delete m03 -v=7 --alsologtostderr: (16.05600347s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:511: (dbg) Non-zero exit: kubectl get nodes: fork/exec /usr/local/bin/kubectl: exec format error (499.831µs)
ha_test.go:513: failed to run kubectl get nodes. args "kubectl get nodes" : fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-244475 -n ha-244475
helpers_test.go:244: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-244475 logs -n 25: (1.765623371s)
helpers_test.go:252: TestMultiControlPlane/serial/DeleteSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-244475 ssh -n                                                                 | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-244475 ssh -n ha-244475-m02 sudo cat                                          | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | /home/docker/cp-test_ha-244475-m03_ha-244475-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-244475 cp ha-244475-m03:/home/docker/cp-test.txt                              | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475-m04:/home/docker/cp-test_ha-244475-m03_ha-244475-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-244475 ssh -n                                                                 | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-244475 ssh -n ha-244475-m04 sudo cat                                          | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | /home/docker/cp-test_ha-244475-m03_ha-244475-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-244475 cp testdata/cp-test.txt                                                | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-244475 ssh -n                                                                 | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-244475 cp ha-244475-m04:/home/docker/cp-test.txt                              | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1630339340/001/cp-test_ha-244475-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-244475 ssh -n                                                                 | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-244475 cp ha-244475-m04:/home/docker/cp-test.txt                              | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475:/home/docker/cp-test_ha-244475-m04_ha-244475.txt                       |           |         |         |                     |                     |
	| ssh     | ha-244475 ssh -n                                                                 | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-244475 ssh -n ha-244475 sudo cat                                              | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | /home/docker/cp-test_ha-244475-m04_ha-244475.txt                                 |           |         |         |                     |                     |
	| cp      | ha-244475 cp ha-244475-m04:/home/docker/cp-test.txt                              | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475-m02:/home/docker/cp-test_ha-244475-m04_ha-244475-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-244475 ssh -n                                                                 | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-244475 ssh -n ha-244475-m02 sudo cat                                          | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | /home/docker/cp-test_ha-244475-m04_ha-244475-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-244475 cp ha-244475-m04:/home/docker/cp-test.txt                              | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475-m03:/home/docker/cp-test_ha-244475-m04_ha-244475-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-244475 ssh -n                                                                 | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-244475 ssh -n ha-244475-m03 sudo cat                                          | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | /home/docker/cp-test_ha-244475-m04_ha-244475-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-244475 node stop m02 -v=7                                                     | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-244475 node start m02 -v=7                                                    | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:45 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-244475 -v=7                                                           | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-244475 -v=7                                                                | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-244475 --wait=true -v=7                                                    | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:48 UTC | 16 Sep 24 10:52 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-244475                                                                | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:52 UTC |                     |
	| node    | ha-244475 node delete m03 -v=7                                                   | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:52 UTC | 16 Sep 24 10:53 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:48:04
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:48:04.629611   28382 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:48:04.629751   28382 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:48:04.629762   28382 out.go:358] Setting ErrFile to fd 2...
	I0916 10:48:04.629769   28382 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:48:04.629972   28382 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3851/.minikube/bin
	I0916 10:48:04.630523   28382 out.go:352] Setting JSON to false
	I0916 10:48:04.631433   28382 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1835,"bootTime":1726481850,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:48:04.631527   28382 start.go:139] virtualization: kvm guest
	I0916 10:48:04.633814   28382 out.go:177] * [ha-244475] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:48:04.635027   28382 notify.go:220] Checking for updates...
	I0916 10:48:04.635032   28382 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:48:04.636319   28382 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:48:04.637618   28382 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3851/kubeconfig
	I0916 10:48:04.638937   28382 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 10:48:04.640222   28382 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:48:04.641463   28382 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:48:04.643097   28382 config.go:182] Loaded profile config "ha-244475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:48:04.643194   28382 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:48:04.643664   28382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:48:04.643720   28382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:48:04.660057   28382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33021
	I0916 10:48:04.660593   28382 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:48:04.661160   28382 main.go:141] libmachine: Using API Version  1
	I0916 10:48:04.661198   28382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:48:04.661616   28382 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:48:04.661813   28382 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:48:04.697772   28382 out.go:177] * Using the kvm2 driver based on existing profile
	I0916 10:48:04.699530   28382 start.go:297] selected driver: kvm2
	I0916 10:48:04.699547   28382 start.go:901] validating driver "kvm2" against &{Name:ha-244475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.1 ClusterName:ha-244475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.110 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false ef
k:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9
p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:48:04.699689   28382 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:48:04.700019   28382 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:48:04.700102   28382 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19651-3851/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0916 10:48:04.715527   28382 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0916 10:48:04.716227   28382 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:48:04.716263   28382 cni.go:84] Creating CNI manager for ""
	I0916 10:48:04.716312   28382 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0916 10:48:04.716367   28382 start.go:340] cluster config:
	{Name:ha-244475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-244475 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.110 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-til
ler:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:48:04.716493   28382 iso.go:125] acquiring lock: {Name:mk8165b793e44378487d96b0de120a258f46e187 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:48:04.718937   28382 out.go:177] * Starting "ha-244475" primary control-plane node in "ha-244475" cluster
	I0916 10:48:04.720335   28382 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:48:04.720368   28382 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0916 10:48:04.720379   28382 cache.go:56] Caching tarball of preloaded images
	I0916 10:48:04.720467   28382 preload.go:172] Found /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 10:48:04.720479   28382 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 10:48:04.720587   28382 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/config.json ...
	I0916 10:48:04.720801   28382 start.go:360] acquireMachinesLock for ha-244475: {Name:mk413037138532b69f77e412c3960796c09316f1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:48:04.720863   28382 start.go:364] duration metric: took 41.906µs to acquireMachinesLock for "ha-244475"
	I0916 10:48:04.720882   28382 start.go:96] Skipping create...Using existing machine configuration
	I0916 10:48:04.720887   28382 fix.go:54] fixHost starting: 
	I0916 10:48:04.721282   28382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:48:04.721314   28382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:48:04.735751   28382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44525
	I0916 10:48:04.736248   28382 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:48:04.736739   28382 main.go:141] libmachine: Using API Version  1
	I0916 10:48:04.736771   28382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:48:04.737094   28382 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:48:04.737279   28382 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:48:04.737431   28382 main.go:141] libmachine: (ha-244475) Calling .GetState
	I0916 10:48:04.738886   28382 fix.go:112] recreateIfNeeded on ha-244475: state=Running err=<nil>
	W0916 10:48:04.738909   28382 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 10:48:04.740961   28382 out.go:177] * Updating the running kvm2 "ha-244475" VM ...
	I0916 10:48:04.742320   28382 machine.go:93] provisionDockerMachine start ...
	I0916 10:48:04.742348   28382 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:48:04.742548   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:48:04.744733   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:48:04.745067   28382 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:48:04.745093   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:48:04.745218   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:48:04.745382   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:48:04.745523   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:48:04.745653   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:48:04.745797   28382 main.go:141] libmachine: Using SSH client type: native
	I0916 10:48:04.745999   28382 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0916 10:48:04.746012   28382 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 10:48:04.866195   28382 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-244475
	
	I0916 10:48:04.866247   28382 main.go:141] libmachine: (ha-244475) Calling .GetMachineName
	I0916 10:48:04.866489   28382 buildroot.go:166] provisioning hostname "ha-244475"
	I0916 10:48:04.866520   28382 main.go:141] libmachine: (ha-244475) Calling .GetMachineName
	I0916 10:48:04.866739   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:48:04.869344   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:48:04.869776   28382 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:48:04.869798   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:48:04.869969   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:48:04.870127   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:48:04.870289   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:48:04.870419   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:48:04.870579   28382 main.go:141] libmachine: Using SSH client type: native
	I0916 10:48:04.870744   28382 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0916 10:48:04.870756   28382 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-244475 && echo "ha-244475" | sudo tee /etc/hostname
	I0916 10:48:05.005091   28382 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-244475
	
	I0916 10:48:05.005118   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:48:05.007741   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:48:05.008168   28382 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:48:05.008192   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:48:05.008399   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:48:05.008580   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:48:05.008720   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:48:05.008818   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:48:05.008958   28382 main.go:141] libmachine: Using SSH client type: native
	I0916 10:48:05.009165   28382 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0916 10:48:05.009182   28382 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-244475' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-244475/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-244475' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:48:05.126206   28382 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 10:48:05.126232   28382 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3851/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3851/.minikube}
	I0916 10:48:05.126289   28382 buildroot.go:174] setting up certificates
	I0916 10:48:05.126297   28382 provision.go:84] configureAuth start
	I0916 10:48:05.126306   28382 main.go:141] libmachine: (ha-244475) Calling .GetMachineName
	I0916 10:48:05.126557   28382 main.go:141] libmachine: (ha-244475) Calling .GetIP
	I0916 10:48:05.128973   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:48:05.129406   28382 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:48:05.129434   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:48:05.129547   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:48:05.131762   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:48:05.132175   28382 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:48:05.132198   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:48:05.132394   28382 provision.go:143] copyHostCerts
	I0916 10:48:05.132459   28382 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem
	I0916 10:48:05.132520   28382 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem, removing ...
	I0916 10:48:05.132531   28382 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem
	I0916 10:48:05.132608   28382 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem (1082 bytes)
	I0916 10:48:05.132692   28382 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem
	I0916 10:48:05.132709   28382 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem, removing ...
	I0916 10:48:05.132716   28382 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem
	I0916 10:48:05.132739   28382 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem (1123 bytes)
	I0916 10:48:05.132778   28382 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem
	I0916 10:48:05.132795   28382 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem, removing ...
	I0916 10:48:05.132803   28382 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem
	I0916 10:48:05.132824   28382 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem (1679 bytes)
	I0916 10:48:05.132867   28382 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem org=jenkins.ha-244475 san=[127.0.0.1 192.168.39.19 ha-244475 localhost minikube]
	I0916 10:48:05.230030   28382 provision.go:177] copyRemoteCerts
	I0916 10:48:05.230090   28382 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:48:05.230124   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:48:05.232727   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:48:05.232996   28382 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:48:05.233021   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:48:05.233228   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:48:05.233411   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:48:05.233854   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:48:05.233994   28382 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/id_rsa Username:docker}
	I0916 10:48:05.321368   28382 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 10:48:05.321442   28382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 10:48:05.348483   28382 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 10:48:05.348579   28382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0916 10:48:05.376610   28382 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 10:48:05.376680   28382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 10:48:05.404845   28382 provision.go:87] duration metric: took 278.532484ms to configureAuth
	I0916 10:48:05.404874   28382 buildroot.go:189] setting minikube options for container-runtime
	I0916 10:48:05.405088   28382 config.go:182] Loaded profile config "ha-244475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:48:05.405170   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:48:05.407821   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:48:05.408170   28382 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:48:05.408200   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:48:05.408395   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:48:05.408568   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:48:05.408725   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:48:05.408860   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:48:05.409024   28382 main.go:141] libmachine: Using SSH client type: native
	I0916 10:48:05.409256   28382 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0916 10:48:05.409278   28382 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 10:49:36.136821   28382 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 10:49:36.136864   28382 machine.go:96] duration metric: took 1m31.394528146s to provisionDockerMachine
	I0916 10:49:36.136875   28382 start.go:293] postStartSetup for "ha-244475" (driver="kvm2")
	I0916 10:49:36.136885   28382 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:49:36.136901   28382 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:49:36.137195   28382 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:49:36.137226   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:49:36.140151   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:49:36.140600   28382 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:49:36.140633   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:49:36.140776   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:49:36.140974   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:49:36.141162   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:49:36.141297   28382 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/id_rsa Username:docker}
	I0916 10:49:36.229105   28382 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:49:36.233446   28382 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 10:49:36.233468   28382 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3851/.minikube/addons for local assets ...
	I0916 10:49:36.233521   28382 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3851/.minikube/files for local assets ...
	I0916 10:49:36.233595   28382 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem -> 112032.pem in /etc/ssl/certs
	I0916 10:49:36.233605   28382 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem -> /etc/ssl/certs/112032.pem
	I0916 10:49:36.233712   28382 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 10:49:36.243379   28382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem --> /etc/ssl/certs/112032.pem (1708 bytes)
	I0916 10:49:36.268390   28382 start.go:296] duration metric: took 131.49973ms for postStartSetup
	I0916 10:49:36.268431   28382 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:49:36.268704   28382 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0916 10:49:36.268740   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:49:36.271523   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:49:36.272009   28382 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:49:36.272032   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:49:36.272177   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:49:36.272383   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:49:36.272533   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:49:36.272679   28382 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/id_rsa Username:docker}
	W0916 10:49:36.359589   28382 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0916 10:49:36.359614   28382 fix.go:56] duration metric: took 1m31.638727744s for fixHost
	I0916 10:49:36.359635   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:49:36.362024   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:49:36.362345   28382 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:49:36.362379   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:49:36.362437   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:49:36.362603   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:49:36.362772   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:49:36.362934   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:49:36.363065   28382 main.go:141] libmachine: Using SSH client type: native
	I0916 10:49:36.363232   28382 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0916 10:49:36.363242   28382 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 10:49:36.478148   28382 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726483776.445441321
	
	I0916 10:49:36.478178   28382 fix.go:216] guest clock: 1726483776.445441321
	I0916 10:49:36.478185   28382 fix.go:229] Guest: 2024-09-16 10:49:36.445441321 +0000 UTC Remote: 2024-09-16 10:49:36.359621457 +0000 UTC m=+91.765044121 (delta=85.819864ms)
	I0916 10:49:36.478209   28382 fix.go:200] guest clock delta is within tolerance: 85.819864ms
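The two "guest clock" lines above compare the timestamp read inside the VM with the host-side timestamp taken at the same moment, and accept the roughly 85.8ms drift because it is under the resync tolerance. A minimal sketch of that comparison, using the timestamps from the log; the one-second tolerance and the helper name are illustrative assumptions, not minikube's actual fix.go values:

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the absolute drift between the guest
// and host clocks is small enough to skip any resynchronisation.
func withinTolerance(guest, host time.Time, tol time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tol
}

func main() {
	// Values taken from the log lines above.
	guest := time.Date(2024, 9, 16, 10, 49, 36, 445441321, time.UTC)
	host := time.Date(2024, 9, 16, 10, 49, 36, 359621457, time.UTC)

	delta, ok := withinTolerance(guest, host, time.Second) // 1s tolerance is an assumption
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
}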
	I0916 10:49:36.478215   28382 start.go:83] releasing machines lock for "ha-244475", held for 1m31.757340687s
	I0916 10:49:36.478246   28382 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:49:36.478464   28382 main.go:141] libmachine: (ha-244475) Calling .GetIP
	I0916 10:49:36.480946   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:49:36.481304   28382 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:49:36.481330   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:49:36.481512   28382 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:49:36.481984   28382 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:49:36.482250   28382 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:49:36.482367   28382 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:49:36.482411   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:49:36.482451   28382 ssh_runner.go:195] Run: cat /version.json
	I0916 10:49:36.482475   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:49:36.485017   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:49:36.485084   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:49:36.485349   28382 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:49:36.485372   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:49:36.485438   28382 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:49:36.485457   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:49:36.485482   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:49:36.485617   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:49:36.485706   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:49:36.485783   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:49:36.485830   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:49:36.485895   28382 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/id_rsa Username:docker}
	I0916 10:49:36.485941   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:49:36.486045   28382 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/id_rsa Username:docker}
	I0916 10:49:36.566130   28382 ssh_runner.go:195] Run: systemctl --version
	I0916 10:49:36.595210   28382 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 10:49:36.759288   28382 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0916 10:49:36.765378   28382 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 10:49:36.765456   28382 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:49:36.775556   28382 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0916 10:49:36.775578   28382 start.go:495] detecting cgroup driver to use...
	I0916 10:49:36.775647   28382 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 10:49:36.791549   28382 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 10:49:36.805408   28382 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:49:36.805456   28382 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:49:36.819777   28382 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:49:36.834041   28382 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:49:37.006927   28382 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:49:37.154158   28382 docker.go:233] disabling docker service ...
	I0916 10:49:37.154233   28382 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:49:37.172237   28382 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:49:37.187140   28382 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:49:37.335249   28382 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:49:37.485651   28382 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:49:37.500949   28382 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:49:37.520699   28382 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 10:49:37.520778   28382 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:49:37.532711   28382 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 10:49:37.532779   28382 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:49:37.545325   28382 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:49:37.557100   28382 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:49:37.568745   28382 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:49:37.580983   28382 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:49:37.592790   28382 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:49:37.604166   28382 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
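The sed invocations above rewrite pause_image, cgroup_manager, conmon_cgroup and the default_sysctls block inside /etc/crio/crio.conf.d/02-crio.conf before CRI-O is restarted. A rough Go equivalent of two of those in-place substitutions, shown only to make the transformation explicit; the config fragment below is an invented stand-in, not the node's real 02-crio.conf:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Hypothetical fragment of 02-crio.conf before the edits.
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"
[crio.runtime]
cgroup_manager = "systemd"
`
	// Same effect as: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	// Same effect as: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	fmt.Print(conf)
}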
	I0916 10:49:37.615655   28382 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:49:37.625740   28382 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:49:37.636174   28382 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:49:37.785177   28382 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 10:49:42.995342   28382 ssh_runner.go:235] Completed: sudo systemctl restart crio: (5.210133239s)
	I0916 10:49:42.995373   28382 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 10:49:42.995414   28382 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 10:49:43.001465   28382 start.go:563] Will wait 60s for crictl version
	I0916 10:49:43.001535   28382 ssh_runner.go:195] Run: which crictl
	I0916 10:49:43.005982   28382 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:49:43.050539   28382 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
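The "Will wait 60s for socket path" and "Will wait 60s for crictl version" steps above are plain poll-until-deadline loops run after the CRI-O restart. A self-contained sketch of such a wait; the stat-based probe and the 60-second budget mirror the log, while the helper name and the 500ms poll interval are assumptions:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists or the deadline passes.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("CRI socket is ready")
}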
	I0916 10:49:43.050628   28382 ssh_runner.go:195] Run: crio --version
	I0916 10:49:43.079811   28382 ssh_runner.go:195] Run: crio --version
	I0916 10:49:43.111377   28382 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0916 10:49:43.112594   28382 main.go:141] libmachine: (ha-244475) Calling .GetIP
	I0916 10:49:43.115110   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:49:43.115409   28382 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:49:43.115437   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:49:43.115643   28382 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0916 10:49:43.120664   28382 kubeadm.go:883] updating cluster {Name:ha-244475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-244475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.110 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 10:49:43.120799   28382 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:49:43.120843   28382 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:49:43.174107   28382 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 10:49:43.174132   28382 crio.go:433] Images already preloaded, skipping extraction
	I0916 10:49:43.174191   28382 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:49:43.209963   28382 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 10:49:43.209985   28382 cache_images.go:84] Images are preloaded, skipping loading
	I0916 10:49:43.209995   28382 kubeadm.go:934] updating node { 192.168.39.19 8443 v1.31.1 crio true true} ...
	I0916 10:49:43.210109   28382 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-244475 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.19
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-244475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
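The ExecStart line in the kubelet drop-in above is assembled from per-node values (hostname override, node IP, kubeconfig paths) joined into --key=value flags in sorted order. The sketch below rebuilds that flag string from a map; the map-plus-sort approach is an illustration, not minikube's actual template code:

package main

import (
	"fmt"
	"sort"
	"strings"
)

func main() {
	// Hypothetical flag set; the values mirror the ExecStart line shown above.
	flags := map[string]string{
		"bootstrap-kubeconfig": "/etc/kubernetes/bootstrap-kubelet.conf",
		"config":               "/var/lib/kubelet/config.yaml",
		"hostname-override":    "ha-244475",
		"kubeconfig":           "/etc/kubernetes/kubelet.conf",
		"node-ip":              "192.168.39.19",
	}
	keys := make([]string, 0, len(flags))
	for k := range flags {
		keys = append(keys, k)
	}
	sort.Strings(keys)

	parts := make([]string, 0, len(keys))
	for _, k := range keys {
		parts = append(parts, fmt.Sprintf("--%s=%s", k, flags[k]))
	}
	fmt.Println("ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet " + strings.Join(parts, " "))
}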
	I0916 10:49:43.210169   28382 ssh_runner.go:195] Run: crio config
	I0916 10:49:43.257466   28382 cni.go:84] Creating CNI manager for ""
	I0916 10:49:43.257492   28382 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0916 10:49:43.257503   28382 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 10:49:43.257526   28382 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.19 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-244475 NodeName:ha-244475 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.19"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.19 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 10:49:43.257697   28382 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.19
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-244475"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.19
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.19"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
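The generated kubeadm config above is a single file holding four API documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) separated by ---; it is copied to /var/tmp/minikube/kubeadm.yaml.new a little further down. A tiny sketch that splits such a combined file and reports each document's kind; the string literal is a trimmed stand-in for the full config:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Trimmed stand-in for the multi-document kubeadm config assembled above;
	// only the kind: lines matter for this illustration.
	combined := `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
`
	for i, doc := range strings.Split(combined, "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind: ") {
				fmt.Printf("document %d: %s\n", i+1, strings.TrimPrefix(line, "kind: "))
			}
		}
	}
}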
	
	I0916 10:49:43.257719   28382 kube-vip.go:115] generating kube-vip config ...
	I0916 10:49:43.257765   28382 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0916 10:49:43.269960   28382 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0916 10:49:43.270094   28382 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
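The kube-vip static-pod manifest above is rendered from a template with the per-cluster values (VIP 192.168.39.254, API server port 8443, kube-vip image) substituted in. A much-reduced text/template sketch of that kind of rendering; the template body is an invented excerpt, not minikube's actual kube-vip.go template:

package main

import (
	"os"
	"text/template"
)

// vipParams holds the per-cluster values substituted into the manifest.
type vipParams struct {
	Address string
	Port    int
	Image   string
}

// A tiny illustrative excerpt of a kube-vip manifest template.
const manifestTmpl = `    - name: port
      value: "{{.Port}}"
    - name: address
      value: {{.Address}}
    image: {{.Image}}
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(manifestTmpl))
	// Values taken from the generated config shown above.
	err := t.Execute(os.Stdout, vipParams{
		Address: "192.168.39.254",
		Port:    8443,
		Image:   "ghcr.io/kube-vip/kube-vip:v0.8.0",
	})
	if err != nil {
		panic(err)
	}
}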
	I0916 10:49:43.270162   28382 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:49:43.280474   28382 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:49:43.280563   28382 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0916 10:49:43.290395   28382 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0916 10:49:43.307234   28382 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:49:43.324085   28382 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0916 10:49:43.340586   28382 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0916 10:49:43.357729   28382 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0916 10:49:43.363278   28382 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:49:43.510012   28382 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:49:43.525689   28382 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475 for IP: 192.168.39.19
	I0916 10:49:43.525721   28382 certs.go:194] generating shared ca certs ...
	I0916 10:49:43.525742   28382 certs.go:226] acquiring lock for ca certs: {Name:mkc5eb18b90c7d501da61b378999fd65b785fd5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:49:43.525902   28382 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key
	I0916 10:49:43.525940   28382 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key
	I0916 10:49:43.525952   28382 certs.go:256] generating profile certs ...
	I0916 10:49:43.526054   28382 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/client.key
	I0916 10:49:43.526107   28382 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key.3a628471
	I0916 10:49:43.526130   28382 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt.3a628471 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.19 192.168.39.222 192.168.39.127 192.168.39.254]
	I0916 10:49:43.615058   28382 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt.3a628471 ...
	I0916 10:49:43.615087   28382 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt.3a628471: {Name:mkdc1b4f93c1d0cf9ed7c134427449b54c119ec7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:49:43.615252   28382 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key.3a628471 ...
	I0916 10:49:43.615262   28382 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key.3a628471: {Name:mk44f6b8e3053318a7781a0ded64dfd0c38e8870 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:49:43.615328   28382 certs.go:381] copying /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt.3a628471 -> /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt
	I0916 10:49:43.615496   28382 certs.go:385] copying /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key.3a628471 -> /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key
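The "generating signed profile cert ... with IP's: [...]" step means the apiserver certificate is reissued with every address it may be reached on (the cluster service IP, localhost, each control-plane node IP and the HA VIP) listed as IP SANs. The sketch below builds such a certificate with the standard library; it self-signs purely to stay short, whereas the real apiserver.crt is signed by the cluster CA:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// IP SANs taken from the log line above.
	ips := []net.IP{
		net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		net.ParseIP("192.168.39.19"), net.ParseIP("192.168.39.222"),
		net.ParseIP("192.168.39.127"), net.ParseIP("192.168.39.254"),
	}

	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}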
	I0916 10:49:43.615629   28382 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.key
	I0916 10:49:43.615643   28382 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 10:49:43.615655   28382 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 10:49:43.615668   28382 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:49:43.615681   28382 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:49:43.615693   28382 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 10:49:43.615707   28382 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 10:49:43.615722   28382 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 10:49:43.615734   28382 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 10:49:43.615788   28382 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem (1338 bytes)
	W0916 10:49:43.615821   28382 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203_empty.pem, impossibly tiny 0 bytes
	I0916 10:49:43.615830   28382 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:49:43.615855   28382 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem (1082 bytes)
	I0916 10:49:43.615876   28382 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:49:43.615897   28382 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem (1679 bytes)
	I0916 10:49:43.615932   28382 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem (1708 bytes)
	I0916 10:49:43.615961   28382 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:49:43.615976   28382 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem -> /usr/share/ca-certificates/11203.pem
	I0916 10:49:43.615988   28382 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem -> /usr/share/ca-certificates/112032.pem
	I0916 10:49:43.616545   28382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:49:43.642550   28382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:49:43.666588   28382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:49:43.690999   28382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:49:43.715060   28382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0916 10:49:43.738836   28382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 10:49:43.762339   28382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:49:43.785649   28382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 10:49:43.809948   28382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:49:43.833383   28382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem --> /usr/share/ca-certificates/11203.pem (1338 bytes)
	I0916 10:49:43.856725   28382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem --> /usr/share/ca-certificates/112032.pem (1708 bytes)
	I0916 10:49:43.879989   28382 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 10:49:43.897035   28382 ssh_runner.go:195] Run: openssl version
	I0916 10:49:43.902840   28382 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:49:43.914400   28382 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:49:43.919013   28382 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:49:43.919075   28382 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:49:43.925137   28382 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 10:49:43.935417   28382 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11203.pem && ln -fs /usr/share/ca-certificates/11203.pem /etc/ssl/certs/11203.pem"
	I0916 10:49:43.946645   28382 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11203.pem
	I0916 10:49:43.951098   28382 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11203.pem
	I0916 10:49:43.951143   28382 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11203.pem
	I0916 10:49:43.956794   28382 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11203.pem /etc/ssl/certs/51391683.0"
	I0916 10:49:43.966620   28382 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112032.pem && ln -fs /usr/share/ca-certificates/112032.pem /etc/ssl/certs/112032.pem"
	I0916 10:49:43.977946   28382 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112032.pem
	I0916 10:49:43.982493   28382 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112032.pem
	I0916 10:49:43.982550   28382 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112032.pem
	I0916 10:49:43.988245   28382 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112032.pem /etc/ssl/certs/3ec20f2e.0"
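Each openssl x509 -hash / ln -fs pair above installs a certificate under /etc/ssl/certs/<subject-hash>.0, the layout OpenSSL-based clients use to look up trusted CAs (b5213941.0 for minikubeCA.pem, 51391683.0 and 3ec20f2e.0 for the two user certs). A small sketch performing the same two steps; it assumes openssl is on PATH and the target directory is writable:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash symlinks certPath into dir under its OpenSSL subject hash,
// mirroring: ln -fs <cert> <dir>/$(openssl x509 -hash -noout -in <cert>).0
func linkByHash(certPath, dir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	link := filepath.Join(dir, strings.TrimSpace(string(out))+".0")
	os.Remove(link) // emulate -f: replace any existing link
	return link, os.Symlink(certPath, link)
}

func main() {
	link, err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("created", link)
}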
	I0916 10:49:43.998642   28382 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:49:44.002978   28382 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0916 10:49:44.008612   28382 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0916 10:49:44.014304   28382 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0916 10:49:44.019867   28382 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0916 10:49:44.025979   28382 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0916 10:49:44.032073   28382 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
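The openssl x509 -checkend 86400 runs above ask whether each cluster certificate will still be valid 24 hours from now (openssl exits non-zero if the certificate will expire within that window). Roughly the same check in pure Go, using one of the paths from the log; it has to run on the node where the certificates live:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file
// expires within d, i.e. approximately what `openssl x509 -checkend` tests.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}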
	I0916 10:49:44.037852   28382 kubeadm.go:392] StartCluster: {Name:ha-244475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-244475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.110 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:49:44.037973   28382 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 10:49:44.038017   28382 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 10:49:44.076228   28382 cri.go:89] found id: "acb3a9815a7d7d96bd398b1d8222524d573639530c35a82d60c88262c7f2a589"
	I0916 10:49:44.076248   28382 cri.go:89] found id: "539537ea4f2684d0513678c23e52eda87a874c01787a81c1ca77e0451fdb5b36"
	I0916 10:49:44.076252   28382 cri.go:89] found id: "996c12a7b1565febe9557aad65d9754e33c44d4a64678026aef5b63f3d99f1e0"
	I0916 10:49:44.076255   28382 cri.go:89] found id: "034030626ec0248410f35cc75f1a050ff672de87669017788a692eecfb974eb3"
	I0916 10:49:44.076257   28382 cri.go:89] found id: "7f78c5e4a3a25c53eb3051665aed0aed847ccf5bc052d51eb080f44bbb956465"
	I0916 10:49:44.076260   28382 cri.go:89] found id: "b16f64da09faea0d2ff3154541845e96ee1e6da2b018e77e618dcf0a3d246a99"
	I0916 10:49:44.076263   28382 cri.go:89] found id: "ac63170bf5bb3e138ee2902ee9d3245bf86aad0f7cd971de1989c1c8d4b08913"
	I0916 10:49:44.076265   28382 cri.go:89] found id: "6e6d69b26d5c1088c047248a212bd53f5e13828d4e0a7c2156e0ddc88ac76ccf"
	I0916 10:49:44.076267   28382 cri.go:89] found id: "62c031e0ed0a9dd545dae65ca505f6ee4aa741ac7ab305ffd396ddb0a1faa045"
	I0916 10:49:44.076272   28382 cri.go:89] found id: "a0223669288e248ff9be2473e59f09af3156f136f2627463151e0bc52191ddfb"
	I0916 10:49:44.076275   28382 cri.go:89] found id: "13162d4bf94f734f5e68305ac96200c2f81bebc9b5ffbbfb3b0862980bc16fa1"
	I0916 10:49:44.076289   28382 cri.go:89] found id: "308650af833f609d658eacf1b2b1c8baa29f17b904e878b9f27e62fe4fb823d3"
	I0916 10:49:44.076292   28382 cri.go:89] found id: "f16e87fb57b2b25fed520499a45d5fd8f1e01c96a747662b637d0b1cc2e56113"
	I0916 10:49:44.076295   28382 cri.go:89] found id: ""
	I0916 10:49:44.076334   28382 ssh_runner.go:195] Run: sudo runc list -f json
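The "found id:" lines are simply the --quiet output of the crictl invocation at the top of this block, one container ID per line, split up and logged. A minimal parse of that shape of output, with a stubbed-in sample (the first two IDs from the log) in place of actually running crictl:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Stand-in for the output of:
	//   sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	raw := "acb3a9815a7d7d96bd398b1d8222524d573639530c35a82d60c88262c7f2a589\n" +
		"539537ea4f2684d0513678c23e52eda87a874c01787a81c1ca77e0451fdb5b36\n"

	var ids []string
	for _, line := range strings.Split(raw, "\n") {
		if line = strings.TrimSpace(line); line != "" {
			ids = append(ids, line)
		}
	}
	for _, id := range ids {
		fmt.Println("found id:", id)
	}
}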
	
	
	==> CRI-O <==
	Sep 16 10:53:02 ha-244475 crio[3700]: time="2024-09-16 10:53:02.931277103Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:1eaacb088bf941e279ad42af3f72b19bf2790643fcde4457d0b5ff745a4806e5,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-d4m5s,Uid:6c479ead-4e77-41ca-9e2e-5cd7dc781761,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726483823696861202,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-d4m5s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c479ead-4e77-41ca-9e2e-5cd7dc781761,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-16T10:41:27.480703141Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6203d6a2f83f4a08dbecdceab914bea8099f1c8fa17a9e052e272e18cde86fb1,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-244475,Uid:7d14d8f4abb76f867ab3a64246ef25cb,Namespace:kube-system,Attempt:0,},State:SANDBOX_RE
ADY,CreatedAt:1726483805575234285,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d14d8f4abb76f867ab3a64246ef25cb,},Annotations:map[string]string{kubernetes.io/config.hash: 7d14d8f4abb76f867ab3a64246ef25cb,kubernetes.io/config.seen: 2024-09-16T10:49:43.326419700Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2305599c1317db9a583219dd7c98f87d6b8e759777a72ad229b6e60545f20bb2,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-m8fd7,Uid:fc549709-ddc0-4684-b377-46d33ef8f03d,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726483790154463725,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-m8fd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc549709-ddc0-4684-b377-46d33ef8f03d,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09
-16T10:39:09.487465959Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9ec606e5b45f08386cc4365f26f74fdcf01f39702536a53491cddf4af70506d6,Metadata:&PodSandboxMetadata{Name:kindnet-7v2cl,Uid:764ade4d-cbcd-42b8-9d68-b4ed502de9eb,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726483790056547354,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-7v2cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 764ade4d-cbcd-42b8-9d68-b4ed502de9eb,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-16T10:38:57.245484492Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c308ac1286c4c4d42beec7706420c33bee377b409e802753a17cd320bfcd7339,Metadata:&PodSandboxMetadata{Name:kube-proxy-crttt,Uid:0c8cad04-2c64-42f9-85e2-5e4fbfe7961d,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:172648379005
1933908,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-crttt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c8cad04-2c64-42f9-85e2-5e4fbfe7961d,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-16T10:38:57.241580111Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:35ef4979f7d506a9e9a9e9502fd7f40f644938e6f49a81ab8ad566cd13238560,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-244475,Uid:dcc439ebdfb1c8eb0ac4d211479d24ca,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726483790050296536,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcc439ebdfb1c8eb0ac4d211479d24ca,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertis
e-address.endpoint: 192.168.39.19:8443,kubernetes.io/config.hash: dcc439ebdfb1c8eb0ac4d211479d24ca,kubernetes.io/config.seen: 2024-09-16T10:38:52.514061824Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8cacdb30939e87fda706d3726485b9fd18466765801925d01faf0c5093af92aa,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-lzrg2,Uid:51962d07-f38a-4db3-86ee-af3d954dbec6,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726483790041061155,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-lzrg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51962d07-f38a-4db3-86ee-af3d954dbec6,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-16T10:39:09.496113367Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:bd9f73d3e8d5591b4d001266e01c13cbae68b0ecd62f9cea242eeff6615c6274,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-244475,Uid:caad45
7f3675fcf5fa9c2e121ebd3a2a,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726483790009603716,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caad457f3675fcf5fa9c2e121ebd3a2a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: caad457f3675fcf5fa9c2e121ebd3a2a,kubernetes.io/config.seen: 2024-09-16T10:38:52.514064274Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:194b56870a94accad5384a0c2f540bdec2d56f4cb27e5d45522ec884b22763ed,Metadata:&PodSandboxMetadata{Name:etcd-ha-244475,Uid:520edd0e46592c17928a302783a221a2,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726483790006567506,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 520edd0e46592c17928a302783a221
a2,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.19:2379,kubernetes.io/config.hash: 520edd0e46592c17928a302783a221a2,kubernetes.io/config.seen: 2024-09-16T10:38:52.514057254Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0e78d323319d6c4d2718135672cccb5ed02b63104781d8d532f3c876fc39b96a,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-244475,Uid:0485b752bb66b84c639fb8d5b648be4a,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726483789991063099,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0485b752bb66b84c639fb8d5b648be4a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 0485b752bb66b84c639fb8d5b648be4a,kubernetes.io/config.seen: 2024-09-16T10:38:52.514063070Z,kubernetes.io/config.source: fil
e,},RuntimeHandler:,},&PodSandbox{Id:3e70bdcf959532b2a9ea007abd776b8630e6092c8c5ae86c83bdd614c0a55f3c,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:2e1264f7-2197-4821-8238-82fac849b145,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726483789977229270,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e1264f7-2197-4821-8238-82fac849b145,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imageP
ullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-16T10:39:09.499691578Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ed1838f7506b4591d4ce8db6e7b3e3f56562a08f00651212eea0bb2549e4392c,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-d4m5s,Uid:6c479ead-4e77-41ca-9e2e-5cd7dc781761,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726483287795954118,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-d4m5s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c479ead-4e77-41ca-9e2e-5cd7dc781761,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-16T10:41:27.480703141Z,kubernetes.io/config.source:
api,},RuntimeHandler:,},&PodSandbox{Id:4d8c4f0a29bb7d9b84f11c3329413ddb9100b0afcf2bfdafe7d492249e8d29aa,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-lzrg2,Uid:51962d07-f38a-4db3-86ee-af3d954dbec6,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726483151316303996,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-lzrg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51962d07-f38a-4db3-86ee-af3d954dbec6,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-16T10:39:09.496113367Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:159730a21bea66d0374bcb65726757074bf5cb44a554b65743cbe055ce68f57d,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-m8fd7,Uid:fc549709-ddc0-4684-b377-46d33ef8f03d,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726483151294383580,Labels:map[string]string{io.kubernetes.container.name: POD,io.kube
rnetes.pod.name: coredns-7c65d6cfc9-m8fd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc549709-ddc0-4684-b377-46d33ef8f03d,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-16T10:39:09.487465959Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9c8ab7a98f7497eec011967ba3c30c63e7f332a19b817633e32195e3f1c7958b,Metadata:&PodSandboxMetadata{Name:kindnet-7v2cl,Uid:764ade4d-cbcd-42b8-9d68-b4ed502de9eb,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726483137572983795,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-7v2cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 764ade4d-cbcd-42b8-9d68-b4ed502de9eb,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-16T10:38:57.245484492Z,kubernetes.io/config.source: api,},Runtim
eHandler:,},&PodSandbox{Id:3fbb7c8e9af7172c733fa210b0085f7297ed8632dadffa7ad36fcf99012b9a72,Metadata:&PodSandboxMetadata{Name:kube-proxy-crttt,Uid:0c8cad04-2c64-42f9-85e2-5e4fbfe7961d,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726483137567226613,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-crttt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c8cad04-2c64-42f9-85e2-5e4fbfe7961d,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-16T10:38:57.241580111Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:42a76bc40dc3e2b105fd3eb70534ffb1703abed4e811e57f667c791cb8af94ce,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-244475,Uid:caad457f3675fcf5fa9c2e121ebd3a2a,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726483126178209423,Labels:map[string]string{component: kube-scheduler,io.kubernet
es.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caad457f3675fcf5fa9c2e121ebd3a2a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: caad457f3675fcf5fa9c2e121ebd3a2a,kubernetes.io/config.seen: 2024-09-16T10:38:45.656823440Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:693cfec22177da2ee98748363dd5cc765290555de3c0bcc51d452517bd92abaf,Metadata:&PodSandboxMetadata{Name:etcd-ha-244475,Uid:520edd0e46592c17928a302783a221a2,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726483126127965428,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 520edd0e46592c17928a302783a221a2,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.19:2379,kubernetes.io/config.hash: 520edd0e465
92c17928a302783a221a2,kubernetes.io/config.seen: 2024-09-16T10:38:45.656818037Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=e97de8aa-8d63-4f5a-aa75-2ced4691b76a name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 16 10:53:02 ha-244475 crio[3700]: time="2024-09-16 10:53:02.933048569Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0a6226c0-f074-4b18-9a45-5027d2cabe8f name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:53:02 ha-244475 crio[3700]: time="2024-09-16 10:53:02.933132163Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0a6226c0-f074-4b18-9a45-5027d2cabe8f name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:53:02 ha-244475 crio[3700]: time="2024-09-16 10:53:02.934972756Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:392523616ed483d70489493ff810c7c00e2c3679d91ed33cace5c42354ac85af,PodSandboxId:0e78d323319d6c4d2718135672cccb5ed02b63104781d8d532f3c876fc39b96a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726483854603167187,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0485b752bb66b84c639fb8d5b648be4a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91491eba4d33b0a26fd738ad6f63f6deaf5b4c730f037eaf6dc4908905fde9f9,PodSandboxId:3e70bdcf959532b2a9ea007abd776b8630e6092c8c5ae86c83bdd614c0a55f3c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726483838590275528,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e1264f7-2197-4821-8238-82fac849b145,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39bee169a2affe6d509924bc94b0ee90196af6c589d0fe7eb3f18c48559c446d,PodSandboxId:35ef4979f7d506a9e9a9e9502fd7f40f644938e6f49a81ab8ad566cd13238560,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726483830592623652,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcc439ebdfb1c8eb0ac4d211479d24ca,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f00a03475d073179332fbe79f2ed10d286e0c6bedf1861a8230d9919cde4a27,PodSandboxId:1eaacb088bf941e279ad42af3f72b19bf2790643fcde4457d0b5ff745a4806e5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726483823856894157,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-d4m5s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c479ead-4e77-41ca-9e2e-5cd7dc781761,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7904b48af0d517bf275eac462d37e6f93a48771be3452312f485bacfe1b3d19,PodSandboxId:0e78d323319d6c4d2718135672cccb5ed02b63104781d8d532f3c876fc39b96a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726483823027066020,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0485b752bb66b84c639fb8d5b648be4a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io
.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eff3d4b6ef1bb809c70e9089cf36c79f9cbea99823ed2b5055995b0e0abd6b23,PodSandboxId:6203d6a2f83f4a08dbecdceab914bea8099f1c8fa17a9e052e272e18cde86fb1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726483805692389671,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d14d8f4abb76f867ab3a64246ef25cb,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1da90c534a1eee0538fdeec3079b247aafe60da09cbf760ae3433480a66cc95a,PodSandboxId:3e70bdcf959532b2a9ea007abd776b8630e6092c8c5ae86c83bdd614c0a55f3c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726483790620182717,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e1264f7-2197-4821-8238-82fac849b145,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6dd41088c822947b73a621121e25131324d1b60468f8da2736a5e5abd945b74f,PodSandboxId:9ec606e5b45f08386cc4365f26f74fdcf01f39702536a53491cddf4af70506d6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726483790773221140,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7v2cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 764ade4d-cbcd-42b8-9d68-b4ed502de9eb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrac
ePeriod: 30,},},&Container{Id:ba907061155c7d07bb2ece7dd40deebec6a7b385674f721aa5faa111ad73a577,PodSandboxId:8cacdb30939e87fda706d3726485b9fd18466765801925d01faf0c5093af92aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726483790780262687,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lzrg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51962d07-f38a-4db3-86ee-af3d954dbec6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io
.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:268d2527b9c98518a83f871d1bd34bfb7b48d7a8bc4732dac191726f6e049791,PodSandboxId:194b56870a94accad5384a0c2f540bdec2d56f4cb27e5d45522ec884b22763ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726483790578380077,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 520edd0e46592c17928a302783a221a2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes
.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ef7bc6ba17087872de21708e4087c86e92125885155c397d03bed2863bb52ed,PodSandboxId:c308ac1286c4c4d42beec7706420c33bee377b409e802753a17cd320bfcd7339,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726483790538342801,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-crttt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c8cad04-2c64-42f9-85e2-5e4fbfe7961d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a6f1aac71418bcb033aee0c5b867eeb6bf5323876a35c8db9ce1c1f13caf83e,PodSandboxId:2305599c1317db9a583219dd7c98f87d6b8e759777a72ad229b6e60545f20bb2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726483790688990513,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m8fd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc549709-ddc0-4684-b377-46d33ef8f03d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"
containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c692c6a18e99d2673e412ffd521c76f2a0da04c77d38fec5ed2cb8cb05c3e3f6,PodSandboxId:35ef4979f7d506a9e9a9e9502fd7f40f644938e6f49a81ab8ad566cd13238560,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726483790509242730,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcc439ebdfb1c8eb0
ac4d211479d24ca,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c0110ceab6a6b07ad42ced3733baf8bab2d1cafffcbb49d7f182984734ea704,PodSandboxId:bd9f73d3e8d5591b4d001266e01c13cbae68b0ecd62f9cea242eeff6615c6274,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726483790366687052,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caad457f3675fcf5fa9c2e121ebd3a2a,},Ann
otations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c701fcd74abafe9b3ac301067c34d7098459eff6a0dfbe35f14fb698beab51b,PodSandboxId:ed1838f7506b4591d4ce8db6e7b3e3f56562a08f00651212eea0bb2549e4392c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726483289055421398,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-d4m5s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c479ead-4e77-41ca-9e2e-5cd7dc781761,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:034030626ec0248410f35cc75f1a050ff672de87669017788a692eecfb974eb3,PodSandboxId:159730a21bea66d0374bcb65726757074bf5cb44a554b65743cbe055ce68f57d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726483151504308598,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m8fd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc549709-ddc0-4684-b377-46d33ef8f03d,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f78c5e4a3a25c53eb3051665aed0aed847ccf5bc052d51eb080f44bbb956465,PodSandboxId:4d8c4f0a29bb7d9b84f11c3329413ddb9100b0afcf2bfdafe7d492249e8d29aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726483151498590571,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lzrg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51962d07-f38a-4db3-86ee-af3d954dbec6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac63170bf5bb3e138ee2902ee9d3245bf86aad0f7cd971de1989c1c8d4b08913,PodSandboxId:9c8ab7a98f7497eec011967ba3c30c63e7f332a19b817633e32195e3f1c7958b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726483138080721834,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7v2cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 764ade4d-cbcd-42b8-9d68-b4ed502de9eb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e6d69b26d5c1088c047248a212bd53f5e13828d4e0a7c2156e0ddc88ac76ccf,PodSandboxId:3fbb7c8e9af7172c733fa210b0085f7297ed8632dadffa7ad36fcf99012b9a72,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726483137842389587,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-crttt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c8cad04-2c64-42f9-85e2-5e4fbfe7961d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0223669288e248ff9be2473e59f09af3156f136f2627463151e0bc52191ddfb,PodSandboxId:42a76bc40dc3e2b105fd3eb70534ffb1703abed4e811e57f667c791cb8af94ce,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726483126505949950,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caad457f3675fcf5fa9c2e121ebd3a2a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:308650af833f609d658eacf1b2b1c8baa29f17b904e878b9f27e62fe4fb823d3,PodSandboxId:693cfec22177da2ee98748363dd5cc765290555de3c0bcc51d452517bd92abaf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726483126351029784,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 520edd0e46592c17928a302783a221a2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0a6226c0-f074-4b18-9a45-5027d2cabe8f name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:53:02 ha-244475 crio[3700]: time="2024-09-16 10:53:02.957715739Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=78c94f47-09dc-4bee-90ff-ff2569e82c96 name=/runtime.v1.RuntimeService/Version
	Sep 16 10:53:02 ha-244475 crio[3700]: time="2024-09-16 10:53:02.958188280Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=78c94f47-09dc-4bee-90ff-ff2569e82c96 name=/runtime.v1.RuntimeService/Version
	Sep 16 10:53:02 ha-244475 crio[3700]: time="2024-09-16 10:53:02.959670608Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b1fe7262-516e-419b-91e1-db6e1ff02526 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:53:02 ha-244475 crio[3700]: time="2024-09-16 10:53:02.960235680Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483982960207854,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b1fe7262-516e-419b-91e1-db6e1ff02526 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:53:02 ha-244475 crio[3700]: time="2024-09-16 10:53:02.961112584Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=722fbc12-1fd2-4f2f-b067-6e5a1702edab name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:53:02 ha-244475 crio[3700]: time="2024-09-16 10:53:02.961209024Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=722fbc12-1fd2-4f2f-b067-6e5a1702edab name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:53:02 ha-244475 crio[3700]: time="2024-09-16 10:53:02.961881434Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:392523616ed483d70489493ff810c7c00e2c3679d91ed33cace5c42354ac85af,PodSandboxId:0e78d323319d6c4d2718135672cccb5ed02b63104781d8d532f3c876fc39b96a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726483854603167187,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0485b752bb66b84c639fb8d5b648be4a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91491eba4d33b0a26fd738ad6f63f6deaf5b4c730f037eaf6dc4908905fde9f9,PodSandboxId:3e70bdcf959532b2a9ea007abd776b8630e6092c8c5ae86c83bdd614c0a55f3c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726483838590275528,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e1264f7-2197-4821-8238-82fac849b145,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39bee169a2affe6d509924bc94b0ee90196af6c589d0fe7eb3f18c48559c446d,PodSandboxId:35ef4979f7d506a9e9a9e9502fd7f40f644938e6f49a81ab8ad566cd13238560,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726483830592623652,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcc439ebdfb1c8eb0ac4d211479d24ca,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f00a03475d073179332fbe79f2ed10d286e0c6bedf1861a8230d9919cde4a27,PodSandboxId:1eaacb088bf941e279ad42af3f72b19bf2790643fcde4457d0b5ff745a4806e5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726483823856894157,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-d4m5s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c479ead-4e77-41ca-9e2e-5cd7dc781761,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7904b48af0d517bf275eac462d37e6f93a48771be3452312f485bacfe1b3d19,PodSandboxId:0e78d323319d6c4d2718135672cccb5ed02b63104781d8d532f3c876fc39b96a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726483823027066020,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0485b752bb66b84c639fb8d5b648be4a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io
.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eff3d4b6ef1bb809c70e9089cf36c79f9cbea99823ed2b5055995b0e0abd6b23,PodSandboxId:6203d6a2f83f4a08dbecdceab914bea8099f1c8fa17a9e052e272e18cde86fb1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726483805692389671,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d14d8f4abb76f867ab3a64246ef25cb,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1da90c534a1eee0538fdeec3079b247aafe60da09cbf760ae3433480a66cc95a,PodSandboxId:3e70bdcf959532b2a9ea007abd776b8630e6092c8c5ae86c83bdd614c0a55f3c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726483790620182717,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e1264f7-2197-4821-8238-82fac849b145,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6dd41088c822947b73a621121e25131324d1b60468f8da2736a5e5abd945b74f,PodSandboxId:9ec606e5b45f08386cc4365f26f74fdcf01f39702536a53491cddf4af70506d6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726483790773221140,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7v2cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 764ade4d-cbcd-42b8-9d68-b4ed502de9eb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrac
ePeriod: 30,},},&Container{Id:ba907061155c7d07bb2ece7dd40deebec6a7b385674f721aa5faa111ad73a577,PodSandboxId:8cacdb30939e87fda706d3726485b9fd18466765801925d01faf0c5093af92aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726483790780262687,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lzrg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51962d07-f38a-4db3-86ee-af3d954dbec6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io
.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:268d2527b9c98518a83f871d1bd34bfb7b48d7a8bc4732dac191726f6e049791,PodSandboxId:194b56870a94accad5384a0c2f540bdec2d56f4cb27e5d45522ec884b22763ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726483790578380077,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 520edd0e46592c17928a302783a221a2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes
.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ef7bc6ba17087872de21708e4087c86e92125885155c397d03bed2863bb52ed,PodSandboxId:c308ac1286c4c4d42beec7706420c33bee377b409e802753a17cd320bfcd7339,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726483790538342801,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-crttt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c8cad04-2c64-42f9-85e2-5e4fbfe7961d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a6f1aac71418bcb033aee0c5b867eeb6bf5323876a35c8db9ce1c1f13caf83e,PodSandboxId:2305599c1317db9a583219dd7c98f87d6b8e759777a72ad229b6e60545f20bb2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726483790688990513,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m8fd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc549709-ddc0-4684-b377-46d33ef8f03d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"
containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c692c6a18e99d2673e412ffd521c76f2a0da04c77d38fec5ed2cb8cb05c3e3f6,PodSandboxId:35ef4979f7d506a9e9a9e9502fd7f40f644938e6f49a81ab8ad566cd13238560,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726483790509242730,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcc439ebdfb1c8eb0
ac4d211479d24ca,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c0110ceab6a6b07ad42ced3733baf8bab2d1cafffcbb49d7f182984734ea704,PodSandboxId:bd9f73d3e8d5591b4d001266e01c13cbae68b0ecd62f9cea242eeff6615c6274,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726483790366687052,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caad457f3675fcf5fa9c2e121ebd3a2a,},Ann
otations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c701fcd74abafe9b3ac301067c34d7098459eff6a0dfbe35f14fb698beab51b,PodSandboxId:ed1838f7506b4591d4ce8db6e7b3e3f56562a08f00651212eea0bb2549e4392c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726483289055421398,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-d4m5s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c479ead-4e77-41ca-9e2e-5cd7dc781761,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:034030626ec0248410f35cc75f1a050ff672de87669017788a692eecfb974eb3,PodSandboxId:159730a21bea66d0374bcb65726757074bf5cb44a554b65743cbe055ce68f57d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726483151504308598,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m8fd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc549709-ddc0-4684-b377-46d33ef8f03d,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f78c5e4a3a25c53eb3051665aed0aed847ccf5bc052d51eb080f44bbb956465,PodSandboxId:4d8c4f0a29bb7d9b84f11c3329413ddb9100b0afcf2bfdafe7d492249e8d29aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726483151498590571,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lzrg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51962d07-f38a-4db3-86ee-af3d954dbec6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac63170bf5bb3e138ee2902ee9d3245bf86aad0f7cd971de1989c1c8d4b08913,PodSandboxId:9c8ab7a98f7497eec011967ba3c30c63e7f332a19b817633e32195e3f1c7958b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726483138080721834,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7v2cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 764ade4d-cbcd-42b8-9d68-b4ed502de9eb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e6d69b26d5c1088c047248a212bd53f5e13828d4e0a7c2156e0ddc88ac76ccf,PodSandboxId:3fbb7c8e9af7172c733fa210b0085f7297ed8632dadffa7ad36fcf99012b9a72,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726483137842389587,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-crttt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c8cad04-2c64-42f9-85e2-5e4fbfe7961d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0223669288e248ff9be2473e59f09af3156f136f2627463151e0bc52191ddfb,PodSandboxId:42a76bc40dc3e2b105fd3eb70534ffb1703abed4e811e57f667c791cb8af94ce,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726483126505949950,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caad457f3675fcf5fa9c2e121ebd3a2a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:308650af833f609d658eacf1b2b1c8baa29f17b904e878b9f27e62fe4fb823d3,PodSandboxId:693cfec22177da2ee98748363dd5cc765290555de3c0bcc51d452517bd92abaf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726483126351029784,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 520edd0e46592c17928a302783a221a2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=722fbc12-1fd2-4f2f-b067-6e5a1702edab name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:53:03 ha-244475 crio[3700]: time="2024-09-16 10:53:03.013273648Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9638e3cf-2433-42ba-9887-4fda8e7af766 name=/runtime.v1.RuntimeService/Version
	Sep 16 10:53:03 ha-244475 crio[3700]: time="2024-09-16 10:53:03.013348133Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9638e3cf-2433-42ba-9887-4fda8e7af766 name=/runtime.v1.RuntimeService/Version
	Sep 16 10:53:03 ha-244475 crio[3700]: time="2024-09-16 10:53:03.020864555Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8cc5d188-aa70-4de2-b3f5-2ca1aa4cd54e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:53:03 ha-244475 crio[3700]: time="2024-09-16 10:53:03.021327606Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483983021305735,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8cc5d188-aa70-4de2-b3f5-2ca1aa4cd54e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:53:03 ha-244475 crio[3700]: time="2024-09-16 10:53:03.022042002Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ddeafe60-b21d-412c-8015-f5d558cf32f9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:53:03 ha-244475 crio[3700]: time="2024-09-16 10:53:03.022102738Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ddeafe60-b21d-412c-8015-f5d558cf32f9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:53:03 ha-244475 crio[3700]: time="2024-09-16 10:53:03.022634796Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:392523616ed483d70489493ff810c7c00e2c3679d91ed33cace5c42354ac85af,PodSandboxId:0e78d323319d6c4d2718135672cccb5ed02b63104781d8d532f3c876fc39b96a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726483854603167187,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0485b752bb66b84c639fb8d5b648be4a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91491eba4d33b0a26fd738ad6f63f6deaf5b4c730f037eaf6dc4908905fde9f9,PodSandboxId:3e70bdcf959532b2a9ea007abd776b8630e6092c8c5ae86c83bdd614c0a55f3c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726483838590275528,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e1264f7-2197-4821-8238-82fac849b145,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39bee169a2affe6d509924bc94b0ee90196af6c589d0fe7eb3f18c48559c446d,PodSandboxId:35ef4979f7d506a9e9a9e9502fd7f40f644938e6f49a81ab8ad566cd13238560,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726483830592623652,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcc439ebdfb1c8eb0ac4d211479d24ca,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f00a03475d073179332fbe79f2ed10d286e0c6bedf1861a8230d9919cde4a27,PodSandboxId:1eaacb088bf941e279ad42af3f72b19bf2790643fcde4457d0b5ff745a4806e5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726483823856894157,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-d4m5s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c479ead-4e77-41ca-9e2e-5cd7dc781761,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7904b48af0d517bf275eac462d37e6f93a48771be3452312f485bacfe1b3d19,PodSandboxId:0e78d323319d6c4d2718135672cccb5ed02b63104781d8d532f3c876fc39b96a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726483823027066020,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0485b752bb66b84c639fb8d5b648be4a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io
.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eff3d4b6ef1bb809c70e9089cf36c79f9cbea99823ed2b5055995b0e0abd6b23,PodSandboxId:6203d6a2f83f4a08dbecdceab914bea8099f1c8fa17a9e052e272e18cde86fb1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726483805692389671,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d14d8f4abb76f867ab3a64246ef25cb,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1da90c534a1eee0538fdeec3079b247aafe60da09cbf760ae3433480a66cc95a,PodSandboxId:3e70bdcf959532b2a9ea007abd776b8630e6092c8c5ae86c83bdd614c0a55f3c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726483790620182717,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e1264f7-2197-4821-8238-82fac849b145,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6dd41088c822947b73a621121e25131324d1b60468f8da2736a5e5abd945b74f,PodSandboxId:9ec606e5b45f08386cc4365f26f74fdcf01f39702536a53491cddf4af70506d6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726483790773221140,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7v2cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 764ade4d-cbcd-42b8-9d68-b4ed502de9eb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrac
ePeriod: 30,},},&Container{Id:ba907061155c7d07bb2ece7dd40deebec6a7b385674f721aa5faa111ad73a577,PodSandboxId:8cacdb30939e87fda706d3726485b9fd18466765801925d01faf0c5093af92aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726483790780262687,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lzrg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51962d07-f38a-4db3-86ee-af3d954dbec6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io
.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:268d2527b9c98518a83f871d1bd34bfb7b48d7a8bc4732dac191726f6e049791,PodSandboxId:194b56870a94accad5384a0c2f540bdec2d56f4cb27e5d45522ec884b22763ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726483790578380077,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 520edd0e46592c17928a302783a221a2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes
.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ef7bc6ba17087872de21708e4087c86e92125885155c397d03bed2863bb52ed,PodSandboxId:c308ac1286c4c4d42beec7706420c33bee377b409e802753a17cd320bfcd7339,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726483790538342801,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-crttt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c8cad04-2c64-42f9-85e2-5e4fbfe7961d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a6f1aac71418bcb033aee0c5b867eeb6bf5323876a35c8db9ce1c1f13caf83e,PodSandboxId:2305599c1317db9a583219dd7c98f87d6b8e759777a72ad229b6e60545f20bb2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726483790688990513,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m8fd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc549709-ddc0-4684-b377-46d33ef8f03d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"
containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c692c6a18e99d2673e412ffd521c76f2a0da04c77d38fec5ed2cb8cb05c3e3f6,PodSandboxId:35ef4979f7d506a9e9a9e9502fd7f40f644938e6f49a81ab8ad566cd13238560,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726483790509242730,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcc439ebdfb1c8eb0
ac4d211479d24ca,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c0110ceab6a6b07ad42ced3733baf8bab2d1cafffcbb49d7f182984734ea704,PodSandboxId:bd9f73d3e8d5591b4d001266e01c13cbae68b0ecd62f9cea242eeff6615c6274,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726483790366687052,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caad457f3675fcf5fa9c2e121ebd3a2a,},Ann
otations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c701fcd74abafe9b3ac301067c34d7098459eff6a0dfbe35f14fb698beab51b,PodSandboxId:ed1838f7506b4591d4ce8db6e7b3e3f56562a08f00651212eea0bb2549e4392c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726483289055421398,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-d4m5s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c479ead-4e77-41ca-9e2e-5cd7dc781761,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:034030626ec0248410f35cc75f1a050ff672de87669017788a692eecfb974eb3,PodSandboxId:159730a21bea66d0374bcb65726757074bf5cb44a554b65743cbe055ce68f57d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726483151504308598,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m8fd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc549709-ddc0-4684-b377-46d33ef8f03d,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f78c5e4a3a25c53eb3051665aed0aed847ccf5bc052d51eb080f44bbb956465,PodSandboxId:4d8c4f0a29bb7d9b84f11c3329413ddb9100b0afcf2bfdafe7d492249e8d29aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726483151498590571,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lzrg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51962d07-f38a-4db3-86ee-af3d954dbec6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac63170bf5bb3e138ee2902ee9d3245bf86aad0f7cd971de1989c1c8d4b08913,PodSandboxId:9c8ab7a98f7497eec011967ba3c30c63e7f332a19b817633e32195e3f1c7958b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726483138080721834,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7v2cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 764ade4d-cbcd-42b8-9d68-b4ed502de9eb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e6d69b26d5c1088c047248a212bd53f5e13828d4e0a7c2156e0ddc88ac76ccf,PodSandboxId:3fbb7c8e9af7172c733fa210b0085f7297ed8632dadffa7ad36fcf99012b9a72,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726483137842389587,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-crttt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c8cad04-2c64-42f9-85e2-5e4fbfe7961d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0223669288e248ff9be2473e59f09af3156f136f2627463151e0bc52191ddfb,PodSandboxId:42a76bc40dc3e2b105fd3eb70534ffb1703abed4e811e57f667c791cb8af94ce,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726483126505949950,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caad457f3675fcf5fa9c2e121ebd3a2a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:308650af833f609d658eacf1b2b1c8baa29f17b904e878b9f27e62fe4fb823d3,PodSandboxId:693cfec22177da2ee98748363dd5cc765290555de3c0bcc51d452517bd92abaf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726483126351029784,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 520edd0e46592c17928a302783a221a2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ddeafe60-b21d-412c-8015-f5d558cf32f9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:53:03 ha-244475 crio[3700]: time="2024-09-16 10:53:03.065097769Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5ee23a7d-1988-40c6-baf1-54f44a7410b5 name=/runtime.v1.RuntimeService/Version
	Sep 16 10:53:03 ha-244475 crio[3700]: time="2024-09-16 10:53:03.065191245Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5ee23a7d-1988-40c6-baf1-54f44a7410b5 name=/runtime.v1.RuntimeService/Version
	Sep 16 10:53:03 ha-244475 crio[3700]: time="2024-09-16 10:53:03.066588276Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dd745dca-1d6d-4c0e-bc03-a1b16f03ccd8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:53:03 ha-244475 crio[3700]: time="2024-09-16 10:53:03.067030369Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483983067007186,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dd745dca-1d6d-4c0e-bc03-a1b16f03ccd8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:53:03 ha-244475 crio[3700]: time="2024-09-16 10:53:03.067675027Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1c490277-835b-4b0f-955c-8a6d4db3834c name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:53:03 ha-244475 crio[3700]: time="2024-09-16 10:53:03.067752958Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1c490277-835b-4b0f-955c-8a6d4db3834c name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:53:03 ha-244475 crio[3700]: time="2024-09-16 10:53:03.068150886Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:392523616ed483d70489493ff810c7c00e2c3679d91ed33cace5c42354ac85af,PodSandboxId:0e78d323319d6c4d2718135672cccb5ed02b63104781d8d532f3c876fc39b96a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726483854603167187,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0485b752bb66b84c639fb8d5b648be4a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91491eba4d33b0a26fd738ad6f63f6deaf5b4c730f037eaf6dc4908905fde9f9,PodSandboxId:3e70bdcf959532b2a9ea007abd776b8630e6092c8c5ae86c83bdd614c0a55f3c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726483838590275528,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e1264f7-2197-4821-8238-82fac849b145,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39bee169a2affe6d509924bc94b0ee90196af6c589d0fe7eb3f18c48559c446d,PodSandboxId:35ef4979f7d506a9e9a9e9502fd7f40f644938e6f49a81ab8ad566cd13238560,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726483830592623652,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcc439ebdfb1c8eb0ac4d211479d24ca,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f00a03475d073179332fbe79f2ed10d286e0c6bedf1861a8230d9919cde4a27,PodSandboxId:1eaacb088bf941e279ad42af3f72b19bf2790643fcde4457d0b5ff745a4806e5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726483823856894157,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-d4m5s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c479ead-4e77-41ca-9e2e-5cd7dc781761,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7904b48af0d517bf275eac462d37e6f93a48771be3452312f485bacfe1b3d19,PodSandboxId:0e78d323319d6c4d2718135672cccb5ed02b63104781d8d532f3c876fc39b96a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726483823027066020,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0485b752bb66b84c639fb8d5b648be4a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io
.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eff3d4b6ef1bb809c70e9089cf36c79f9cbea99823ed2b5055995b0e0abd6b23,PodSandboxId:6203d6a2f83f4a08dbecdceab914bea8099f1c8fa17a9e052e272e18cde86fb1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726483805692389671,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d14d8f4abb76f867ab3a64246ef25cb,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1da90c534a1eee0538fdeec3079b247aafe60da09cbf760ae3433480a66cc95a,PodSandboxId:3e70bdcf959532b2a9ea007abd776b8630e6092c8c5ae86c83bdd614c0a55f3c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726483790620182717,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e1264f7-2197-4821-8238-82fac849b145,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6dd41088c822947b73a621121e25131324d1b60468f8da2736a5e5abd945b74f,PodSandboxId:9ec606e5b45f08386cc4365f26f74fdcf01f39702536a53491cddf4af70506d6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726483790773221140,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7v2cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 764ade4d-cbcd-42b8-9d68-b4ed502de9eb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrac
ePeriod: 30,},},&Container{Id:ba907061155c7d07bb2ece7dd40deebec6a7b385674f721aa5faa111ad73a577,PodSandboxId:8cacdb30939e87fda706d3726485b9fd18466765801925d01faf0c5093af92aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726483790780262687,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lzrg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51962d07-f38a-4db3-86ee-af3d954dbec6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io
.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:268d2527b9c98518a83f871d1bd34bfb7b48d7a8bc4732dac191726f6e049791,PodSandboxId:194b56870a94accad5384a0c2f540bdec2d56f4cb27e5d45522ec884b22763ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726483790578380077,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 520edd0e46592c17928a302783a221a2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes
.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ef7bc6ba17087872de21708e4087c86e92125885155c397d03bed2863bb52ed,PodSandboxId:c308ac1286c4c4d42beec7706420c33bee377b409e802753a17cd320bfcd7339,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726483790538342801,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-crttt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c8cad04-2c64-42f9-85e2-5e4fbfe7961d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a6f1aac71418bcb033aee0c5b867eeb6bf5323876a35c8db9ce1c1f13caf83e,PodSandboxId:2305599c1317db9a583219dd7c98f87d6b8e759777a72ad229b6e60545f20bb2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726483790688990513,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m8fd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc549709-ddc0-4684-b377-46d33ef8f03d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"
containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c692c6a18e99d2673e412ffd521c76f2a0da04c77d38fec5ed2cb8cb05c3e3f6,PodSandboxId:35ef4979f7d506a9e9a9e9502fd7f40f644938e6f49a81ab8ad566cd13238560,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726483790509242730,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcc439ebdfb1c8eb0
ac4d211479d24ca,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c0110ceab6a6b07ad42ced3733baf8bab2d1cafffcbb49d7f182984734ea704,PodSandboxId:bd9f73d3e8d5591b4d001266e01c13cbae68b0ecd62f9cea242eeff6615c6274,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726483790366687052,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caad457f3675fcf5fa9c2e121ebd3a2a,},Ann
otations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c701fcd74abafe9b3ac301067c34d7098459eff6a0dfbe35f14fb698beab51b,PodSandboxId:ed1838f7506b4591d4ce8db6e7b3e3f56562a08f00651212eea0bb2549e4392c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726483289055421398,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-d4m5s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c479ead-4e77-41ca-9e2e-5cd7dc781761,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:034030626ec0248410f35cc75f1a050ff672de87669017788a692eecfb974eb3,PodSandboxId:159730a21bea66d0374bcb65726757074bf5cb44a554b65743cbe055ce68f57d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726483151504308598,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m8fd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc549709-ddc0-4684-b377-46d33ef8f03d,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f78c5e4a3a25c53eb3051665aed0aed847ccf5bc052d51eb080f44bbb956465,PodSandboxId:4d8c4f0a29bb7d9b84f11c3329413ddb9100b0afcf2bfdafe7d492249e8d29aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726483151498590571,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lzrg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51962d07-f38a-4db3-86ee-af3d954dbec6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac63170bf5bb3e138ee2902ee9d3245bf86aad0f7cd971de1989c1c8d4b08913,PodSandboxId:9c8ab7a98f7497eec011967ba3c30c63e7f332a19b817633e32195e3f1c7958b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726483138080721834,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7v2cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 764ade4d-cbcd-42b8-9d68-b4ed502de9eb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e6d69b26d5c1088c047248a212bd53f5e13828d4e0a7c2156e0ddc88ac76ccf,PodSandboxId:3fbb7c8e9af7172c733fa210b0085f7297ed8632dadffa7ad36fcf99012b9a72,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726483137842389587,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-crttt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c8cad04-2c64-42f9-85e2-5e4fbfe7961d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0223669288e248ff9be2473e59f09af3156f136f2627463151e0bc52191ddfb,PodSandboxId:42a76bc40dc3e2b105fd3eb70534ffb1703abed4e811e57f667c791cb8af94ce,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726483126505949950,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caad457f3675fcf5fa9c2e121ebd3a2a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:308650af833f609d658eacf1b2b1c8baa29f17b904e878b9f27e62fe4fb823d3,PodSandboxId:693cfec22177da2ee98748363dd5cc765290555de3c0bcc51d452517bd92abaf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726483126351029784,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 520edd0e46592c17928a302783a221a2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1c490277-835b-4b0f-955c-8a6d4db3834c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	392523616ed48       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      2 minutes ago       Running             kube-controller-manager   3                   0e78d323319d6       kube-controller-manager-ha-244475
	91491eba4d33b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago       Running             storage-provisioner       3                   3e70bdcf95953       storage-provisioner
	39bee169a2aff       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      2 minutes ago       Running             kube-apiserver            3                   35ef4979f7d50       kube-apiserver-ha-244475
	2f00a03475d07       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      2 minutes ago       Running             busybox                   1                   1eaacb088bf94       busybox-7dff88458-d4m5s
	c7904b48af0d5       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      2 minutes ago       Exited              kube-controller-manager   2                   0e78d323319d6       kube-controller-manager-ha-244475
	eff3d4b6ef1bb       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago       Running             kube-vip                  0                   6203d6a2f83f4       kube-vip-ha-244475
	ba907061155c7       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      3 minutes ago       Running             coredns                   1                   8cacdb30939e8       coredns-7c65d6cfc9-lzrg2
	6dd41088c8229       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      3 minutes ago       Running             kindnet-cni               1                   9ec606e5b45f0       kindnet-7v2cl
	3a6f1aac71418       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      3 minutes ago       Running             coredns                   1                   2305599c1317d       coredns-7c65d6cfc9-m8fd7
	1da90c534a1ee       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Exited              storage-provisioner       2                   3e70bdcf95953       storage-provisioner
	268d2527b9c98       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      3 minutes ago       Running             etcd                      1                   194b56870a94a       etcd-ha-244475
	2ef7bc6ba1708       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      3 minutes ago       Running             kube-proxy                1                   c308ac1286c4c       kube-proxy-crttt
	c692c6a18e99d       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      3 minutes ago       Exited              kube-apiserver            2                   35ef4979f7d50       kube-apiserver-ha-244475
	6c0110ceab6a6       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      3 minutes ago       Running             kube-scheduler            1                   bd9f73d3e8d55       kube-scheduler-ha-244475
	5c701fcd74aba       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   11 minutes ago      Exited              busybox                   0                   ed1838f7506b4       busybox-7dff88458-d4m5s
	034030626ec02       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      13 minutes ago      Exited              coredns                   0                   159730a21bea6       coredns-7c65d6cfc9-m8fd7
	7f78c5e4a3a25       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      13 minutes ago      Exited              coredns                   0                   4d8c4f0a29bb7       coredns-7c65d6cfc9-lzrg2
	ac63170bf5bb3       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      14 minutes ago      Exited              kindnet-cni               0                   9c8ab7a98f749       kindnet-7v2cl
	6e6d69b26d5c1       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      14 minutes ago      Exited              kube-proxy                0                   3fbb7c8e9af71       kube-proxy-crttt
	a0223669288e2       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      14 minutes ago      Exited              kube-scheduler            0                   42a76bc40dc3e       kube-scheduler-ha-244475
	308650af833f6       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      14 minutes ago      Exited              etcd                      0                   693cfec22177d       etcd-ha-244475
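	(Editor's note: the container listing above is what CRI-O returns for the /runtime.v1.RuntimeService/ListContainers RPC logged earlier in this section. A minimal sketch of reproducing it directly against the CRI endpoint, assuming the default CRI-O socket path /var/run/crio/crio.sock on the minikube node, could look like the following; names and paths outside the logs above are assumptions, not part of the test harness.)

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Dial the local CRI-O unix socket; the CRI endpoint is not TLS-protected. (assumed default path)
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()

		// An empty filter returns every container, running or exited, matching the dump above.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			log.Fatal(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s\t%s\t%s\tattempt=%d\n",
				c.Id[:13], c.Metadata.Name, c.State, c.Metadata.Attempt)
		}
	}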
	
	
	==> coredns [034030626ec0248410f35cc75f1a050ff672de87669017788a692eecfb974eb3] <==
	[INFO] 10.244.2.2:42931 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000200783s
	[INFO] 10.244.0.4:33694 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014309s
	[INFO] 10.244.0.4:35532 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000107639s
	[INFO] 10.244.0.4:53168 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00009525s
	[INFO] 10.244.0.4:50253 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001250965s
	[INFO] 10.244.0.4:40357 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000089492s
	[INFO] 10.244.1.2:49152 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001985919s
	[INFO] 10.244.1.2:50396 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000132748s
	[INFO] 10.244.2.2:38313 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000951s
	[INFO] 10.244.0.4:43336 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000168268s
	[INFO] 10.244.0.4:44949 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000123895s
	[INFO] 10.244.0.4:52348 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000107748s
	[INFO] 10.244.1.2:36649 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000286063s
	[INFO] 10.244.1.2:42747 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000082265s
	[INFO] 10.244.2.2:45891 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00018425s
	[INFO] 10.244.2.2:53625 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000126302s
	[INFO] 10.244.2.2:44397 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000109098s
	[INFO] 10.244.0.4:39956 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013935s
	[INFO] 10.244.0.4:39139 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00008694s
	[INFO] 10.244.0.4:38933 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000060589s
	[INFO] 10.244.1.2:36849 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000146451s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services)
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [3a6f1aac71418bcb033aee0c5b867eeb6bf5323876a35c8db9ce1c1f13caf83e] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:48952->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:48952->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
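	(Editor's note: the repeated "plugin/ready: Still waiting on: \"kubernetes\"" lines mean the ready plugin holds this CoreDNS pod NotReady until the kubernetes plugin finishes its initial list/watch against the API server, which the errors above show failing with "no route to host" / "connection refused" against 10.96.0.1:443. A small probe of that readiness endpoint, assuming CoreDNS's default ready listener on port 8181 and using the pod IP 10.244.0.6 that appears in the connection errors above, is sketched below; it is illustrative only.)

	package main

	import (
		"fmt"
		"io"
		"log"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 3 * time.Second}
		// The ready plugin answers 200 OK only once all monitored plugins report ready. (assumed default :8181)
		resp, err := client.Get("http://10.244.0.6:8181/ready")
		if err != nil {
			log.Fatal(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.StatusCode, string(body))
	}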
	
	
	==> coredns [7f78c5e4a3a25c53eb3051665aed0aed847ccf5bc052d51eb080f44bbb956465] <==
	[INFO] 10.244.2.2:52615 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000191836s
	[INFO] 10.244.2.2:49834 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000166519s
	[INFO] 10.244.2.2:39495 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000127494s
	[INFO] 10.244.0.4:37394 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001694487s
	[INFO] 10.244.0.4:36178 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000091958s
	[INFO] 10.244.0.4:33247 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000160731s
	[INFO] 10.244.1.2:52512 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150889s
	[INFO] 10.244.1.2:43450 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000182534s
	[INFO] 10.244.1.2:56403 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000150359s
	[INFO] 10.244.1.2:51246 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001230547s
	[INFO] 10.244.1.2:39220 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090721s
	[INFO] 10.244.1.2:41766 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000155057s
	[INFO] 10.244.2.2:38017 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000153103s
	[INFO] 10.244.2.2:44469 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000099361s
	[INFO] 10.244.2.2:52465 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000086382s
	[INFO] 10.244.0.4:36474 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000117775s
	[INFO] 10.244.1.2:32790 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142151s
	[INFO] 10.244.1.2:39272 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000113629s
	[INFO] 10.244.2.2:43223 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000141566s
	[INFO] 10.244.0.4:36502 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000282073s
	[INFO] 10.244.1.2:60302 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000207499s
	[INFO] 10.244.1.2:49950 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000184993s
	[INFO] 10.244.1.2:54052 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000094371s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [ba907061155c7d07bb2ece7dd40deebec6a7b385674f721aa5faa111ad73a577] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:57916->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:57916->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:34986->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:34986->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-244475
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-244475
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=ha-244475
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_38_53_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:38:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-244475
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:52:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:50:43 +0000   Mon, 16 Sep 2024 10:38:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:50:43 +0000   Mon, 16 Sep 2024 10:38:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:50:43 +0000   Mon, 16 Sep 2024 10:38:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:50:43 +0000   Mon, 16 Sep 2024 10:39:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.19
	  Hostname:    ha-244475
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8707c2bcd2ba47818dfac2382d400cf1
	  System UUID:                8707c2bc-d2ba-4781-8dfa-c2382d400cf1
	  Boot ID:                    174ade31-14cd-4b32-9050-92f81ba6b3e1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-d4m5s              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7c65d6cfc9-lzrg2             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-7c65d6cfc9-m8fd7             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-ha-244475                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-7v2cl                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-244475             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-244475    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-crttt                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-244475             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-244475                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m30s                  kube-proxy       
	  Normal   Starting                 14m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node ha-244475 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     14m (x7 over 14m)      kubelet          Node ha-244475 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node ha-244475 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 14m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     14m                    kubelet          Node ha-244475 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    14m                    kubelet          Node ha-244475 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  14m                    kubelet          Node ha-244475 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 14m                    kubelet          Starting kubelet.
	  Normal   RegisteredNode           14m                    node-controller  Node ha-244475 event: Registered Node ha-244475 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-244475 status is now: NodeReady
	  Normal   RegisteredNode           13m                    node-controller  Node ha-244475 event: Registered Node ha-244475 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-244475 event: Registered Node ha-244475 in Controller
	  Warning  ContainerGCFailed        4m11s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             3m34s (x3 over 4m23s)  kubelet          Node ha-244475 status is now: NodeNotReady
	  Normal   RegisteredNode           2m31s                  node-controller  Node ha-244475 event: Registered Node ha-244475 in Controller
	  Normal   RegisteredNode           2m7s                   node-controller  Node ha-244475 event: Registered Node ha-244475 in Controller
	  Normal   RegisteredNode           58s                    node-controller  Node ha-244475 event: Registered Node ha-244475 in Controller
	
	
	Name:               ha-244475-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-244475-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=ha-244475
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T10_39_47_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:39:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-244475-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:52:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:51:17 +0000   Mon, 16 Sep 2024 10:50:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:51:17 +0000   Mon, 16 Sep 2024 10:50:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:51:17 +0000   Mon, 16 Sep 2024 10:50:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:51:17 +0000   Mon, 16 Sep 2024 10:50:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.222
	  Hostname:    ha-244475-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 bfb45c96351d4aafade2443c380b5343
	  System UUID:                bfb45c96-351d-4aaf-ade2-443c380b5343
	  Boot ID:                    d493ff2b-8d16-4f12-976a-cc277283240e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-t6fmb                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-244475-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-xvp82                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-244475-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-244475-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-t454b                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-244475-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-244475-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 114s                   kube-proxy       
	  Normal  Starting                 13m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)      kubelet          Node ha-244475-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)      kubelet          Node ha-244475-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)      kubelet          Node ha-244475-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m                    node-controller  Node ha-244475-m02 event: Registered Node ha-244475-m02 in Controller
	  Normal  RegisteredNode           13m                    node-controller  Node ha-244475-m02 event: Registered Node ha-244475-m02 in Controller
	  Normal  RegisteredNode           11m                    node-controller  Node ha-244475-m02 event: Registered Node ha-244475-m02 in Controller
	  Normal  NodeNotReady             9m42s                  node-controller  Node ha-244475-m02 status is now: NodeNotReady
	  Normal  Starting                 2m55s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m55s (x8 over 2m55s)  kubelet          Node ha-244475-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m55s (x8 over 2m55s)  kubelet          Node ha-244475-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m55s (x7 over 2m55s)  kubelet          Node ha-244475-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m55s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m31s                  node-controller  Node ha-244475-m02 event: Registered Node ha-244475-m02 in Controller
	  Normal  RegisteredNode           2m7s                   node-controller  Node ha-244475-m02 event: Registered Node ha-244475-m02 in Controller
	  Normal  RegisteredNode           58s                    node-controller  Node ha-244475-m02 event: Registered Node ha-244475-m02 in Controller
	
	
	Name:               ha-244475-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-244475-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=ha-244475
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T10_42_00_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:41:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-244475-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:52:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:52:36 +0000   Mon, 16 Sep 2024 10:52:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:52:36 +0000   Mon, 16 Sep 2024 10:52:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:52:36 +0000   Mon, 16 Sep 2024 10:52:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:52:36 +0000   Mon, 16 Sep 2024 10:52:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.110
	  Hostname:    ha-244475-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 42083a2d4bb24e16b292c8834cbe5824
	  System UUID:                42083a2d-4bb2-4e16-b292-c8834cbe5824
	  Boot ID:                    17ea4c88-a812-44b1-a1ac-94e19366fcfe
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-2v2jd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         17s
	  kube-system                 kindnet-dflt4              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-proxy-kp7hv           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 23s                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeHasSufficientMemory  11m (x2 over 11m)  kubelet          Node ha-244475-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x2 over 11m)  kubelet          Node ha-244475-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x2 over 11m)  kubelet          Node ha-244475-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                node-controller  Node ha-244475-m04 event: Registered Node ha-244475-m04 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-244475-m04 event: Registered Node ha-244475-m04 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-244475-m04 event: Registered Node ha-244475-m04 in Controller
	  Normal   NodeReady                10m                kubelet          Node ha-244475-m04 status is now: NodeReady
	  Normal   RegisteredNode           2m31s              node-controller  Node ha-244475-m04 event: Registered Node ha-244475-m04 in Controller
	  Normal   RegisteredNode           2m7s               node-controller  Node ha-244475-m04 event: Registered Node ha-244475-m04 in Controller
	  Normal   NodeNotReady             111s               node-controller  Node ha-244475-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           58s                node-controller  Node ha-244475-m04 event: Registered Node ha-244475-m04 in Controller
	  Normal   Starting                 27s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  27s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 27s                kubelet          Node ha-244475-m04 has been rebooted, boot id: 17ea4c88-a812-44b1-a1ac-94e19366fcfe
	  Normal   NodeHasSufficientMemory  27s (x2 over 27s)  kubelet          Node ha-244475-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    27s (x2 over 27s)  kubelet          Node ha-244475-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     27s (x2 over 27s)  kubelet          Node ha-244475-m04 status is now: NodeHasSufficientPID
	  Normal   NodeReady                27s                kubelet          Node ha-244475-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +9.139824] systemd-fstab-generator[589]: Ignoring "noauto" option for root device
	[  +0.054792] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058211] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.173707] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +0.144769] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +0.277555] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +3.915448] systemd-fstab-generator[751]: Ignoring "noauto" option for root device
	[  +3.568561] systemd-fstab-generator[882]: Ignoring "noauto" option for root device
	[  +0.067639] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.970048] systemd-fstab-generator[1302]: Ignoring "noauto" option for root device
	[  +0.087420] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.371465] kauditd_printk_skb: 21 callbacks suppressed
	[Sep16 10:39] kauditd_printk_skb: 38 callbacks suppressed
	[ +41.620280] kauditd_printk_skb: 28 callbacks suppressed
	[Sep16 10:49] systemd-fstab-generator[3624]: Ignoring "noauto" option for root device
	[  +0.157093] systemd-fstab-generator[3636]: Ignoring "noauto" option for root device
	[  +0.177936] systemd-fstab-generator[3650]: Ignoring "noauto" option for root device
	[  +0.142086] systemd-fstab-generator[3662]: Ignoring "noauto" option for root device
	[  +0.308892] systemd-fstab-generator[3690]: Ignoring "noauto" option for root device
	[  +5.722075] systemd-fstab-generator[3786]: Ignoring "noauto" option for root device
	[  +0.089630] kauditd_printk_skb: 100 callbacks suppressed
	[  +6.518650] kauditd_printk_skb: 12 callbacks suppressed
	[Sep16 10:50] kauditd_printk_skb: 85 callbacks suppressed
	[  +6.619080] kauditd_printk_skb: 1 callbacks suppressed
	[ +18.373360] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [268d2527b9c98518a83f871d1bd34bfb7b48d7a8bc4732dac191726f6e049791] <==
	{"level":"info","ts":"2024-09-16T10:51:57.355389Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"683e1d26ac7e3123","remote-peer-id":"e16a89b9eb3a3bb1"}
	{"level":"info","ts":"2024-09-16T10:51:57.355480Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"683e1d26ac7e3123","remote-peer-id":"e16a89b9eb3a3bb1"}
	{"level":"info","ts":"2024-09-16T10:51:57.374066Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"683e1d26ac7e3123","to":"e16a89b9eb3a3bb1","stream-type":"stream Message"}
	{"level":"info","ts":"2024-09-16T10:51:57.374564Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"683e1d26ac7e3123","remote-peer-id":"e16a89b9eb3a3bb1"}
	{"level":"info","ts":"2024-09-16T10:51:57.374470Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"683e1d26ac7e3123","to":"e16a89b9eb3a3bb1","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-09-16T10:51:57.374795Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"683e1d26ac7e3123","remote-peer-id":"e16a89b9eb3a3bb1"}
	{"level":"warn","ts":"2024-09-16T10:52:49.569761Z","caller":"embed/config_logging.go:170","msg":"rejected connection on client endpoint","remote-addr":"192.168.39.127:33316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2024-09-16T10:52:49.591821Z","caller":"embed/config_logging.go:170","msg":"rejected connection on client endpoint","remote-addr":"192.168.39.127:33328","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-09-16T10:52:49.604460Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"683e1d26ac7e3123 switched to configuration voters=(7511473280440480035 17357719710197446810)"}
	{"level":"info","ts":"2024-09-16T10:52:49.606526Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"3f32d84448c0bab8","local-member-id":"683e1d26ac7e3123","removed-remote-peer-id":"e16a89b9eb3a3bb1","removed-remote-peer-urls":["https://192.168.39.127:2380"]}
	{"level":"info","ts":"2024-09-16T10:52:49.606658Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"e16a89b9eb3a3bb1"}
	{"level":"warn","ts":"2024-09-16T10:52:49.606988Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"e16a89b9eb3a3bb1"}
	{"level":"info","ts":"2024-09-16T10:52:49.607057Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"e16a89b9eb3a3bb1"}
	{"level":"warn","ts":"2024-09-16T10:52:49.607462Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"e16a89b9eb3a3bb1"}
	{"level":"info","ts":"2024-09-16T10:52:49.607621Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"e16a89b9eb3a3bb1"}
	{"level":"info","ts":"2024-09-16T10:52:49.607729Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"683e1d26ac7e3123","remote-peer-id":"e16a89b9eb3a3bb1"}
	{"level":"warn","ts":"2024-09-16T10:52:49.608064Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"683e1d26ac7e3123","remote-peer-id":"e16a89b9eb3a3bb1","error":"context canceled"}
	{"level":"warn","ts":"2024-09-16T10:52:49.608120Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"e16a89b9eb3a3bb1","error":"failed to read e16a89b9eb3a3bb1 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-09-16T10:52:49.608150Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"683e1d26ac7e3123","remote-peer-id":"e16a89b9eb3a3bb1"}
	{"level":"warn","ts":"2024-09-16T10:52:49.608291Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"683e1d26ac7e3123","remote-peer-id":"e16a89b9eb3a3bb1","error":"http: read on closed response body"}
	{"level":"info","ts":"2024-09-16T10:52:49.608352Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"683e1d26ac7e3123","remote-peer-id":"e16a89b9eb3a3bb1"}
	{"level":"info","ts":"2024-09-16T10:52:49.608369Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"e16a89b9eb3a3bb1"}
	{"level":"info","ts":"2024-09-16T10:52:49.608382Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"683e1d26ac7e3123","removed-remote-peer-id":"e16a89b9eb3a3bb1"}
	{"level":"warn","ts":"2024-09-16T10:52:49.620732Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"683e1d26ac7e3123","remote-peer-id-stream-handler":"683e1d26ac7e3123","remote-peer-id-from":"e16a89b9eb3a3bb1"}
	{"level":"warn","ts":"2024-09-16T10:52:49.629988Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"683e1d26ac7e3123","remote-peer-id-stream-handler":"683e1d26ac7e3123","remote-peer-id-from":"e16a89b9eb3a3bb1"}
	
	
	==> etcd [308650af833f609d658eacf1b2b1c8baa29f17b904e878b9f27e62fe4fb823d3] <==
	2024/09/16 10:48:05 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/09/16 10:48:05 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-16T10:48:05.622322Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.19:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:48:05.622416Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.19:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-16T10:48:05.622669Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"683e1d26ac7e3123","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-09-16T10:48:05.622908Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"f0e3021c7d1d789a"}
	{"level":"info","ts":"2024-09-16T10:48:05.622994Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"f0e3021c7d1d789a"}
	{"level":"info","ts":"2024-09-16T10:48:05.623020Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"f0e3021c7d1d789a"}
	{"level":"info","ts":"2024-09-16T10:48:05.623337Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"683e1d26ac7e3123","remote-peer-id":"f0e3021c7d1d789a"}
	{"level":"info","ts":"2024-09-16T10:48:05.623408Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"683e1d26ac7e3123","remote-peer-id":"f0e3021c7d1d789a"}
	{"level":"info","ts":"2024-09-16T10:48:05.623563Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"683e1d26ac7e3123","remote-peer-id":"f0e3021c7d1d789a"}
	{"level":"info","ts":"2024-09-16T10:48:05.623672Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"f0e3021c7d1d789a"}
	{"level":"info","ts":"2024-09-16T10:48:05.623695Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"e16a89b9eb3a3bb1"}
	{"level":"info","ts":"2024-09-16T10:48:05.623706Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"e16a89b9eb3a3bb1"}
	{"level":"info","ts":"2024-09-16T10:48:05.623788Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"e16a89b9eb3a3bb1"}
	{"level":"info","ts":"2024-09-16T10:48:05.623941Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"683e1d26ac7e3123","remote-peer-id":"e16a89b9eb3a3bb1"}
	{"level":"info","ts":"2024-09-16T10:48:05.624005Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"683e1d26ac7e3123","remote-peer-id":"e16a89b9eb3a3bb1"}
	{"level":"info","ts":"2024-09-16T10:48:05.624132Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"683e1d26ac7e3123","remote-peer-id":"e16a89b9eb3a3bb1"}
	{"level":"info","ts":"2024-09-16T10:48:05.624228Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"e16a89b9eb3a3bb1"}
	{"level":"info","ts":"2024-09-16T10:48:05.627878Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.19:2380"}
	{"level":"warn","ts":"2024-09-16T10:48:05.627901Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"8.872293306s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-09-16T10:48:05.627999Z","caller":"traceutil/trace.go:171","msg":"trace[183528881] range","detail":"{range_begin:; range_end:; }","duration":"8.872408831s","start":"2024-09-16T10:47:56.755582Z","end":"2024-09-16T10:48:05.627991Z","steps":["trace[183528881] 'agreement among raft nodes before linearized reading'  (duration: 8.872291909s)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:48:05.628057Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.19:2380"}
	{"level":"info","ts":"2024-09-16T10:48:05.628086Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-244475","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.19:2380"],"advertise-client-urls":["https://192.168.39.19:2379"]}
	{"level":"error","ts":"2024-09-16T10:48:05.628066Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[-]linearizable_read failed: etcdserver: server stopped\n[+]data_corruption ok\n[+]serializable_read ok\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	
	
	==> kernel <==
	 10:53:03 up 14 min,  0 users,  load average: 0.28, 0.50, 0.32
	Linux ha-244475 5.10.207 #1 SMP Sun Sep 15 20:39:46 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [6dd41088c822947b73a621121e25131324d1b60468f8da2736a5e5abd945b74f] <==
	I0916 10:52:32.121737       1 main.go:322] Node ha-244475-m03 has CIDR [10.244.2.0/24] 
	I0916 10:52:32.121815       1 main.go:295] Handling node with IPs: map[192.168.39.110:{}]
	I0916 10:52:32.121824       1 main.go:322] Node ha-244475-m04 has CIDR [10.244.3.0/24] 
	I0916 10:52:42.116655       1 main.go:295] Handling node with IPs: map[192.168.39.110:{}]
	I0916 10:52:42.116744       1 main.go:322] Node ha-244475-m04 has CIDR [10.244.3.0/24] 
	I0916 10:52:42.116965       1 main.go:295] Handling node with IPs: map[192.168.39.19:{}]
	I0916 10:52:42.117022       1 main.go:299] handling current node
	I0916 10:52:42.117039       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0916 10:52:42.117047       1 main.go:322] Node ha-244475-m02 has CIDR [10.244.1.0/24] 
	I0916 10:52:42.117151       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0916 10:52:42.117179       1 main.go:322] Node ha-244475-m03 has CIDR [10.244.2.0/24] 
	I0916 10:52:52.112325       1 main.go:295] Handling node with IPs: map[192.168.39.19:{}]
	I0916 10:52:52.112597       1 main.go:299] handling current node
	I0916 10:52:52.112659       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0916 10:52:52.112680       1 main.go:322] Node ha-244475-m02 has CIDR [10.244.1.0/24] 
	I0916 10:52:52.112898       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0916 10:52:52.112938       1 main.go:322] Node ha-244475-m03 has CIDR [10.244.2.0/24] 
	I0916 10:52:52.113044       1 main.go:295] Handling node with IPs: map[192.168.39.110:{}]
	I0916 10:52:52.113069       1 main.go:322] Node ha-244475-m04 has CIDR [10.244.3.0/24] 
	I0916 10:53:02.113325       1 main.go:295] Handling node with IPs: map[192.168.39.19:{}]
	I0916 10:53:02.113467       1 main.go:299] handling current node
	I0916 10:53:02.113568       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0916 10:53:02.113613       1 main.go:322] Node ha-244475-m02 has CIDR [10.244.1.0/24] 
	I0916 10:53:02.113798       1 main.go:295] Handling node with IPs: map[192.168.39.110:{}]
	I0916 10:53:02.113836       1 main.go:322] Node ha-244475-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [ac63170bf5bb3e138ee2902ee9d3245bf86aad0f7cd971de1989c1c8d4b08913] <==
	I0916 10:47:29.301243       1 main.go:322] Node ha-244475-m02 has CIDR [10.244.1.0/24] 
	I0916 10:47:39.301433       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0916 10:47:39.301612       1 main.go:322] Node ha-244475-m03 has CIDR [10.244.2.0/24] 
	I0916 10:47:39.301782       1 main.go:295] Handling node with IPs: map[192.168.39.110:{}]
	I0916 10:47:39.301808       1 main.go:322] Node ha-244475-m04 has CIDR [10.244.3.0/24] 
	I0916 10:47:39.301866       1 main.go:295] Handling node with IPs: map[192.168.39.19:{}]
	I0916 10:47:39.301885       1 main.go:299] handling current node
	I0916 10:47:39.301906       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0916 10:47:39.301922       1 main.go:322] Node ha-244475-m02 has CIDR [10.244.1.0/24] 
	I0916 10:47:49.306310       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0916 10:47:49.306426       1 main.go:322] Node ha-244475-m02 has CIDR [10.244.1.0/24] 
	I0916 10:47:49.306666       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0916 10:47:49.306700       1 main.go:322] Node ha-244475-m03 has CIDR [10.244.2.0/24] 
	I0916 10:47:49.306797       1 main.go:295] Handling node with IPs: map[192.168.39.110:{}]
	I0916 10:47:49.306818       1 main.go:322] Node ha-244475-m04 has CIDR [10.244.3.0/24] 
	I0916 10:47:49.306872       1 main.go:295] Handling node with IPs: map[192.168.39.19:{}]
	I0916 10:47:49.306891       1 main.go:299] handling current node
	I0916 10:47:59.300973       1 main.go:295] Handling node with IPs: map[192.168.39.19:{}]
	I0916 10:47:59.301025       1 main.go:299] handling current node
	I0916 10:47:59.301052       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0916 10:47:59.301057       1 main.go:322] Node ha-244475-m02 has CIDR [10.244.1.0/24] 
	I0916 10:47:59.301226       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0916 10:47:59.301291       1 main.go:322] Node ha-244475-m03 has CIDR [10.244.2.0/24] 
	I0916 10:47:59.301343       1 main.go:295] Handling node with IPs: map[192.168.39.110:{}]
	I0916 10:47:59.301365       1 main.go:322] Node ha-244475-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [39bee169a2affe6d509924bc94b0ee90196af6c589d0fe7eb3f18c48559c446d] <==
	I0916 10:50:32.786625       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0916 10:50:32.786726       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0916 10:50:32.870064       1 shared_informer.go:320] Caches are synced for configmaps
	I0916 10:50:32.874061       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0916 10:50:32.874378       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0916 10:50:32.874471       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0916 10:50:32.878579       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0916 10:50:32.878785       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0916 10:50:32.880011       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0916 10:50:32.880175       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0916 10:50:32.880401       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0916 10:50:32.880639       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0916 10:50:32.881359       1 aggregator.go:171] initial CRD sync complete...
	I0916 10:50:32.881448       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 10:50:32.881854       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 10:50:32.883414       1 cache.go:39] Caches are synced for autoregister controller
	I0916 10:50:32.885333       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0916 10:50:32.885366       1 policy_source.go:224] refreshing policies
	W0916 10:50:32.891110       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.222]
	I0916 10:50:32.892579       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 10:50:32.900075       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0916 10:50:32.909150       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0916 10:50:32.968716       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 10:50:33.778275       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0916 10:50:34.130805       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.19 192.168.39.222]
	
	
	==> kube-apiserver [c692c6a18e99d2673e412ffd521c76f2a0da04c77d38fec5ed2cb8cb05c3e3f6] <==
	I0916 10:49:51.303646       1 options.go:228] external host was not specified, using 192.168.39.19
	I0916 10:49:51.307873       1 server.go:142] Version: v1.31.1
	I0916 10:49:51.309597       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:49:51.809274       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0916 10:49:51.821629       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0916 10:49:51.821673       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0916 10:49:51.821966       1 instance.go:232] Using reconciler: lease
	I0916 10:49:51.822581       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	W0916 10:50:11.808376       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0916 10:50:11.808683       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0916 10:50:11.823458       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0916 10:50:11.823720       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [392523616ed483d70489493ff810c7c00e2c3679d91ed33cace5c42354ac85af] <==
	I0916 10:52:36.563752       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m04"
	I0916 10:52:36.585332       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m04"
	I0916 10:52:37.001827       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m04"
	I0916 10:52:46.201760       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m03"
	I0916 10:52:46.223609       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m03"
	I0916 10:52:46.344689       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="98.19157ms"
	I0916 10:52:46.431117       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="86.35094ms"
	I0916 10:52:46.449427       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="18.252945ms"
	I0916 10:52:46.449972       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="94.649µs"
	I0916 10:52:46.526897       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="59.981835ms"
	I0916 10:52:46.527434       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="247.112µs"
	I0916 10:52:46.549044       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="93.428µs"
	I0916 10:52:47.985974       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="149.387µs"
	I0916 10:52:48.003289       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="406.695µs"
	I0916 10:52:48.018280       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="67.503µs"
	I0916 10:52:48.025970       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="63.049µs"
	I0916 10:52:48.027277       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="118.41µs"
	I0916 10:52:48.048100       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="96.199µs"
	I0916 10:52:48.378350       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="57.652µs"
	I0916 10:52:48.567000       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="42.687µs"
	I0916 10:52:48.571266       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="163.058µs"
	I0916 10:52:50.007313       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="12.371891ms"
	I0916 10:52:50.007657       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="65.188µs"
	I0916 10:53:00.718414       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m03"
	I0916 10:53:00.719449       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-244475-m04"
	
	
	==> kube-controller-manager [c7904b48af0d517bf275eac462d37e6f93a48771be3452312f485bacfe1b3d19] <==
	I0916 10:50:23.787937       1 serving.go:386] Generated self-signed cert in-memory
	I0916 10:50:24.055825       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0916 10:50:24.055866       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:50:24.057300       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0916 10:50:24.057381       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0916 10:50:24.057621       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0916 10:50:24.057818       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0916 10:50:34.061987       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-proxy [2ef7bc6ba17087872de21708e4087c86e92125885155c397d03bed2863bb52ed] <==
	E0916 10:50:33.213866       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-244475\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0916 10:50:33.214328       1 server.go:646] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	E0916 10:50:33.214618       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:50:33.254822       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0916 10:50:33.254898       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0916 10:50:33.254936       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:50:33.257890       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:50:33.258306       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:50:33.258342       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:50:33.259973       1 config.go:199] "Starting service config controller"
	I0916 10:50:33.260036       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:50:33.260076       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:50:33.260102       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:50:33.260908       1 config.go:328] "Starting node config controller"
	I0916 10:50:33.260937       1 shared_informer.go:313] Waiting for caches to sync for node config
	E0916 10:50:36.287039       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0916 10:50:36.287174       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 10:50:36.287297       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 10:50:36.286395       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-244475&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 10:50:36.287954       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-244475&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 10:50:36.287411       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 10:50:36.288233       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	I0916 10:50:37.161128       1 shared_informer.go:320] Caches are synced for node config
	I0916 10:50:37.161227       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:50:37.560852       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [6e6d69b26d5c1088c047248a212bd53f5e13828d4e0a7c2156e0ddc88ac76ccf] <==
	E0916 10:46:51.645690       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-244475&resourceVersion=1888\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 10:46:51.645901       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1889": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 10:46:51.645976       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1889\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 10:46:51.646054       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1835": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 10:46:51.646084       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1835\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 10:46:58.174791       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-244475&resourceVersion=1888": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 10:46:58.175000       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-244475&resourceVersion=1888\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 10:46:58.175108       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1835": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 10:46:58.175150       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1835\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 10:46:58.174878       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1889": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 10:46:58.175188       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1889\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 10:47:07.389661       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1889": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 10:47:07.390930       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1889\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 10:47:07.391344       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-244475&resourceVersion=1888": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 10:47:07.391805       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-244475&resourceVersion=1888\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 10:47:10.463790       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1835": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 10:47:10.464149       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1835\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 10:47:25.822162       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1889": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 10:47:25.822276       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1889\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 10:47:31.966129       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-244475&resourceVersion=1888": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 10:47:31.966261       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-244475&resourceVersion=1888\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 10:47:38.109734       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1835": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 10:47:38.109808       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1835\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 10:47:59.614733       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-244475&resourceVersion=1888": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 10:47:59.615035       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-244475&resourceVersion=1888\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-scheduler [6c0110ceab6a6b07ad42ced3733baf8bab2d1cafffcbb49d7f182984734ea704] <==
	W0916 10:50:27.106134       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.19:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.19:8443: connect: connection refused
	E0916 10:50:27.106286       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.19:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.19:8443: connect: connection refused" logger="UnhandledError"
	W0916 10:50:27.917399       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.19:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.19:8443: connect: connection refused
	E0916 10:50:27.917565       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.39.19:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.19:8443: connect: connection refused" logger="UnhandledError"
	W0916 10:50:29.353853       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.19:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.19:8443: connect: connection refused
	E0916 10:50:29.353900       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.19:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.19:8443: connect: connection refused" logger="UnhandledError"
	W0916 10:50:29.362689       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.19:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.19:8443: connect: connection refused
	E0916 10:50:29.362727       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.19:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.19:8443: connect: connection refused" logger="UnhandledError"
	W0916 10:50:29.539820       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.19:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.19:8443: connect: connection refused
	E0916 10:50:29.539945       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.39.19:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.19:8443: connect: connection refused" logger="UnhandledError"
	W0916 10:50:30.172233       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.19:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.19:8443: connect: connection refused
	E0916 10:50:30.172367       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.19:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.19:8443: connect: connection refused" logger="UnhandledError"
	W0916 10:50:30.247772       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.19:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.19:8443: connect: connection refused
	E0916 10:50:30.247816       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.19:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.19:8443: connect: connection refused" logger="UnhandledError"
	W0916 10:50:32.800369       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 10:50:32.801683       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:50:32.801573       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 10:50:32.801914       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:50:32.801624       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 10:50:32.802040       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0916 10:50:43.636980       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0916 10:52:48.001271       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-2v2jd\": pod busybox-7dff88458-2v2jd is already assigned to node \"ha-244475-m04\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-2v2jd" node="ha-244475-m04"
	E0916 10:52:48.002577       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod ca60db2e-7e01-4fc9-ac6c-724930269681(default/busybox-7dff88458-2v2jd) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-2v2jd"
	E0916 10:52:48.002701       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-2v2jd\": pod busybox-7dff88458-2v2jd is already assigned to node \"ha-244475-m04\"" pod="default/busybox-7dff88458-2v2jd"
	I0916 10:52:48.002757       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-2v2jd" node="ha-244475-m04"
	
	
	==> kube-scheduler [a0223669288e248ff9be2473e59f09af3156f136f2627463151e0bc52191ddfb] <==
	E0916 10:38:50.992011       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:38:51.039856       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 10:38:51.039907       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:38:51.293677       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 10:38:51.293783       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0916 10:38:53.269920       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 10:41:27.446213       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="8e6b78c3-ae2c-4cff-b2cf-fd0f08d53fa5" pod="default/busybox-7dff88458-7bhqg" assumedNode="ha-244475-m03" currentNode="ha-244475-m02"
	E0916 10:41:27.456948       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-7bhqg\": pod busybox-7dff88458-7bhqg is already assigned to node \"ha-244475-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-7bhqg" node="ha-244475-m02"
	E0916 10:41:27.457071       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 8e6b78c3-ae2c-4cff-b2cf-fd0f08d53fa5(default/busybox-7dff88458-7bhqg) was assumed on ha-244475-m02 but assigned to ha-244475-m03" pod="default/busybox-7dff88458-7bhqg"
	E0916 10:41:27.457108       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-7bhqg\": pod busybox-7dff88458-7bhqg is already assigned to node \"ha-244475-m03\"" pod="default/busybox-7dff88458-7bhqg"
	I0916 10:41:27.457173       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-7bhqg" node="ha-244475-m03"
	E0916 10:47:54.234292       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0916 10:47:55.101205       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0916 10:47:55.243248       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0916 10:47:56.250917       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0916 10:47:56.495628       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0916 10:47:57.140623       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0916 10:47:57.973671       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0916 10:47:58.028997       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0916 10:48:01.831431       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0916 10:48:02.285792       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0916 10:48:02.396636       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E0916 10:48:02.676356       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0916 10:48:02.796464       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0916 10:48:05.532040       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 16 10:51:52 ha-244475 kubelet[1309]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 16 10:51:52 ha-244475 kubelet[1309]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 16 10:51:52 ha-244475 kubelet[1309]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 16 10:51:52 ha-244475 kubelet[1309]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 16 10:51:52 ha-244475 kubelet[1309]: E0916 10:51:52.794964    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483912792713573,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:51:52 ha-244475 kubelet[1309]: E0916 10:51:52.795001    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483912792713573,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:52:02 ha-244475 kubelet[1309]: E0916 10:52:02.798106    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483922797600676,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:52:02 ha-244475 kubelet[1309]: E0916 10:52:02.798162    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483922797600676,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:52:12 ha-244475 kubelet[1309]: E0916 10:52:12.801213    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483932800328990,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:52:12 ha-244475 kubelet[1309]: E0916 10:52:12.801259    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483932800328990,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:52:22 ha-244475 kubelet[1309]: E0916 10:52:22.803704    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483942803252630,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:52:22 ha-244475 kubelet[1309]: E0916 10:52:22.804115    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483942803252630,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:52:32 ha-244475 kubelet[1309]: E0916 10:52:32.807821    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483952807181003,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:52:32 ha-244475 kubelet[1309]: E0916 10:52:32.807869    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483952807181003,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:52:42 ha-244475 kubelet[1309]: E0916 10:52:42.814819    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483962809980508,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:52:42 ha-244475 kubelet[1309]: E0916 10:52:42.814886    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483962809980508,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:52:52 ha-244475 kubelet[1309]: E0916 10:52:52.621260    1309 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 16 10:52:52 ha-244475 kubelet[1309]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 16 10:52:52 ha-244475 kubelet[1309]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 16 10:52:52 ha-244475 kubelet[1309]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 16 10:52:52 ha-244475 kubelet[1309]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 16 10:52:52 ha-244475 kubelet[1309]: E0916 10:52:52.817306    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483972816753398,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:52:52 ha-244475 kubelet[1309]: E0916 10:52:52.817549    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483972816753398,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:53:02 ha-244475 kubelet[1309]: E0916 10:53:02.819325    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483982818475790,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:53:02 ha-244475 kubelet[1309]: E0916 10:53:02.819361    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726483982818475790,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0916 10:53:02.590146   30209 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19651-3851/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-244475 -n ha-244475
helpers_test.go:261: (dbg) Run:  kubectl --context ha-244475 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context ha-244475 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (536.254µs)
helpers_test.go:263: kubectl --context ha-244475 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (19.03s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (141.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-244475 stop -v=7 --alsologtostderr: exit status 82 (2m0.456270831s)

                                                
                                                
-- stdout --
	* Stopping node "ha-244475-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 10:53:05.041537   30327 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:53:05.041808   30327 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:53:05.041817   30327 out.go:358] Setting ErrFile to fd 2...
	I0916 10:53:05.041821   30327 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:53:05.041989   30327 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3851/.minikube/bin
	I0916 10:53:05.042246   30327 out.go:352] Setting JSON to false
	I0916 10:53:05.042322   30327 mustload.go:65] Loading cluster: ha-244475
	I0916 10:53:05.042659   30327 config.go:182] Loaded profile config "ha-244475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:53:05.042757   30327 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/config.json ...
	I0916 10:53:05.042953   30327 mustload.go:65] Loading cluster: ha-244475
	I0916 10:53:05.043142   30327 config.go:182] Loaded profile config "ha-244475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:53:05.043170   30327 stop.go:39] StopHost: ha-244475-m04
	I0916 10:53:05.043582   30327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:53:05.043625   30327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:53:05.059371   30327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38677
	I0916 10:53:05.059891   30327 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:53:05.060463   30327 main.go:141] libmachine: Using API Version  1
	I0916 10:53:05.060487   30327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:53:05.060847   30327 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:53:05.063146   30327 out.go:177] * Stopping node "ha-244475-m04"  ...
	I0916 10:53:05.064319   30327 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0916 10:53:05.064351   30327 main.go:141] libmachine: (ha-244475-m04) Calling .DriverName
	I0916 10:53:05.064545   30327 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0916 10:53:05.064572   30327 main.go:141] libmachine: (ha-244475-m04) Calling .GetSSHHostname
	I0916 10:53:05.067091   30327 main.go:141] libmachine: (ha-244475-m04) DBG | domain ha-244475-m04 has defined MAC address 52:54:00:d1:b8:75 in network mk-ha-244475
	I0916 10:53:05.067461   30327 main.go:141] libmachine: (ha-244475-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:b8:75", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:52:31 +0000 UTC Type:0 Mac:52:54:00:d1:b8:75 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-244475-m04 Clientid:01:52:54:00:d1:b8:75}
	I0916 10:53:05.067491   30327 main.go:141] libmachine: (ha-244475-m04) DBG | domain ha-244475-m04 has defined IP address 192.168.39.110 and MAC address 52:54:00:d1:b8:75 in network mk-ha-244475
	I0916 10:53:05.067607   30327 main.go:141] libmachine: (ha-244475-m04) Calling .GetSSHPort
	I0916 10:53:05.067774   30327 main.go:141] libmachine: (ha-244475-m04) Calling .GetSSHKeyPath
	I0916 10:53:05.067910   30327 main.go:141] libmachine: (ha-244475-m04) Calling .GetSSHUsername
	I0916 10:53:05.068049   30327 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m04/id_rsa Username:docker}
	I0916 10:53:05.152303   30327 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0916 10:53:05.205661   30327 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0916 10:53:05.259477   30327 main.go:141] libmachine: Stopping "ha-244475-m04"...
	I0916 10:53:05.259522   30327 main.go:141] libmachine: (ha-244475-m04) Calling .GetState
	I0916 10:53:05.261180   30327 main.go:141] libmachine: (ha-244475-m04) Calling .Stop
	I0916 10:53:05.264679   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 0/120
	I0916 10:53:06.265910   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 1/120
	I0916 10:53:07.267558   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 2/120
	I0916 10:53:08.268768   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 3/120
	I0916 10:53:09.270239   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 4/120
	I0916 10:53:10.272202   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 5/120
	I0916 10:53:11.273505   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 6/120
	I0916 10:53:12.274778   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 7/120
	I0916 10:53:13.275952   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 8/120
	I0916 10:53:14.277140   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 9/120
	I0916 10:53:15.279093   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 10/120
	I0916 10:53:16.280439   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 11/120
	I0916 10:53:17.281527   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 12/120
	I0916 10:53:18.283587   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 13/120
	I0916 10:53:19.284649   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 14/120
	I0916 10:53:20.286341   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 15/120
	I0916 10:53:21.287663   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 16/120
	I0916 10:53:22.288997   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 17/120
	I0916 10:53:23.290437   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 18/120
	I0916 10:53:24.291740   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 19/120
	I0916 10:53:25.293825   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 20/120
	I0916 10:53:26.295491   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 21/120
	I0916 10:53:27.296739   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 22/120
	I0916 10:53:28.298687   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 23/120
	I0916 10:53:29.300021   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 24/120
	I0916 10:53:30.302411   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 25/120
	I0916 10:53:31.303759   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 26/120
	I0916 10:53:32.305184   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 27/120
	I0916 10:53:33.306591   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 28/120
	I0916 10:53:34.307852   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 29/120
	I0916 10:53:35.310081   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 30/120
	I0916 10:53:36.311559   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 31/120
	I0916 10:53:37.312744   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 32/120
	I0916 10:53:38.314038   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 33/120
	I0916 10:53:39.315637   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 34/120
	I0916 10:53:40.317266   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 35/120
	I0916 10:53:41.318403   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 36/120
	I0916 10:53:42.319765   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 37/120
	I0916 10:53:43.321201   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 38/120
	I0916 10:53:44.322470   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 39/120
	I0916 10:53:45.324423   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 40/120
	I0916 10:53:46.325893   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 41/120
	I0916 10:53:47.327402   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 42/120
	I0916 10:53:48.328980   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 43/120
	I0916 10:53:49.331114   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 44/120
	I0916 10:53:50.333347   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 45/120
	I0916 10:53:51.334754   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 46/120
	I0916 10:53:52.336573   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 47/120
	I0916 10:53:53.337724   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 48/120
	I0916 10:53:54.338926   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 49/120
	I0916 10:53:55.340992   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 50/120
	I0916 10:53:56.342355   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 51/120
	I0916 10:53:57.343669   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 52/120
	I0916 10:53:58.344822   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 53/120
	I0916 10:53:59.345989   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 54/120
	I0916 10:54:00.347777   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 55/120
	I0916 10:54:01.348985   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 56/120
	I0916 10:54:02.350279   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 57/120
	I0916 10:54:03.351550   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 58/120
	I0916 10:54:04.353661   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 59/120
	I0916 10:54:05.355562   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 60/120
	I0916 10:54:06.356768   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 61/120
	I0916 10:54:07.358117   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 62/120
	I0916 10:54:08.359310   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 63/120
	I0916 10:54:09.360497   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 64/120
	I0916 10:54:10.361755   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 65/120
	I0916 10:54:11.363154   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 66/120
	I0916 10:54:12.364415   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 67/120
	I0916 10:54:13.365676   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 68/120
	I0916 10:54:14.367569   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 69/120
	I0916 10:54:15.369370   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 70/120
	I0916 10:54:16.370589   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 71/120
	I0916 10:54:17.372531   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 72/120
	I0916 10:54:18.374412   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 73/120
	I0916 10:54:19.375682   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 74/120
	I0916 10:54:20.377390   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 75/120
	I0916 10:54:21.378545   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 76/120
	I0916 10:54:22.379852   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 77/120
	I0916 10:54:23.381278   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 78/120
	I0916 10:54:24.383446   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 79/120
	I0916 10:54:25.385446   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 80/120
	I0916 10:54:26.387824   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 81/120
	I0916 10:54:27.389141   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 82/120
	I0916 10:54:28.390493   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 83/120
	I0916 10:54:29.391846   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 84/120
	I0916 10:54:30.393434   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 85/120
	I0916 10:54:31.394605   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 86/120
	I0916 10:54:32.395730   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 87/120
	I0916 10:54:33.397573   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 88/120
	I0916 10:54:34.399473   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 89/120
	I0916 10:54:35.401323   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 90/120
	I0916 10:54:36.402634   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 91/120
	I0916 10:54:37.403902   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 92/120
	I0916 10:54:38.405440   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 93/120
	I0916 10:54:39.407503   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 94/120
	I0916 10:54:40.408738   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 95/120
	I0916 10:54:41.410281   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 96/120
	I0916 10:54:42.411539   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 97/120
	I0916 10:54:43.412883   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 98/120
	I0916 10:54:44.414273   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 99/120
	I0916 10:54:45.416770   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 100/120
	I0916 10:54:46.418882   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 101/120
	I0916 10:54:47.420167   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 102/120
	I0916 10:54:48.422090   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 103/120
	I0916 10:54:49.423604   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 104/120
	I0916 10:54:50.425632   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 105/120
	I0916 10:54:51.427895   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 106/120
	I0916 10:54:52.429403   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 107/120
	I0916 10:54:53.431561   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 108/120
	I0916 10:54:54.433289   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 109/120
	I0916 10:54:55.435131   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 110/120
	I0916 10:54:56.436374   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 111/120
	I0916 10:54:57.437690   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 112/120
	I0916 10:54:58.438946   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 113/120
	I0916 10:54:59.440319   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 114/120
	I0916 10:55:00.442203   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 115/120
	I0916 10:55:01.443431   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 116/120
	I0916 10:55:02.444688   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 117/120
	I0916 10:55:03.445928   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 118/120
	I0916 10:55:04.447458   30327 main.go:141] libmachine: (ha-244475-m04) Waiting for machine to stop 119/120
	I0916 10:55:05.448742   30327 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0916 10:55:05.448788   30327 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0916 10:55:05.450655   30327 out.go:201] 
	W0916 10:55:05.451996   30327 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0916 10:55:05.452013   30327 out.go:270] * 
	* 
	W0916 10:55:05.454248   30327 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 10:55:05.455506   30327 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-244475 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 status -v=7 --alsologtostderr
E0916 10:55:08.821168   11203 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-244475 status -v=7 --alsologtostderr: exit status 3 (18.888860351s)

                                                
                                                
-- stdout --
	ha-244475
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-244475-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-244475-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 10:55:05.498796   30755 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:55:05.498925   30755 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:55:05.498936   30755 out.go:358] Setting ErrFile to fd 2...
	I0916 10:55:05.498942   30755 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:55:05.499124   30755 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3851/.minikube/bin
	I0916 10:55:05.499311   30755 out.go:352] Setting JSON to false
	I0916 10:55:05.499347   30755 mustload.go:65] Loading cluster: ha-244475
	I0916 10:55:05.499423   30755 notify.go:220] Checking for updates...
	I0916 10:55:05.499808   30755 config.go:182] Loaded profile config "ha-244475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:55:05.499826   30755 status.go:255] checking status of ha-244475 ...
	I0916 10:55:05.500236   30755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:55:05.500348   30755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:55:05.519682   30755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41921
	I0916 10:55:05.520115   30755 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:55:05.520679   30755 main.go:141] libmachine: Using API Version  1
	I0916 10:55:05.520701   30755 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:55:05.521045   30755 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:55:05.521255   30755 main.go:141] libmachine: (ha-244475) Calling .GetState
	I0916 10:55:05.522865   30755 status.go:330] ha-244475 host status = "Running" (err=<nil>)
	I0916 10:55:05.522882   30755 host.go:66] Checking if "ha-244475" exists ...
	I0916 10:55:05.523147   30755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:55:05.523187   30755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:55:05.538696   30755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36865
	I0916 10:55:05.539093   30755 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:55:05.539601   30755 main.go:141] libmachine: Using API Version  1
	I0916 10:55:05.539627   30755 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:55:05.539923   30755 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:55:05.540128   30755 main.go:141] libmachine: (ha-244475) Calling .GetIP
	I0916 10:55:05.542610   30755 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:55:05.543059   30755 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:55:05.543079   30755 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:55:05.543221   30755 host.go:66] Checking if "ha-244475" exists ...
	I0916 10:55:05.543506   30755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:55:05.543548   30755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:55:05.558098   30755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45265
	I0916 10:55:05.558535   30755 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:55:05.559041   30755 main.go:141] libmachine: Using API Version  1
	I0916 10:55:05.559068   30755 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:55:05.559411   30755 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:55:05.559579   30755 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:55:05.559754   30755 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:55:05.559780   30755 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:55:05.562713   30755 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:55:05.563132   30755 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:55:05.563159   30755 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:55:05.563278   30755 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:55:05.563436   30755 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:55:05.563554   30755 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:55:05.563654   30755 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/id_rsa Username:docker}
	I0916 10:55:05.649752   30755 ssh_runner.go:195] Run: systemctl --version
	I0916 10:55:05.656625   30755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:55:05.674380   30755 kubeconfig.go:125] found "ha-244475" server: "https://192.168.39.254:8443"
	I0916 10:55:05.674419   30755 api_server.go:166] Checking apiserver status ...
	I0916 10:55:05.674452   30755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:55:05.698152   30755 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4962/cgroup
	W0916 10:55:05.707769   30755 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4962/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0916 10:55:05.707841   30755 ssh_runner.go:195] Run: ls
	I0916 10:55:05.712394   30755 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0916 10:55:05.718923   30755 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0916 10:55:05.718944   30755 status.go:422] ha-244475 apiserver status = Running (err=<nil>)
	I0916 10:55:05.718952   30755 status.go:257] ha-244475 status: &{Name:ha-244475 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 10:55:05.718968   30755 status.go:255] checking status of ha-244475-m02 ...
	I0916 10:55:05.719256   30755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:55:05.719287   30755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:55:05.734813   30755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41353
	I0916 10:55:05.735298   30755 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:55:05.735839   30755 main.go:141] libmachine: Using API Version  1
	I0916 10:55:05.735865   30755 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:55:05.736174   30755 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:55:05.736372   30755 main.go:141] libmachine: (ha-244475-m02) Calling .GetState
	I0916 10:55:05.737929   30755 status.go:330] ha-244475-m02 host status = "Running" (err=<nil>)
	I0916 10:55:05.737943   30755 host.go:66] Checking if "ha-244475-m02" exists ...
	I0916 10:55:05.738260   30755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:55:05.738310   30755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:55:05.753214   30755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41917
	I0916 10:55:05.753749   30755 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:55:05.754197   30755 main.go:141] libmachine: Using API Version  1
	I0916 10:55:05.754221   30755 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:55:05.754595   30755 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:55:05.754814   30755 main.go:141] libmachine: (ha-244475-m02) Calling .GetIP
	I0916 10:55:05.757698   30755 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:55:05.758113   30755 main.go:141] libmachine: (ha-244475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:fc:95", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:49:55 +0000 UTC Type:0 Mac:52:54:00:ed:fc:95 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-244475-m02 Clientid:01:52:54:00:ed:fc:95}
	I0916 10:55:05.758142   30755 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:55:05.758273   30755 host.go:66] Checking if "ha-244475-m02" exists ...
	I0916 10:55:05.758570   30755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:55:05.758618   30755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:55:05.773774   30755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36795
	I0916 10:55:05.774211   30755 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:55:05.774771   30755 main.go:141] libmachine: Using API Version  1
	I0916 10:55:05.774804   30755 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:55:05.775192   30755 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:55:05.775405   30755 main.go:141] libmachine: (ha-244475-m02) Calling .DriverName
	I0916 10:55:05.775636   30755 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:55:05.775663   30755 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHHostname
	I0916 10:55:05.778442   30755 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:55:05.778824   30755 main.go:141] libmachine: (ha-244475-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:fc:95", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:49:55 +0000 UTC Type:0 Mac:52:54:00:ed:fc:95 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-244475-m02 Clientid:01:52:54:00:ed:fc:95}
	I0916 10:55:05.778851   30755 main.go:141] libmachine: (ha-244475-m02) DBG | domain ha-244475-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ed:fc:95 in network mk-ha-244475
	I0916 10:55:05.779013   30755 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHPort
	I0916 10:55:05.779151   30755 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHKeyPath
	I0916 10:55:05.779292   30755 main.go:141] libmachine: (ha-244475-m02) Calling .GetSSHUsername
	I0916 10:55:05.779406   30755 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m02/id_rsa Username:docker}
	I0916 10:55:05.865749   30755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:55:05.884695   30755 kubeconfig.go:125] found "ha-244475" server: "https://192.168.39.254:8443"
	I0916 10:55:05.884721   30755 api_server.go:166] Checking apiserver status ...
	I0916 10:55:05.884761   30755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:55:05.901237   30755 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1373/cgroup
	W0916 10:55:05.912029   30755 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1373/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0916 10:55:05.912108   30755 ssh_runner.go:195] Run: ls
	I0916 10:55:05.916634   30755 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0916 10:55:05.921002   30755 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0916 10:55:05.921025   30755 status.go:422] ha-244475-m02 apiserver status = Running (err=<nil>)
	I0916 10:55:05.921035   30755 status.go:257] ha-244475-m02 status: &{Name:ha-244475-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 10:55:05.921053   30755 status.go:255] checking status of ha-244475-m04 ...
	I0916 10:55:05.921369   30755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:55:05.921411   30755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:55:05.936286   30755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39403
	I0916 10:55:05.936726   30755 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:55:05.937257   30755 main.go:141] libmachine: Using API Version  1
	I0916 10:55:05.937277   30755 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:55:05.937583   30755 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:55:05.937796   30755 main.go:141] libmachine: (ha-244475-m04) Calling .GetState
	I0916 10:55:05.939585   30755 status.go:330] ha-244475-m04 host status = "Running" (err=<nil>)
	I0916 10:55:05.939599   30755 host.go:66] Checking if "ha-244475-m04" exists ...
	I0916 10:55:05.939872   30755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:55:05.939903   30755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:55:05.954599   30755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37001
	I0916 10:55:05.954990   30755 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:55:05.955427   30755 main.go:141] libmachine: Using API Version  1
	I0916 10:55:05.955446   30755 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:55:05.955787   30755 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:55:05.955977   30755 main.go:141] libmachine: (ha-244475-m04) Calling .GetIP
	I0916 10:55:05.958764   30755 main.go:141] libmachine: (ha-244475-m04) DBG | domain ha-244475-m04 has defined MAC address 52:54:00:d1:b8:75 in network mk-ha-244475
	I0916 10:55:05.959198   30755 main.go:141] libmachine: (ha-244475-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:b8:75", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:52:31 +0000 UTC Type:0 Mac:52:54:00:d1:b8:75 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-244475-m04 Clientid:01:52:54:00:d1:b8:75}
	I0916 10:55:05.959242   30755 main.go:141] libmachine: (ha-244475-m04) DBG | domain ha-244475-m04 has defined IP address 192.168.39.110 and MAC address 52:54:00:d1:b8:75 in network mk-ha-244475
	I0916 10:55:05.959353   30755 host.go:66] Checking if "ha-244475-m04" exists ...
	I0916 10:55:05.959624   30755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:55:05.959657   30755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:55:05.974542   30755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36937
	I0916 10:55:05.975089   30755 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:55:05.975507   30755 main.go:141] libmachine: Using API Version  1
	I0916 10:55:05.975528   30755 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:55:05.975825   30755 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:55:05.975996   30755 main.go:141] libmachine: (ha-244475-m04) Calling .DriverName
	I0916 10:55:05.976144   30755 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:55:05.976167   30755 main.go:141] libmachine: (ha-244475-m04) Calling .GetSSHHostname
	I0916 10:55:05.978763   30755 main.go:141] libmachine: (ha-244475-m04) DBG | domain ha-244475-m04 has defined MAC address 52:54:00:d1:b8:75 in network mk-ha-244475
	I0916 10:55:05.979183   30755 main.go:141] libmachine: (ha-244475-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:b8:75", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:52:31 +0000 UTC Type:0 Mac:52:54:00:d1:b8:75 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-244475-m04 Clientid:01:52:54:00:d1:b8:75}
	I0916 10:55:05.979213   30755 main.go:141] libmachine: (ha-244475-m04) DBG | domain ha-244475-m04 has defined IP address 192.168.39.110 and MAC address 52:54:00:d1:b8:75 in network mk-ha-244475
	I0916 10:55:05.979325   30755 main.go:141] libmachine: (ha-244475-m04) Calling .GetSSHPort
	I0916 10:55:05.979470   30755 main.go:141] libmachine: (ha-244475-m04) Calling .GetSSHKeyPath
	I0916 10:55:05.979654   30755 main.go:141] libmachine: (ha-244475-m04) Calling .GetSSHUsername
	I0916 10:55:05.979784   30755 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475-m04/id_rsa Username:docker}
	W0916 10:55:24.345326   30755 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.110:22: connect: no route to host
	W0916 10:55:24.345438   30755 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.110:22: connect: no route to host
	E0916 10:55:24.345467   30755 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.110:22: connect: no route to host
	I0916 10:55:24.345479   30755 status.go:257] ha-244475-m04 status: &{Name:ha-244475-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0916 10:55:24.345502   30755 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.110:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-244475 status -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-244475 -n ha-244475
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-244475 logs -n 25: (1.726448789s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-244475 ssh -n ha-244475-m02 sudo cat                                          | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | /home/docker/cp-test_ha-244475-m03_ha-244475-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-244475 cp ha-244475-m03:/home/docker/cp-test.txt                              | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475-m04:/home/docker/cp-test_ha-244475-m03_ha-244475-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-244475 ssh -n                                                                 | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-244475 ssh -n ha-244475-m04 sudo cat                                          | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | /home/docker/cp-test_ha-244475-m03_ha-244475-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-244475 cp testdata/cp-test.txt                                                | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-244475 ssh -n                                                                 | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-244475 cp ha-244475-m04:/home/docker/cp-test.txt                              | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1630339340/001/cp-test_ha-244475-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-244475 ssh -n                                                                 | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-244475 cp ha-244475-m04:/home/docker/cp-test.txt                              | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475:/home/docker/cp-test_ha-244475-m04_ha-244475.txt                       |           |         |         |                     |                     |
	| ssh     | ha-244475 ssh -n                                                                 | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-244475 ssh -n ha-244475 sudo cat                                              | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | /home/docker/cp-test_ha-244475-m04_ha-244475.txt                                 |           |         |         |                     |                     |
	| cp      | ha-244475 cp ha-244475-m04:/home/docker/cp-test.txt                              | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475-m02:/home/docker/cp-test_ha-244475-m04_ha-244475-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-244475 ssh -n                                                                 | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-244475 ssh -n ha-244475-m02 sudo cat                                          | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | /home/docker/cp-test_ha-244475-m04_ha-244475-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-244475 cp ha-244475-m04:/home/docker/cp-test.txt                              | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475-m03:/home/docker/cp-test_ha-244475-m04_ha-244475-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-244475 ssh -n                                                                 | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-244475 ssh -n ha-244475-m03 sudo cat                                          | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | /home/docker/cp-test_ha-244475-m04_ha-244475-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-244475 node stop m02 -v=7                                                     | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-244475 node start m02 -v=7                                                    | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:45 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-244475 -v=7                                                           | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-244475 -v=7                                                                | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-244475 --wait=true -v=7                                                    | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:48 UTC | 16 Sep 24 10:52 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-244475                                                                | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:52 UTC |                     |
	| node    | ha-244475 node delete m03 -v=7                                                   | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:52 UTC | 16 Sep 24 10:53 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-244475 stop -v=7                                                              | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:53 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:48:04
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:48:04.629611   28382 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:48:04.629751   28382 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:48:04.629762   28382 out.go:358] Setting ErrFile to fd 2...
	I0916 10:48:04.629769   28382 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:48:04.629972   28382 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3851/.minikube/bin
	I0916 10:48:04.630523   28382 out.go:352] Setting JSON to false
	I0916 10:48:04.631433   28382 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1835,"bootTime":1726481850,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:48:04.631527   28382 start.go:139] virtualization: kvm guest
	I0916 10:48:04.633814   28382 out.go:177] * [ha-244475] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:48:04.635027   28382 notify.go:220] Checking for updates...
	I0916 10:48:04.635032   28382 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:48:04.636319   28382 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:48:04.637618   28382 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3851/kubeconfig
	I0916 10:48:04.638937   28382 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 10:48:04.640222   28382 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:48:04.641463   28382 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:48:04.643097   28382 config.go:182] Loaded profile config "ha-244475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:48:04.643194   28382 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:48:04.643664   28382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:48:04.643720   28382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:48:04.660057   28382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33021
	I0916 10:48:04.660593   28382 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:48:04.661160   28382 main.go:141] libmachine: Using API Version  1
	I0916 10:48:04.661198   28382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:48:04.661616   28382 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:48:04.661813   28382 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:48:04.697772   28382 out.go:177] * Using the kvm2 driver based on existing profile
	I0916 10:48:04.699530   28382 start.go:297] selected driver: kvm2
	I0916 10:48:04.699547   28382 start.go:901] validating driver "kvm2" against &{Name:ha-244475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-244475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.110 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:48:04.699689   28382 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:48:04.700019   28382 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:48:04.700102   28382 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19651-3851/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0916 10:48:04.715527   28382 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0916 10:48:04.716227   28382 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:48:04.716263   28382 cni.go:84] Creating CNI manager for ""
	I0916 10:48:04.716312   28382 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0916 10:48:04.716367   28382 start.go:340] cluster config:
	{Name:ha-244475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-244475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.110 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:48:04.716493   28382 iso.go:125] acquiring lock: {Name:mk8165b793e44378487d96b0de120a258f46e187 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:48:04.718937   28382 out.go:177] * Starting "ha-244475" primary control-plane node in "ha-244475" cluster
	I0916 10:48:04.720335   28382 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:48:04.720368   28382 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0916 10:48:04.720379   28382 cache.go:56] Caching tarball of preloaded images
	I0916 10:48:04.720467   28382 preload.go:172] Found /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 10:48:04.720479   28382 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 10:48:04.720587   28382 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/config.json ...
	I0916 10:48:04.720801   28382 start.go:360] acquireMachinesLock for ha-244475: {Name:mk413037138532b69f77e412c3960796c09316f1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:48:04.720863   28382 start.go:364] duration metric: took 41.906µs to acquireMachinesLock for "ha-244475"
	I0916 10:48:04.720882   28382 start.go:96] Skipping create...Using existing machine configuration
	I0916 10:48:04.720887   28382 fix.go:54] fixHost starting: 
	I0916 10:48:04.721282   28382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:48:04.721314   28382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:48:04.735751   28382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44525
	I0916 10:48:04.736248   28382 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:48:04.736739   28382 main.go:141] libmachine: Using API Version  1
	I0916 10:48:04.736771   28382 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:48:04.737094   28382 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:48:04.737279   28382 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:48:04.737431   28382 main.go:141] libmachine: (ha-244475) Calling .GetState
	I0916 10:48:04.738886   28382 fix.go:112] recreateIfNeeded on ha-244475: state=Running err=<nil>
	W0916 10:48:04.738909   28382 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 10:48:04.740961   28382 out.go:177] * Updating the running kvm2 "ha-244475" VM ...
	I0916 10:48:04.742320   28382 machine.go:93] provisionDockerMachine start ...
	I0916 10:48:04.742348   28382 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:48:04.742548   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:48:04.744733   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:48:04.745067   28382 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:48:04.745093   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:48:04.745218   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:48:04.745382   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:48:04.745523   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:48:04.745653   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:48:04.745797   28382 main.go:141] libmachine: Using SSH client type: native
	I0916 10:48:04.745999   28382 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0916 10:48:04.746012   28382 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 10:48:04.866195   28382 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-244475
	
	I0916 10:48:04.866247   28382 main.go:141] libmachine: (ha-244475) Calling .GetMachineName
	I0916 10:48:04.866489   28382 buildroot.go:166] provisioning hostname "ha-244475"
	I0916 10:48:04.866520   28382 main.go:141] libmachine: (ha-244475) Calling .GetMachineName
	I0916 10:48:04.866739   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:48:04.869344   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:48:04.869776   28382 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:48:04.869798   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:48:04.869969   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:48:04.870127   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:48:04.870289   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:48:04.870419   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:48:04.870579   28382 main.go:141] libmachine: Using SSH client type: native
	I0916 10:48:04.870744   28382 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0916 10:48:04.870756   28382 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-244475 && echo "ha-244475" | sudo tee /etc/hostname
	I0916 10:48:05.005091   28382 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-244475
	
	I0916 10:48:05.005118   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:48:05.007741   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:48:05.008168   28382 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:48:05.008192   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:48:05.008399   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:48:05.008580   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:48:05.008720   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:48:05.008818   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:48:05.008958   28382 main.go:141] libmachine: Using SSH client type: native
	I0916 10:48:05.009165   28382 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0916 10:48:05.009182   28382 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-244475' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-244475/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-244475' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:48:05.126206   28382 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 10:48:05.126232   28382 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3851/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3851/.minikube}
	I0916 10:48:05.126289   28382 buildroot.go:174] setting up certificates
	I0916 10:48:05.126297   28382 provision.go:84] configureAuth start
	I0916 10:48:05.126306   28382 main.go:141] libmachine: (ha-244475) Calling .GetMachineName
	I0916 10:48:05.126557   28382 main.go:141] libmachine: (ha-244475) Calling .GetIP
	I0916 10:48:05.128973   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:48:05.129406   28382 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:48:05.129434   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:48:05.129547   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:48:05.131762   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:48:05.132175   28382 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:48:05.132198   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:48:05.132394   28382 provision.go:143] copyHostCerts
	I0916 10:48:05.132459   28382 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem
	I0916 10:48:05.132520   28382 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem, removing ...
	I0916 10:48:05.132531   28382 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem
	I0916 10:48:05.132608   28382 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem (1082 bytes)
	I0916 10:48:05.132692   28382 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem
	I0916 10:48:05.132709   28382 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem, removing ...
	I0916 10:48:05.132716   28382 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem
	I0916 10:48:05.132739   28382 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem (1123 bytes)
	I0916 10:48:05.132778   28382 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem
	I0916 10:48:05.132795   28382 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem, removing ...
	I0916 10:48:05.132803   28382 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem
	I0916 10:48:05.132824   28382 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem (1679 bytes)
	I0916 10:48:05.132867   28382 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem org=jenkins.ha-244475 san=[127.0.0.1 192.168.39.19 ha-244475 localhost minikube]
	I0916 10:48:05.230030   28382 provision.go:177] copyRemoteCerts
	I0916 10:48:05.230090   28382 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:48:05.230124   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:48:05.232727   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:48:05.232996   28382 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:48:05.233021   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:48:05.233228   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:48:05.233411   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:48:05.233854   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:48:05.233994   28382 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/id_rsa Username:docker}
	I0916 10:48:05.321368   28382 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 10:48:05.321442   28382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 10:48:05.348483   28382 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 10:48:05.348579   28382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0916 10:48:05.376610   28382 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 10:48:05.376680   28382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 10:48:05.404845   28382 provision.go:87] duration metric: took 278.532484ms to configureAuth
	I0916 10:48:05.404874   28382 buildroot.go:189] setting minikube options for container-runtime
	I0916 10:48:05.405088   28382 config.go:182] Loaded profile config "ha-244475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:48:05.405170   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:48:05.407821   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:48:05.408170   28382 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:48:05.408200   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:48:05.408395   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:48:05.408568   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:48:05.408725   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:48:05.408860   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:48:05.409024   28382 main.go:141] libmachine: Using SSH client type: native
	I0916 10:48:05.409256   28382 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0916 10:48:05.409278   28382 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 10:49:36.136821   28382 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 10:49:36.136864   28382 machine.go:96] duration metric: took 1m31.394528146s to provisionDockerMachine
	I0916 10:49:36.136875   28382 start.go:293] postStartSetup for "ha-244475" (driver="kvm2")
	I0916 10:49:36.136885   28382 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:49:36.136901   28382 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:49:36.137195   28382 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:49:36.137226   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:49:36.140151   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:49:36.140600   28382 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:49:36.140633   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:49:36.140776   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:49:36.140974   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:49:36.141162   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:49:36.141297   28382 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/id_rsa Username:docker}
	I0916 10:49:36.229105   28382 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:49:36.233446   28382 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 10:49:36.233468   28382 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3851/.minikube/addons for local assets ...
	I0916 10:49:36.233521   28382 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3851/.minikube/files for local assets ...
	I0916 10:49:36.233595   28382 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem -> 112032.pem in /etc/ssl/certs
	I0916 10:49:36.233605   28382 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem -> /etc/ssl/certs/112032.pem
	I0916 10:49:36.233712   28382 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 10:49:36.243379   28382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem --> /etc/ssl/certs/112032.pem (1708 bytes)
	I0916 10:49:36.268390   28382 start.go:296] duration metric: took 131.49973ms for postStartSetup
	I0916 10:49:36.268431   28382 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:49:36.268704   28382 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0916 10:49:36.268740   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:49:36.271523   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:49:36.272009   28382 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:49:36.272032   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:49:36.272177   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:49:36.272383   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:49:36.272533   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:49:36.272679   28382 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/id_rsa Username:docker}
	W0916 10:49:36.359589   28382 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0916 10:49:36.359614   28382 fix.go:56] duration metric: took 1m31.638727744s for fixHost
	I0916 10:49:36.359635   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:49:36.362024   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:49:36.362345   28382 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:49:36.362379   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:49:36.362437   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:49:36.362603   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:49:36.362772   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:49:36.362934   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:49:36.363065   28382 main.go:141] libmachine: Using SSH client type: native
	I0916 10:49:36.363232   28382 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0916 10:49:36.363242   28382 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 10:49:36.478148   28382 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726483776.445441321
	
	I0916 10:49:36.478178   28382 fix.go:216] guest clock: 1726483776.445441321
	I0916 10:49:36.478185   28382 fix.go:229] Guest: 2024-09-16 10:49:36.445441321 +0000 UTC Remote: 2024-09-16 10:49:36.359621457 +0000 UTC m=+91.765044121 (delta=85.819864ms)
	I0916 10:49:36.478209   28382 fix.go:200] guest clock delta is within tolerance: 85.819864ms
	I0916 10:49:36.478215   28382 start.go:83] releasing machines lock for "ha-244475", held for 1m31.757340687s
	I0916 10:49:36.478246   28382 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:49:36.478464   28382 main.go:141] libmachine: (ha-244475) Calling .GetIP
	I0916 10:49:36.480946   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:49:36.481304   28382 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:49:36.481330   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:49:36.481512   28382 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:49:36.481984   28382 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:49:36.482250   28382 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:49:36.482367   28382 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:49:36.482411   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:49:36.482451   28382 ssh_runner.go:195] Run: cat /version.json
	I0916 10:49:36.482475   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:49:36.485017   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:49:36.485084   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:49:36.485349   28382 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:49:36.485372   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:49:36.485438   28382 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:49:36.485457   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:49:36.485482   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:49:36.485617   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:49:36.485706   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:49:36.485783   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:49:36.485830   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:49:36.485895   28382 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/id_rsa Username:docker}
	I0916 10:49:36.485941   28382 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:49:36.486045   28382 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/id_rsa Username:docker}
	I0916 10:49:36.566130   28382 ssh_runner.go:195] Run: systemctl --version
	I0916 10:49:36.595210   28382 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 10:49:36.759288   28382 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0916 10:49:36.765378   28382 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 10:49:36.765456   28382 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:49:36.775556   28382 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0916 10:49:36.775578   28382 start.go:495] detecting cgroup driver to use...
	I0916 10:49:36.775647   28382 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 10:49:36.791549   28382 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 10:49:36.805408   28382 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:49:36.805456   28382 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:49:36.819777   28382 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:49:36.834041   28382 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:49:37.006927   28382 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:49:37.154158   28382 docker.go:233] disabling docker service ...
	I0916 10:49:37.154233   28382 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:49:37.172237   28382 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:49:37.187140   28382 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:49:37.335249   28382 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:49:37.485651   28382 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:49:37.500949   28382 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:49:37.520699   28382 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 10:49:37.520778   28382 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:49:37.532711   28382 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 10:49:37.532779   28382 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:49:37.545325   28382 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:49:37.557100   28382 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:49:37.568745   28382 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:49:37.580983   28382 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:49:37.592790   28382 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:49:37.604166   28382 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
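The sed edits above (pause image, cgroup driver, conmon cgroup, and the unprivileged-port sysctl) all target /etc/crio/crio.conf.d/02-crio.conf. A minimal sketch of checking their combined effect on that drop-in, assuming the file layout used by the minikube guest image; the expected values in the comments are inferred from the commands above, not captured in this log:

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf
	# expected, per the edits above (a sketch):
	#   pause_image = "registry.k8s.io/pause:3.10"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",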
	I0916 10:49:37.615655   28382 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:49:37.625740   28382 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:49:37.636174   28382 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:49:37.785177   28382 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 10:49:42.995342   28382 ssh_runner.go:235] Completed: sudo systemctl restart crio: (5.210133239s)
	I0916 10:49:42.995373   28382 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 10:49:42.995414   28382 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 10:49:43.001465   28382 start.go:563] Will wait 60s for crictl version
	I0916 10:49:43.001535   28382 ssh_runner.go:195] Run: which crictl
	I0916 10:49:43.005982   28382 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:49:43.050539   28382 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0916 10:49:43.050628   28382 ssh_runner.go:195] Run: crio --version
	I0916 10:49:43.079811   28382 ssh_runner.go:195] Run: crio --version
	I0916 10:49:43.111377   28382 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0916 10:49:43.112594   28382 main.go:141] libmachine: (ha-244475) Calling .GetIP
	I0916 10:49:43.115110   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:49:43.115409   28382 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:49:43.115437   28382 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:49:43.115643   28382 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0916 10:49:43.120664   28382 kubeadm.go:883] updating cluster {Name:ha-244475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-244475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.110 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 10:49:43.120799   28382 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:49:43.120843   28382 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:49:43.174107   28382 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 10:49:43.174132   28382 crio.go:433] Images already preloaded, skipping extraction
	I0916 10:49:43.174191   28382 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:49:43.209963   28382 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 10:49:43.209985   28382 cache_images.go:84] Images are preloaded, skipping loading
	I0916 10:49:43.209995   28382 kubeadm.go:934] updating node { 192.168.39.19 8443 v1.31.1 crio true true} ...
	I0916 10:49:43.210109   28382 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-244475 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.19
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-244475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 10:49:43.210169   28382 ssh_runner.go:195] Run: crio config
	I0916 10:49:43.257466   28382 cni.go:84] Creating CNI manager for ""
	I0916 10:49:43.257492   28382 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0916 10:49:43.257503   28382 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 10:49:43.257526   28382 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.19 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-244475 NodeName:ha-244475 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.19"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.19 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 10:49:43.257697   28382 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.19
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-244475"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.19
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.19"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
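The kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml.new a few steps later. A minimal sketch of sanity-checking such a file before it is applied, assuming the kubeadm binary staged under /var/lib/minikube/binaries/v1.31.1; the dry-run invocation is an assumption, not something this test performs:

	sudo /var/lib/minikube/binaries/v1.31.1/kubeadm init \
	    --config /var/tmp/minikube/kubeadm.yaml.new --dry-run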
	
	I0916 10:49:43.257719   28382 kube-vip.go:115] generating kube-vip config ...
	I0916 10:49:43.257765   28382 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0916 10:49:43.269960   28382 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0916 10:49:43.270094   28382 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
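Once this manifest is copied into /etc/kubernetes/manifests (see the kube-vip.yaml scp step below), the kubelet runs kube-vip as a static pod and the load-balanced VIP from the config should come up on the configured interface. A minimal sketch for verifying that on the node, assuming eth0 and 192.168.39.254 from the config above; these checks are not part of the captured log:

	sudo crictl ps --name kube-vip
	ip addr show eth0 | grep 192.168.39.254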
	I0916 10:49:43.270162   28382 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:49:43.280474   28382 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:49:43.280563   28382 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0916 10:49:43.290395   28382 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0916 10:49:43.307234   28382 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:49:43.324085   28382 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0916 10:49:43.340586   28382 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0916 10:49:43.357729   28382 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0916 10:49:43.363278   28382 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:49:43.510012   28382 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:49:43.525689   28382 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475 for IP: 192.168.39.19
	I0916 10:49:43.525721   28382 certs.go:194] generating shared ca certs ...
	I0916 10:49:43.525742   28382 certs.go:226] acquiring lock for ca certs: {Name:mkc5eb18b90c7d501da61b378999fd65b785fd5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:49:43.525902   28382 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key
	I0916 10:49:43.525940   28382 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key
	I0916 10:49:43.525952   28382 certs.go:256] generating profile certs ...
	I0916 10:49:43.526054   28382 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/client.key
	I0916 10:49:43.526107   28382 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key.3a628471
	I0916 10:49:43.526130   28382 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt.3a628471 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.19 192.168.39.222 192.168.39.127 192.168.39.254]
	I0916 10:49:43.615058   28382 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt.3a628471 ...
	I0916 10:49:43.615087   28382 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt.3a628471: {Name:mkdc1b4f93c1d0cf9ed7c134427449b54c119ec7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:49:43.615252   28382 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key.3a628471 ...
	I0916 10:49:43.615262   28382 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key.3a628471: {Name:mk44f6b8e3053318a7781a0ded64dfd0c38e8870 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:49:43.615328   28382 certs.go:381] copying /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt.3a628471 -> /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt
	I0916 10:49:43.615496   28382 certs.go:385] copying /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key.3a628471 -> /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key
	I0916 10:49:43.615629   28382 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.key
	I0916 10:49:43.615643   28382 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 10:49:43.615655   28382 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 10:49:43.615668   28382 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:49:43.615681   28382 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:49:43.615693   28382 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 10:49:43.615707   28382 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 10:49:43.615722   28382 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 10:49:43.615734   28382 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 10:49:43.615788   28382 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem (1338 bytes)
	W0916 10:49:43.615821   28382 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203_empty.pem, impossibly tiny 0 bytes
	I0916 10:49:43.615830   28382 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:49:43.615855   28382 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem (1082 bytes)
	I0916 10:49:43.615876   28382 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:49:43.615897   28382 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem (1679 bytes)
	I0916 10:49:43.615932   28382 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem (1708 bytes)
	I0916 10:49:43.615961   28382 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:49:43.615976   28382 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem -> /usr/share/ca-certificates/11203.pem
	I0916 10:49:43.615988   28382 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem -> /usr/share/ca-certificates/112032.pem
	I0916 10:49:43.616545   28382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:49:43.642550   28382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:49:43.666588   28382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:49:43.690999   28382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:49:43.715060   28382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0916 10:49:43.738836   28382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 10:49:43.762339   28382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:49:43.785649   28382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 10:49:43.809948   28382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:49:43.833383   28382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem --> /usr/share/ca-certificates/11203.pem (1338 bytes)
	I0916 10:49:43.856725   28382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem --> /usr/share/ca-certificates/112032.pem (1708 bytes)
	I0916 10:49:43.879989   28382 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 10:49:43.897035   28382 ssh_runner.go:195] Run: openssl version
	I0916 10:49:43.902840   28382 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:49:43.914400   28382 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:49:43.919013   28382 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:49:43.919075   28382 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:49:43.925137   28382 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 10:49:43.935417   28382 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11203.pem && ln -fs /usr/share/ca-certificates/11203.pem /etc/ssl/certs/11203.pem"
	I0916 10:49:43.946645   28382 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11203.pem
	I0916 10:49:43.951098   28382 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11203.pem
	I0916 10:49:43.951143   28382 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11203.pem
	I0916 10:49:43.956794   28382 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11203.pem /etc/ssl/certs/51391683.0"
	I0916 10:49:43.966620   28382 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112032.pem && ln -fs /usr/share/ca-certificates/112032.pem /etc/ssl/certs/112032.pem"
	I0916 10:49:43.977946   28382 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112032.pem
	I0916 10:49:43.982493   28382 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112032.pem
	I0916 10:49:43.982550   28382 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112032.pem
	I0916 10:49:43.988245   28382 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112032.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 10:49:43.998642   28382 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:49:44.002978   28382 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0916 10:49:44.008612   28382 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0916 10:49:44.014304   28382 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0916 10:49:44.019867   28382 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0916 10:49:44.025979   28382 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0916 10:49:44.032073   28382 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0916 10:49:44.037852   28382 kubeadm.go:392] StartCluster: {Name:ha-244475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-244475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.110 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:49:44.037973   28382 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 10:49:44.038017   28382 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 10:49:44.076228   28382 cri.go:89] found id: "acb3a9815a7d7d96bd398b1d8222524d573639530c35a82d60c88262c7f2a589"
	I0916 10:49:44.076248   28382 cri.go:89] found id: "539537ea4f2684d0513678c23e52eda87a874c01787a81c1ca77e0451fdb5b36"
	I0916 10:49:44.076252   28382 cri.go:89] found id: "996c12a7b1565febe9557aad65d9754e33c44d4a64678026aef5b63f3d99f1e0"
	I0916 10:49:44.076255   28382 cri.go:89] found id: "034030626ec0248410f35cc75f1a050ff672de87669017788a692eecfb974eb3"
	I0916 10:49:44.076257   28382 cri.go:89] found id: "7f78c5e4a3a25c53eb3051665aed0aed847ccf5bc052d51eb080f44bbb956465"
	I0916 10:49:44.076260   28382 cri.go:89] found id: "b16f64da09faea0d2ff3154541845e96ee1e6da2b018e77e618dcf0a3d246a99"
	I0916 10:49:44.076263   28382 cri.go:89] found id: "ac63170bf5bb3e138ee2902ee9d3245bf86aad0f7cd971de1989c1c8d4b08913"
	I0916 10:49:44.076265   28382 cri.go:89] found id: "6e6d69b26d5c1088c047248a212bd53f5e13828d4e0a7c2156e0ddc88ac76ccf"
	I0916 10:49:44.076267   28382 cri.go:89] found id: "62c031e0ed0a9dd545dae65ca505f6ee4aa741ac7ab305ffd396ddb0a1faa045"
	I0916 10:49:44.076272   28382 cri.go:89] found id: "a0223669288e248ff9be2473e59f09af3156f136f2627463151e0bc52191ddfb"
	I0916 10:49:44.076275   28382 cri.go:89] found id: "13162d4bf94f734f5e68305ac96200c2f81bebc9b5ffbbfb3b0862980bc16fa1"
	I0916 10:49:44.076289   28382 cri.go:89] found id: "308650af833f609d658eacf1b2b1c8baa29f17b904e878b9f27e62fe4fb823d3"
	I0916 10:49:44.076292   28382 cri.go:89] found id: "f16e87fb57b2b25fed520499a45d5fd8f1e01c96a747662b637d0b1cc2e56113"
	I0916 10:49:44.076295   28382 cri.go:89] found id: ""
	I0916 10:49:44.076334   28382 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Sep 16 10:55:25 ha-244475 crio[3700]: time="2024-09-16 10:55:25.003230917Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484125003203668,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=af51953f-1ebd-4667-b336-76b6d28fe596 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:55:25 ha-244475 crio[3700]: time="2024-09-16 10:55:25.003915921Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=edb7cfca-762a-4290-aaec-2bd4f5c3268f name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:55:25 ha-244475 crio[3700]: time="2024-09-16 10:55:25.004001336Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=edb7cfca-762a-4290-aaec-2bd4f5c3268f name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:55:25 ha-244475 crio[3700]: time="2024-09-16 10:55:25.004491104Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:392523616ed483d70489493ff810c7c00e2c3679d91ed33cace5c42354ac85af,PodSandboxId:0e78d323319d6c4d2718135672cccb5ed02b63104781d8d532f3c876fc39b96a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726483854603167187,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0485b752bb66b84c639fb8d5b648be4a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91491eba4d33b0a26fd738ad6f63f6deaf5b4c730f037eaf6dc4908905fde9f9,PodSandboxId:3e70bdcf959532b2a9ea007abd776b8630e6092c8c5ae86c83bdd614c0a55f3c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726483838590275528,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e1264f7-2197-4821-8238-82fac849b145,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39bee169a2affe6d509924bc94b0ee90196af6c589d0fe7eb3f18c48559c446d,PodSandboxId:35ef4979f7d506a9e9a9e9502fd7f40f644938e6f49a81ab8ad566cd13238560,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726483830592623652,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcc439ebdfb1c8eb0ac4d211479d24ca,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f00a03475d073179332fbe79f2ed10d286e0c6bedf1861a8230d9919cde4a27,PodSandboxId:1eaacb088bf941e279ad42af3f72b19bf2790643fcde4457d0b5ff745a4806e5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726483823856894157,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-d4m5s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c479ead-4e77-41ca-9e2e-5cd7dc781761,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7904b48af0d517bf275eac462d37e6f93a48771be3452312f485bacfe1b3d19,PodSandboxId:0e78d323319d6c4d2718135672cccb5ed02b63104781d8d532f3c876fc39b96a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726483823027066020,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0485b752bb66b84c639fb8d5b648be4a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io
.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eff3d4b6ef1bb809c70e9089cf36c79f9cbea99823ed2b5055995b0e0abd6b23,PodSandboxId:6203d6a2f83f4a08dbecdceab914bea8099f1c8fa17a9e052e272e18cde86fb1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726483805692389671,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d14d8f4abb76f867ab3a64246ef25cb,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1da90c534a1eee0538fdeec3079b247aafe60da09cbf760ae3433480a66cc95a,PodSandboxId:3e70bdcf959532b2a9ea007abd776b8630e6092c8c5ae86c83bdd614c0a55f3c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726483790620182717,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e1264f7-2197-4821-8238-82fac849b145,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6dd41088c822947b73a621121e25131324d1b60468f8da2736a5e5abd945b74f,PodSandboxId:9ec606e5b45f08386cc4365f26f74fdcf01f39702536a53491cddf4af70506d6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726483790773221140,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7v2cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 764ade4d-cbcd-42b8-9d68-b4ed502de9eb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrac
ePeriod: 30,},},&Container{Id:ba907061155c7d07bb2ece7dd40deebec6a7b385674f721aa5faa111ad73a577,PodSandboxId:8cacdb30939e87fda706d3726485b9fd18466765801925d01faf0c5093af92aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726483790780262687,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lzrg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51962d07-f38a-4db3-86ee-af3d954dbec6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io
.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:268d2527b9c98518a83f871d1bd34bfb7b48d7a8bc4732dac191726f6e049791,PodSandboxId:194b56870a94accad5384a0c2f540bdec2d56f4cb27e5d45522ec884b22763ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726483790578380077,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 520edd0e46592c17928a302783a221a2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes
.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ef7bc6ba17087872de21708e4087c86e92125885155c397d03bed2863bb52ed,PodSandboxId:c308ac1286c4c4d42beec7706420c33bee377b409e802753a17cd320bfcd7339,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726483790538342801,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-crttt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c8cad04-2c64-42f9-85e2-5e4fbfe7961d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a6f1aac71418bcb033aee0c5b867eeb6bf5323876a35c8db9ce1c1f13caf83e,PodSandboxId:2305599c1317db9a583219dd7c98f87d6b8e759777a72ad229b6e60545f20bb2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726483790688990513,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m8fd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc549709-ddc0-4684-b377-46d33ef8f03d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"
containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c692c6a18e99d2673e412ffd521c76f2a0da04c77d38fec5ed2cb8cb05c3e3f6,PodSandboxId:35ef4979f7d506a9e9a9e9502fd7f40f644938e6f49a81ab8ad566cd13238560,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726483790509242730,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcc439ebdfb1c8eb0
ac4d211479d24ca,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c0110ceab6a6b07ad42ced3733baf8bab2d1cafffcbb49d7f182984734ea704,PodSandboxId:bd9f73d3e8d5591b4d001266e01c13cbae68b0ecd62f9cea242eeff6615c6274,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726483790366687052,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caad457f3675fcf5fa9c2e121ebd3a2a,},Ann
otations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c701fcd74abafe9b3ac301067c34d7098459eff6a0dfbe35f14fb698beab51b,PodSandboxId:ed1838f7506b4591d4ce8db6e7b3e3f56562a08f00651212eea0bb2549e4392c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726483289055421398,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-d4m5s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c479ead-4e77-41ca-9e2e-5cd7dc781761,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:034030626ec0248410f35cc75f1a050ff672de87669017788a692eecfb974eb3,PodSandboxId:159730a21bea66d0374bcb65726757074bf5cb44a554b65743cbe055ce68f57d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726483151504308598,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m8fd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc549709-ddc0-4684-b377-46d33ef8f03d,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f78c5e4a3a25c53eb3051665aed0aed847ccf5bc052d51eb080f44bbb956465,PodSandboxId:4d8c4f0a29bb7d9b84f11c3329413ddb9100b0afcf2bfdafe7d492249e8d29aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726483151498590571,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lzrg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51962d07-f38a-4db3-86ee-af3d954dbec6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac63170bf5bb3e138ee2902ee9d3245bf86aad0f7cd971de1989c1c8d4b08913,PodSandboxId:9c8ab7a98f7497eec011967ba3c30c63e7f332a19b817633e32195e3f1c7958b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726483138080721834,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7v2cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 764ade4d-cbcd-42b8-9d68-b4ed502de9eb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e6d69b26d5c1088c047248a212bd53f5e13828d4e0a7c2156e0ddc88ac76ccf,PodSandboxId:3fbb7c8e9af7172c733fa210b0085f7297ed8632dadffa7ad36fcf99012b9a72,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726483137842389587,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-crttt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c8cad04-2c64-42f9-85e2-5e4fbfe7961d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0223669288e248ff9be2473e59f09af3156f136f2627463151e0bc52191ddfb,PodSandboxId:42a76bc40dc3e2b105fd3eb70534ffb1703abed4e811e57f667c791cb8af94ce,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726483126505949950,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caad457f3675fcf5fa9c2e121ebd3a2a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:308650af833f609d658eacf1b2b1c8baa29f17b904e878b9f27e62fe4fb823d3,PodSandboxId:693cfec22177da2ee98748363dd5cc765290555de3c0bcc51d452517bd92abaf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726483126351029784,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 520edd0e46592c17928a302783a221a2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=edb7cfca-762a-4290-aaec-2bd4f5c3268f name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:55:25 ha-244475 crio[3700]: time="2024-09-16 10:55:25.052281989Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b79f22d4-a607-4cde-8c44-ffc0879c43da name=/runtime.v1.RuntimeService/Version
	Sep 16 10:55:25 ha-244475 crio[3700]: time="2024-09-16 10:55:25.052374875Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b79f22d4-a607-4cde-8c44-ffc0879c43da name=/runtime.v1.RuntimeService/Version
	Sep 16 10:55:25 ha-244475 crio[3700]: time="2024-09-16 10:55:25.053653304Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f32b4afe-7e8f-4f61-a23a-27878772e2dd name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:55:25 ha-244475 crio[3700]: time="2024-09-16 10:55:25.054096070Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484125054072140,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f32b4afe-7e8f-4f61-a23a-27878772e2dd name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:55:25 ha-244475 crio[3700]: time="2024-09-16 10:55:25.054705889Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=870f20f4-efca-46a0-8ba3-e2278fc8c01e name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:55:25 ha-244475 crio[3700]: time="2024-09-16 10:55:25.054771581Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=870f20f4-efca-46a0-8ba3-e2278fc8c01e name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:55:25 ha-244475 crio[3700]: time="2024-09-16 10:55:25.055286543Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:392523616ed483d70489493ff810c7c00e2c3679d91ed33cace5c42354ac85af,PodSandboxId:0e78d323319d6c4d2718135672cccb5ed02b63104781d8d532f3c876fc39b96a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726483854603167187,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0485b752bb66b84c639fb8d5b648be4a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91491eba4d33b0a26fd738ad6f63f6deaf5b4c730f037eaf6dc4908905fde9f9,PodSandboxId:3e70bdcf959532b2a9ea007abd776b8630e6092c8c5ae86c83bdd614c0a55f3c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726483838590275528,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e1264f7-2197-4821-8238-82fac849b145,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39bee169a2affe6d509924bc94b0ee90196af6c589d0fe7eb3f18c48559c446d,PodSandboxId:35ef4979f7d506a9e9a9e9502fd7f40f644938e6f49a81ab8ad566cd13238560,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726483830592623652,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcc439ebdfb1c8eb0ac4d211479d24ca,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f00a03475d073179332fbe79f2ed10d286e0c6bedf1861a8230d9919cde4a27,PodSandboxId:1eaacb088bf941e279ad42af3f72b19bf2790643fcde4457d0b5ff745a4806e5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726483823856894157,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-d4m5s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c479ead-4e77-41ca-9e2e-5cd7dc781761,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7904b48af0d517bf275eac462d37e6f93a48771be3452312f485bacfe1b3d19,PodSandboxId:0e78d323319d6c4d2718135672cccb5ed02b63104781d8d532f3c876fc39b96a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726483823027066020,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0485b752bb66b84c639fb8d5b648be4a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io
.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eff3d4b6ef1bb809c70e9089cf36c79f9cbea99823ed2b5055995b0e0abd6b23,PodSandboxId:6203d6a2f83f4a08dbecdceab914bea8099f1c8fa17a9e052e272e18cde86fb1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726483805692389671,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d14d8f4abb76f867ab3a64246ef25cb,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1da90c534a1eee0538fdeec3079b247aafe60da09cbf760ae3433480a66cc95a,PodSandboxId:3e70bdcf959532b2a9ea007abd776b8630e6092c8c5ae86c83bdd614c0a55f3c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726483790620182717,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e1264f7-2197-4821-8238-82fac849b145,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6dd41088c822947b73a621121e25131324d1b60468f8da2736a5e5abd945b74f,PodSandboxId:9ec606e5b45f08386cc4365f26f74fdcf01f39702536a53491cddf4af70506d6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726483790773221140,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7v2cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 764ade4d-cbcd-42b8-9d68-b4ed502de9eb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrac
ePeriod: 30,},},&Container{Id:ba907061155c7d07bb2ece7dd40deebec6a7b385674f721aa5faa111ad73a577,PodSandboxId:8cacdb30939e87fda706d3726485b9fd18466765801925d01faf0c5093af92aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726483790780262687,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lzrg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51962d07-f38a-4db3-86ee-af3d954dbec6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io
.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:268d2527b9c98518a83f871d1bd34bfb7b48d7a8bc4732dac191726f6e049791,PodSandboxId:194b56870a94accad5384a0c2f540bdec2d56f4cb27e5d45522ec884b22763ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726483790578380077,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 520edd0e46592c17928a302783a221a2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes
.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ef7bc6ba17087872de21708e4087c86e92125885155c397d03bed2863bb52ed,PodSandboxId:c308ac1286c4c4d42beec7706420c33bee377b409e802753a17cd320bfcd7339,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726483790538342801,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-crttt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c8cad04-2c64-42f9-85e2-5e4fbfe7961d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a6f1aac71418bcb033aee0c5b867eeb6bf5323876a35c8db9ce1c1f13caf83e,PodSandboxId:2305599c1317db9a583219dd7c98f87d6b8e759777a72ad229b6e60545f20bb2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726483790688990513,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m8fd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc549709-ddc0-4684-b377-46d33ef8f03d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"
containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c692c6a18e99d2673e412ffd521c76f2a0da04c77d38fec5ed2cb8cb05c3e3f6,PodSandboxId:35ef4979f7d506a9e9a9e9502fd7f40f644938e6f49a81ab8ad566cd13238560,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726483790509242730,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcc439ebdfb1c8eb0
ac4d211479d24ca,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c0110ceab6a6b07ad42ced3733baf8bab2d1cafffcbb49d7f182984734ea704,PodSandboxId:bd9f73d3e8d5591b4d001266e01c13cbae68b0ecd62f9cea242eeff6615c6274,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726483790366687052,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caad457f3675fcf5fa9c2e121ebd3a2a,},Ann
otations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c701fcd74abafe9b3ac301067c34d7098459eff6a0dfbe35f14fb698beab51b,PodSandboxId:ed1838f7506b4591d4ce8db6e7b3e3f56562a08f00651212eea0bb2549e4392c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726483289055421398,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-d4m5s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c479ead-4e77-41ca-9e2e-5cd7dc781761,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:034030626ec0248410f35cc75f1a050ff672de87669017788a692eecfb974eb3,PodSandboxId:159730a21bea66d0374bcb65726757074bf5cb44a554b65743cbe055ce68f57d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726483151504308598,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m8fd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc549709-ddc0-4684-b377-46d33ef8f03d,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f78c5e4a3a25c53eb3051665aed0aed847ccf5bc052d51eb080f44bbb956465,PodSandboxId:4d8c4f0a29bb7d9b84f11c3329413ddb9100b0afcf2bfdafe7d492249e8d29aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726483151498590571,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lzrg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51962d07-f38a-4db3-86ee-af3d954dbec6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac63170bf5bb3e138ee2902ee9d3245bf86aad0f7cd971de1989c1c8d4b08913,PodSandboxId:9c8ab7a98f7497eec011967ba3c30c63e7f332a19b817633e32195e3f1c7958b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726483138080721834,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7v2cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 764ade4d-cbcd-42b8-9d68-b4ed502de9eb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e6d69b26d5c1088c047248a212bd53f5e13828d4e0a7c2156e0ddc88ac76ccf,PodSandboxId:3fbb7c8e9af7172c733fa210b0085f7297ed8632dadffa7ad36fcf99012b9a72,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726483137842389587,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-crttt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c8cad04-2c64-42f9-85e2-5e4fbfe7961d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0223669288e248ff9be2473e59f09af3156f136f2627463151e0bc52191ddfb,PodSandboxId:42a76bc40dc3e2b105fd3eb70534ffb1703abed4e811e57f667c791cb8af94ce,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726483126505949950,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caad457f3675fcf5fa9c2e121ebd3a2a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:308650af833f609d658eacf1b2b1c8baa29f17b904e878b9f27e62fe4fb823d3,PodSandboxId:693cfec22177da2ee98748363dd5cc765290555de3c0bcc51d452517bd92abaf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726483126351029784,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 520edd0e46592c17928a302783a221a2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=870f20f4-efca-46a0-8ba3-e2278fc8c01e name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:55:25 ha-244475 crio[3700]: time="2024-09-16 10:55:25.106157377Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=26aa8d51-1a1d-413c-8b99-69b3a3583b0f name=/runtime.v1.RuntimeService/Version
	Sep 16 10:55:25 ha-244475 crio[3700]: time="2024-09-16 10:55:25.106255610Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=26aa8d51-1a1d-413c-8b99-69b3a3583b0f name=/runtime.v1.RuntimeService/Version
	Sep 16 10:55:25 ha-244475 crio[3700]: time="2024-09-16 10:55:25.107760123Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3b156988-e38c-4894-bf78-cdbee190793b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:55:25 ha-244475 crio[3700]: time="2024-09-16 10:55:25.108189258Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484125108164017,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3b156988-e38c-4894-bf78-cdbee190793b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:55:25 ha-244475 crio[3700]: time="2024-09-16 10:55:25.108634389Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5257dd2d-21f7-4121-b3c5-93149a8280e6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:55:25 ha-244475 crio[3700]: time="2024-09-16 10:55:25.108712168Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5257dd2d-21f7-4121-b3c5-93149a8280e6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:55:25 ha-244475 crio[3700]: time="2024-09-16 10:55:25.109125305Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:392523616ed483d70489493ff810c7c00e2c3679d91ed33cace5c42354ac85af,PodSandboxId:0e78d323319d6c4d2718135672cccb5ed02b63104781d8d532f3c876fc39b96a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726483854603167187,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0485b752bb66b84c639fb8d5b648be4a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91491eba4d33b0a26fd738ad6f63f6deaf5b4c730f037eaf6dc4908905fde9f9,PodSandboxId:3e70bdcf959532b2a9ea007abd776b8630e6092c8c5ae86c83bdd614c0a55f3c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726483838590275528,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e1264f7-2197-4821-8238-82fac849b145,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39bee169a2affe6d509924bc94b0ee90196af6c589d0fe7eb3f18c48559c446d,PodSandboxId:35ef4979f7d506a9e9a9e9502fd7f40f644938e6f49a81ab8ad566cd13238560,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726483830592623652,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcc439ebdfb1c8eb0ac4d211479d24ca,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f00a03475d073179332fbe79f2ed10d286e0c6bedf1861a8230d9919cde4a27,PodSandboxId:1eaacb088bf941e279ad42af3f72b19bf2790643fcde4457d0b5ff745a4806e5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726483823856894157,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-d4m5s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c479ead-4e77-41ca-9e2e-5cd7dc781761,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7904b48af0d517bf275eac462d37e6f93a48771be3452312f485bacfe1b3d19,PodSandboxId:0e78d323319d6c4d2718135672cccb5ed02b63104781d8d532f3c876fc39b96a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726483823027066020,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0485b752bb66b84c639fb8d5b648be4a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io
.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eff3d4b6ef1bb809c70e9089cf36c79f9cbea99823ed2b5055995b0e0abd6b23,PodSandboxId:6203d6a2f83f4a08dbecdceab914bea8099f1c8fa17a9e052e272e18cde86fb1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726483805692389671,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d14d8f4abb76f867ab3a64246ef25cb,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1da90c534a1eee0538fdeec3079b247aafe60da09cbf760ae3433480a66cc95a,PodSandboxId:3e70bdcf959532b2a9ea007abd776b8630e6092c8c5ae86c83bdd614c0a55f3c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726483790620182717,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e1264f7-2197-4821-8238-82fac849b145,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6dd41088c822947b73a621121e25131324d1b60468f8da2736a5e5abd945b74f,PodSandboxId:9ec606e5b45f08386cc4365f26f74fdcf01f39702536a53491cddf4af70506d6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726483790773221140,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7v2cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 764ade4d-cbcd-42b8-9d68-b4ed502de9eb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrac
ePeriod: 30,},},&Container{Id:ba907061155c7d07bb2ece7dd40deebec6a7b385674f721aa5faa111ad73a577,PodSandboxId:8cacdb30939e87fda706d3726485b9fd18466765801925d01faf0c5093af92aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726483790780262687,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lzrg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51962d07-f38a-4db3-86ee-af3d954dbec6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io
.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:268d2527b9c98518a83f871d1bd34bfb7b48d7a8bc4732dac191726f6e049791,PodSandboxId:194b56870a94accad5384a0c2f540bdec2d56f4cb27e5d45522ec884b22763ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726483790578380077,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 520edd0e46592c17928a302783a221a2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes
.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ef7bc6ba17087872de21708e4087c86e92125885155c397d03bed2863bb52ed,PodSandboxId:c308ac1286c4c4d42beec7706420c33bee377b409e802753a17cd320bfcd7339,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726483790538342801,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-crttt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c8cad04-2c64-42f9-85e2-5e4fbfe7961d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a6f1aac71418bcb033aee0c5b867eeb6bf5323876a35c8db9ce1c1f13caf83e,PodSandboxId:2305599c1317db9a583219dd7c98f87d6b8e759777a72ad229b6e60545f20bb2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726483790688990513,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m8fd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc549709-ddc0-4684-b377-46d33ef8f03d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"
containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c692c6a18e99d2673e412ffd521c76f2a0da04c77d38fec5ed2cb8cb05c3e3f6,PodSandboxId:35ef4979f7d506a9e9a9e9502fd7f40f644938e6f49a81ab8ad566cd13238560,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726483790509242730,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcc439ebdfb1c8eb0
ac4d211479d24ca,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c0110ceab6a6b07ad42ced3733baf8bab2d1cafffcbb49d7f182984734ea704,PodSandboxId:bd9f73d3e8d5591b4d001266e01c13cbae68b0ecd62f9cea242eeff6615c6274,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726483790366687052,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caad457f3675fcf5fa9c2e121ebd3a2a,},Ann
otations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c701fcd74abafe9b3ac301067c34d7098459eff6a0dfbe35f14fb698beab51b,PodSandboxId:ed1838f7506b4591d4ce8db6e7b3e3f56562a08f00651212eea0bb2549e4392c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726483289055421398,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-d4m5s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c479ead-4e77-41ca-9e2e-5cd7dc781761,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:034030626ec0248410f35cc75f1a050ff672de87669017788a692eecfb974eb3,PodSandboxId:159730a21bea66d0374bcb65726757074bf5cb44a554b65743cbe055ce68f57d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726483151504308598,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m8fd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc549709-ddc0-4684-b377-46d33ef8f03d,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f78c5e4a3a25c53eb3051665aed0aed847ccf5bc052d51eb080f44bbb956465,PodSandboxId:4d8c4f0a29bb7d9b84f11c3329413ddb9100b0afcf2bfdafe7d492249e8d29aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726483151498590571,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lzrg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51962d07-f38a-4db3-86ee-af3d954dbec6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac63170bf5bb3e138ee2902ee9d3245bf86aad0f7cd971de1989c1c8d4b08913,PodSandboxId:9c8ab7a98f7497eec011967ba3c30c63e7f332a19b817633e32195e3f1c7958b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726483138080721834,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7v2cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 764ade4d-cbcd-42b8-9d68-b4ed502de9eb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e6d69b26d5c1088c047248a212bd53f5e13828d4e0a7c2156e0ddc88ac76ccf,PodSandboxId:3fbb7c8e9af7172c733fa210b0085f7297ed8632dadffa7ad36fcf99012b9a72,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726483137842389587,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-crttt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c8cad04-2c64-42f9-85e2-5e4fbfe7961d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0223669288e248ff9be2473e59f09af3156f136f2627463151e0bc52191ddfb,PodSandboxId:42a76bc40dc3e2b105fd3eb70534ffb1703abed4e811e57f667c791cb8af94ce,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726483126505949950,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caad457f3675fcf5fa9c2e121ebd3a2a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:308650af833f609d658eacf1b2b1c8baa29f17b904e878b9f27e62fe4fb823d3,PodSandboxId:693cfec22177da2ee98748363dd5cc765290555de3c0bcc51d452517bd92abaf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726483126351029784,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 520edd0e46592c17928a302783a221a2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5257dd2d-21f7-4121-b3c5-93149a8280e6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:55:25 ha-244475 crio[3700]: time="2024-09-16 10:55:25.159228443Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=eeac4e66-0ea7-44a9-8496-47bf55a14a56 name=/runtime.v1.RuntimeService/Version
	Sep 16 10:55:25 ha-244475 crio[3700]: time="2024-09-16 10:55:25.159305659Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=eeac4e66-0ea7-44a9-8496-47bf55a14a56 name=/runtime.v1.RuntimeService/Version
	Sep 16 10:55:25 ha-244475 crio[3700]: time="2024-09-16 10:55:25.160479062Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=788af301-86ea-49db-a61d-aec8b1809f2f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:55:25 ha-244475 crio[3700]: time="2024-09-16 10:55:25.161093345Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484125161068890,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=788af301-86ea-49db-a61d-aec8b1809f2f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:55:25 ha-244475 crio[3700]: time="2024-09-16 10:55:25.161985050Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5495033d-f5f4-4f0d-8bd5-cc6906cc7fa5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:55:25 ha-244475 crio[3700]: time="2024-09-16 10:55:25.162300619Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5495033d-f5f4-4f0d-8bd5-cc6906cc7fa5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:55:25 ha-244475 crio[3700]: time="2024-09-16 10:55:25.163445926Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:392523616ed483d70489493ff810c7c00e2c3679d91ed33cace5c42354ac85af,PodSandboxId:0e78d323319d6c4d2718135672cccb5ed02b63104781d8d532f3c876fc39b96a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726483854603167187,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0485b752bb66b84c639fb8d5b648be4a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91491eba4d33b0a26fd738ad6f63f6deaf5b4c730f037eaf6dc4908905fde9f9,PodSandboxId:3e70bdcf959532b2a9ea007abd776b8630e6092c8c5ae86c83bdd614c0a55f3c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726483838590275528,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e1264f7-2197-4821-8238-82fac849b145,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39bee169a2affe6d509924bc94b0ee90196af6c589d0fe7eb3f18c48559c446d,PodSandboxId:35ef4979f7d506a9e9a9e9502fd7f40f644938e6f49a81ab8ad566cd13238560,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726483830592623652,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcc439ebdfb1c8eb0ac4d211479d24ca,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f00a03475d073179332fbe79f2ed10d286e0c6bedf1861a8230d9919cde4a27,PodSandboxId:1eaacb088bf941e279ad42af3f72b19bf2790643fcde4457d0b5ff745a4806e5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726483823856894157,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-d4m5s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c479ead-4e77-41ca-9e2e-5cd7dc781761,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7904b48af0d517bf275eac462d37e6f93a48771be3452312f485bacfe1b3d19,PodSandboxId:0e78d323319d6c4d2718135672cccb5ed02b63104781d8d532f3c876fc39b96a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726483823027066020,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0485b752bb66b84c639fb8d5b648be4a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io
.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eff3d4b6ef1bb809c70e9089cf36c79f9cbea99823ed2b5055995b0e0abd6b23,PodSandboxId:6203d6a2f83f4a08dbecdceab914bea8099f1c8fa17a9e052e272e18cde86fb1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726483805692389671,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d14d8f4abb76f867ab3a64246ef25cb,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1da90c534a1eee0538fdeec3079b247aafe60da09cbf760ae3433480a66cc95a,PodSandboxId:3e70bdcf959532b2a9ea007abd776b8630e6092c8c5ae86c83bdd614c0a55f3c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726483790620182717,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e1264f7-2197-4821-8238-82fac849b145,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6dd41088c822947b73a621121e25131324d1b60468f8da2736a5e5abd945b74f,PodSandboxId:9ec606e5b45f08386cc4365f26f74fdcf01f39702536a53491cddf4af70506d6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726483790773221140,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7v2cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 764ade4d-cbcd-42b8-9d68-b4ed502de9eb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrac
ePeriod: 30,},},&Container{Id:ba907061155c7d07bb2ece7dd40deebec6a7b385674f721aa5faa111ad73a577,PodSandboxId:8cacdb30939e87fda706d3726485b9fd18466765801925d01faf0c5093af92aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726483790780262687,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lzrg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51962d07-f38a-4db3-86ee-af3d954dbec6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io
.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:268d2527b9c98518a83f871d1bd34bfb7b48d7a8bc4732dac191726f6e049791,PodSandboxId:194b56870a94accad5384a0c2f540bdec2d56f4cb27e5d45522ec884b22763ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726483790578380077,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 520edd0e46592c17928a302783a221a2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes
.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ef7bc6ba17087872de21708e4087c86e92125885155c397d03bed2863bb52ed,PodSandboxId:c308ac1286c4c4d42beec7706420c33bee377b409e802753a17cd320bfcd7339,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726483790538342801,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-crttt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c8cad04-2c64-42f9-85e2-5e4fbfe7961d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a6f1aac71418bcb033aee0c5b867eeb6bf5323876a35c8db9ce1c1f13caf83e,PodSandboxId:2305599c1317db9a583219dd7c98f87d6b8e759777a72ad229b6e60545f20bb2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726483790688990513,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m8fd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc549709-ddc0-4684-b377-46d33ef8f03d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"
containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c692c6a18e99d2673e412ffd521c76f2a0da04c77d38fec5ed2cb8cb05c3e3f6,PodSandboxId:35ef4979f7d506a9e9a9e9502fd7f40f644938e6f49a81ab8ad566cd13238560,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726483790509242730,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcc439ebdfb1c8eb0
ac4d211479d24ca,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c0110ceab6a6b07ad42ced3733baf8bab2d1cafffcbb49d7f182984734ea704,PodSandboxId:bd9f73d3e8d5591b4d001266e01c13cbae68b0ecd62f9cea242eeff6615c6274,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726483790366687052,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caad457f3675fcf5fa9c2e121ebd3a2a,},Ann
otations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c701fcd74abafe9b3ac301067c34d7098459eff6a0dfbe35f14fb698beab51b,PodSandboxId:ed1838f7506b4591d4ce8db6e7b3e3f56562a08f00651212eea0bb2549e4392c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726483289055421398,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-d4m5s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c479ead-4e77-41ca-9e2e-5cd7dc781761,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:034030626ec0248410f35cc75f1a050ff672de87669017788a692eecfb974eb3,PodSandboxId:159730a21bea66d0374bcb65726757074bf5cb44a554b65743cbe055ce68f57d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726483151504308598,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m8fd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc549709-ddc0-4684-b377-46d33ef8f03d,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f78c5e4a3a25c53eb3051665aed0aed847ccf5bc052d51eb080f44bbb956465,PodSandboxId:4d8c4f0a29bb7d9b84f11c3329413ddb9100b0afcf2bfdafe7d492249e8d29aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726483151498590571,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lzrg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51962d07-f38a-4db3-86ee-af3d954dbec6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac63170bf5bb3e138ee2902ee9d3245bf86aad0f7cd971de1989c1c8d4b08913,PodSandboxId:9c8ab7a98f7497eec011967ba3c30c63e7f332a19b817633e32195e3f1c7958b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726483138080721834,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7v2cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 764ade4d-cbcd-42b8-9d68-b4ed502de9eb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e6d69b26d5c1088c047248a212bd53f5e13828d4e0a7c2156e0ddc88ac76ccf,PodSandboxId:3fbb7c8e9af7172c733fa210b0085f7297ed8632dadffa7ad36fcf99012b9a72,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726483137842389587,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-crttt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c8cad04-2c64-42f9-85e2-5e4fbfe7961d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0223669288e248ff9be2473e59f09af3156f136f2627463151e0bc52191ddfb,PodSandboxId:42a76bc40dc3e2b105fd3eb70534ffb1703abed4e811e57f667c791cb8af94ce,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726483126505949950,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caad457f3675fcf5fa9c2e121ebd3a2a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:308650af833f609d658eacf1b2b1c8baa29f17b904e878b9f27e62fe4fb823d3,PodSandboxId:693cfec22177da2ee98748363dd5cc765290555de3c0bcc51d452517bd92abaf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726483126351029784,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 520edd0e46592c17928a302783a221a2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5495033d-f5f4-4f0d-8bd5-cc6906cc7fa5 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	392523616ed48       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      4 minutes ago       Running             kube-controller-manager   3                   0e78d323319d6       kube-controller-manager-ha-244475
	91491eba4d33b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       3                   3e70bdcf95953       storage-provisioner
	39bee169a2aff       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      4 minutes ago       Running             kube-apiserver            3                   35ef4979f7d50       kube-apiserver-ha-244475
	2f00a03475d07       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      5 minutes ago       Running             busybox                   1                   1eaacb088bf94       busybox-7dff88458-d4m5s
	c7904b48af0d5       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      5 minutes ago       Exited              kube-controller-manager   2                   0e78d323319d6       kube-controller-manager-ha-244475
	eff3d4b6ef1bb       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      5 minutes ago       Running             kube-vip                  0                   6203d6a2f83f4       kube-vip-ha-244475
	ba907061155c7       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   1                   8cacdb30939e8       coredns-7c65d6cfc9-lzrg2
	6dd41088c8229       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      5 minutes ago       Running             kindnet-cni               1                   9ec606e5b45f0       kindnet-7v2cl
	3a6f1aac71418       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   1                   2305599c1317d       coredns-7c65d6cfc9-m8fd7
	1da90c534a1ee       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Exited              storage-provisioner       2                   3e70bdcf95953       storage-provisioner
	268d2527b9c98       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      5 minutes ago       Running             etcd                      1                   194b56870a94a       etcd-ha-244475
	2ef7bc6ba1708       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      5 minutes ago       Running             kube-proxy                1                   c308ac1286c4c       kube-proxy-crttt
	c692c6a18e99d       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      5 minutes ago       Exited              kube-apiserver            2                   35ef4979f7d50       kube-apiserver-ha-244475
	6c0110ceab6a6       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      5 minutes ago       Running             kube-scheduler            1                   bd9f73d3e8d55       kube-scheduler-ha-244475
	5c701fcd74aba       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   ed1838f7506b4       busybox-7dff88458-d4m5s
	034030626ec02       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      16 minutes ago      Exited              coredns                   0                   159730a21bea6       coredns-7c65d6cfc9-m8fd7
	7f78c5e4a3a25       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      16 minutes ago      Exited              coredns                   0                   4d8c4f0a29bb7       coredns-7c65d6cfc9-lzrg2
	ac63170bf5bb3       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      16 minutes ago      Exited              kindnet-cni               0                   9c8ab7a98f749       kindnet-7v2cl
	6e6d69b26d5c1       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      16 minutes ago      Exited              kube-proxy                0                   3fbb7c8e9af71       kube-proxy-crttt
	a0223669288e2       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      16 minutes ago      Exited              kube-scheduler            0                   42a76bc40dc3e       kube-scheduler-ha-244475
	308650af833f6       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      16 minutes ago      Exited              etcd                      0                   693cfec22177d       etcd-ha-244475
	
	
	==> coredns [034030626ec0248410f35cc75f1a050ff672de87669017788a692eecfb974eb3] <==
	[INFO] 10.244.2.2:42931 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000200783s
	[INFO] 10.244.0.4:33694 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014309s
	[INFO] 10.244.0.4:35532 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000107639s
	[INFO] 10.244.0.4:53168 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00009525s
	[INFO] 10.244.0.4:50253 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001250965s
	[INFO] 10.244.0.4:40357 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000089492s
	[INFO] 10.244.1.2:49152 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001985919s
	[INFO] 10.244.1.2:50396 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000132748s
	[INFO] 10.244.2.2:38313 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000951s
	[INFO] 10.244.0.4:43336 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000168268s
	[INFO] 10.244.0.4:44949 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000123895s
	[INFO] 10.244.0.4:52348 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000107748s
	[INFO] 10.244.1.2:36649 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000286063s
	[INFO] 10.244.1.2:42747 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000082265s
	[INFO] 10.244.2.2:45891 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00018425s
	[INFO] 10.244.2.2:53625 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000126302s
	[INFO] 10.244.2.2:44397 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000109098s
	[INFO] 10.244.0.4:39956 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013935s
	[INFO] 10.244.0.4:39139 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00008694s
	[INFO] 10.244.0.4:38933 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000060589s
	[INFO] 10.244.1.2:36849 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000146451s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services)
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [3a6f1aac71418bcb033aee0c5b867eeb6bf5323876a35c8db9ce1c1f13caf83e] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:48952->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:48952->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [7f78c5e4a3a25c53eb3051665aed0aed847ccf5bc052d51eb080f44bbb956465] <==
	[INFO] 10.244.2.2:52615 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000191836s
	[INFO] 10.244.2.2:49834 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000166519s
	[INFO] 10.244.2.2:39495 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000127494s
	[INFO] 10.244.0.4:37394 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001694487s
	[INFO] 10.244.0.4:36178 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000091958s
	[INFO] 10.244.0.4:33247 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000160731s
	[INFO] 10.244.1.2:52512 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150889s
	[INFO] 10.244.1.2:43450 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000182534s
	[INFO] 10.244.1.2:56403 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000150359s
	[INFO] 10.244.1.2:51246 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001230547s
	[INFO] 10.244.1.2:39220 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090721s
	[INFO] 10.244.1.2:41766 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000155057s
	[INFO] 10.244.2.2:38017 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000153103s
	[INFO] 10.244.2.2:44469 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000099361s
	[INFO] 10.244.2.2:52465 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000086382s
	[INFO] 10.244.0.4:36474 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000117775s
	[INFO] 10.244.1.2:32790 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142151s
	[INFO] 10.244.1.2:39272 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000113629s
	[INFO] 10.244.2.2:43223 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000141566s
	[INFO] 10.244.0.4:36502 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000282073s
	[INFO] 10.244.1.2:60302 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000207499s
	[INFO] 10.244.1.2:49950 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000184993s
	[INFO] 10.244.1.2:54052 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000094371s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [ba907061155c7d07bb2ece7dd40deebec6a7b385674f721aa5faa111ad73a577] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:57916->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:57916->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:34986->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:34986->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-244475
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-244475
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=ha-244475
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_38_53_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:38:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-244475
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:55:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:53:56 +0000   Mon, 16 Sep 2024 10:53:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:53:56 +0000   Mon, 16 Sep 2024 10:53:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:53:56 +0000   Mon, 16 Sep 2024 10:53:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:53:56 +0000   Mon, 16 Sep 2024 10:53:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.19
	  Hostname:    ha-244475
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8707c2bcd2ba47818dfac2382d400cf1
	  System UUID:                8707c2bc-d2ba-4781-8dfa-c2382d400cf1
	  Boot ID:                    174ade31-14cd-4b32-9050-92f81ba6b3e1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-d4m5s              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7c65d6cfc9-lzrg2             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7c65d6cfc9-m8fd7             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-ha-244475                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-7v2cl                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-ha-244475             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-244475    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-crttt                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-244475             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-244475                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 16m                    kube-proxy       
	  Normal   Starting                 4m52s                  kube-proxy       
	  Normal   Starting                 16m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     16m (x7 over 16m)      kubelet          Node ha-244475 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    16m (x8 over 16m)      kubelet          Node ha-244475 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  16m (x8 over 16m)      kubelet          Node ha-244475 status is now: NodeHasSufficientMemory
	  Normal   Starting                 16m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           16m                    node-controller  Node ha-244475 event: Registered Node ha-244475 in Controller
	  Normal   RegisteredNode           15m                    node-controller  Node ha-244475 event: Registered Node ha-244475 in Controller
	  Normal   RegisteredNode           14m                    node-controller  Node ha-244475 event: Registered Node ha-244475 in Controller
	  Warning  ContainerGCFailed        6m33s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             5m56s (x3 over 6m45s)  kubelet          Node ha-244475 status is now: NodeNotReady
	  Normal   RegisteredNode           4m53s                  node-controller  Node ha-244475 event: Registered Node ha-244475 in Controller
	  Normal   RegisteredNode           4m29s                  node-controller  Node ha-244475 event: Registered Node ha-244475 in Controller
	  Normal   RegisteredNode           3m20s                  node-controller  Node ha-244475 event: Registered Node ha-244475 in Controller
	  Normal   NodeNotReady             113s                   node-controller  Node ha-244475 status is now: NodeNotReady
	  Normal   NodeHasSufficientPID     89s (x2 over 16m)      kubelet          Node ha-244475 status is now: NodeHasSufficientPID
	  Normal   NodeReady                89s (x2 over 16m)      kubelet          Node ha-244475 status is now: NodeReady
	  Normal   NodeHasNoDiskPressure    89s (x2 over 16m)      kubelet          Node ha-244475 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  89s (x2 over 16m)      kubelet          Node ha-244475 status is now: NodeHasSufficientMemory
	
	
	Name:               ha-244475-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-244475-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=ha-244475
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T10_39_47_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:39:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-244475-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:55:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:51:17 +0000   Mon, 16 Sep 2024 10:50:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:51:17 +0000   Mon, 16 Sep 2024 10:50:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:51:17 +0000   Mon, 16 Sep 2024 10:50:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:51:17 +0000   Mon, 16 Sep 2024 10:50:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.222
	  Hostname:    ha-244475-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 bfb45c96351d4aafade2443c380b5343
	  System UUID:                bfb45c96-351d-4aaf-ade2-443c380b5343
	  Boot ID:                    d493ff2b-8d16-4f12-976a-cc277283240e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-t6fmb                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-244475-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-xvp82                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-244475-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-244475-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-t454b                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-244475-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-244475-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m16s                  kube-proxy       
	  Normal  Starting                 15m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-244475-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-244475-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node ha-244475-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                    node-controller  Node ha-244475-m02 event: Registered Node ha-244475-m02 in Controller
	  Normal  RegisteredNode           15m                    node-controller  Node ha-244475-m02 event: Registered Node ha-244475-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-244475-m02 event: Registered Node ha-244475-m02 in Controller
	  Normal  NodeNotReady             12m                    node-controller  Node ha-244475-m02 status is now: NodeNotReady
	  Normal  Starting                 5m17s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m17s (x8 over 5m17s)  kubelet          Node ha-244475-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m17s (x8 over 5m17s)  kubelet          Node ha-244475-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m17s (x7 over 5m17s)  kubelet          Node ha-244475-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m53s                  node-controller  Node ha-244475-m02 event: Registered Node ha-244475-m02 in Controller
	  Normal  RegisteredNode           4m29s                  node-controller  Node ha-244475-m02 event: Registered Node ha-244475-m02 in Controller
	  Normal  RegisteredNode           3m20s                  node-controller  Node ha-244475-m02 event: Registered Node ha-244475-m02 in Controller
	
	
	Name:               ha-244475-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-244475-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=ha-244475
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T10_42_00_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:41:59 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-244475-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:52:57 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 16 Sep 2024 10:52:36 +0000   Mon, 16 Sep 2024 10:53:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 16 Sep 2024 10:52:36 +0000   Mon, 16 Sep 2024 10:53:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 16 Sep 2024 10:52:36 +0000   Mon, 16 Sep 2024 10:53:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 16 Sep 2024 10:52:36 +0000   Mon, 16 Sep 2024 10:53:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.110
	  Hostname:    ha-244475-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 42083a2d4bb24e16b292c8834cbe5824
	  System UUID:                42083a2d-4bb2-4e16-b292-c8834cbe5824
	  Boot ID:                    17ea4c88-a812-44b1-a1ac-94e19366fcfe
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-2v2jd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m39s
	  kube-system                 kindnet-dflt4              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-kp7hv           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m45s                  kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  13m (x2 over 13m)      kubelet          Node ha-244475-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x2 over 13m)      kubelet          Node ha-244475-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x2 over 13m)      kubelet          Node ha-244475-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13m                    node-controller  Node ha-244475-m04 event: Registered Node ha-244475-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-244475-m04 event: Registered Node ha-244475-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-244475-m04 event: Registered Node ha-244475-m04 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-244475-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m53s                  node-controller  Node ha-244475-m04 event: Registered Node ha-244475-m04 in Controller
	  Normal   RegisteredNode           4m29s                  node-controller  Node ha-244475-m04 event: Registered Node ha-244475-m04 in Controller
	  Normal   RegisteredNode           3m20s                  node-controller  Node ha-244475-m04 event: Registered Node ha-244475-m04 in Controller
	  Warning  Rebooted                 2m49s                  kubelet          Node ha-244475-m04 has been rebooted, boot id: 17ea4c88-a812-44b1-a1ac-94e19366fcfe
	  Normal   NodeAllocatableEnforced  2m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 2m49s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  2m49s (x2 over 2m49s)  kubelet          Node ha-244475-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m49s (x2 over 2m49s)  kubelet          Node ha-244475-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m49s (x2 over 2m49s)  kubelet          Node ha-244475-m04 status is now: NodeHasSufficientPID
	  Normal   NodeReady                2m49s                  kubelet          Node ha-244475-m04 status is now: NodeReady
	  Normal   NodeNotReady             108s (x2 over 4m13s)   node-controller  Node ha-244475-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +9.139824] systemd-fstab-generator[589]: Ignoring "noauto" option for root device
	[  +0.054792] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058211] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.173707] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +0.144769] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +0.277555] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +3.915448] systemd-fstab-generator[751]: Ignoring "noauto" option for root device
	[  +3.568561] systemd-fstab-generator[882]: Ignoring "noauto" option for root device
	[  +0.067639] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.970048] systemd-fstab-generator[1302]: Ignoring "noauto" option for root device
	[  +0.087420] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.371465] kauditd_printk_skb: 21 callbacks suppressed
	[Sep16 10:39] kauditd_printk_skb: 38 callbacks suppressed
	[ +41.620280] kauditd_printk_skb: 28 callbacks suppressed
	[Sep16 10:49] systemd-fstab-generator[3624]: Ignoring "noauto" option for root device
	[  +0.157093] systemd-fstab-generator[3636]: Ignoring "noauto" option for root device
	[  +0.177936] systemd-fstab-generator[3650]: Ignoring "noauto" option for root device
	[  +0.142086] systemd-fstab-generator[3662]: Ignoring "noauto" option for root device
	[  +0.308892] systemd-fstab-generator[3690]: Ignoring "noauto" option for root device
	[  +5.722075] systemd-fstab-generator[3786]: Ignoring "noauto" option for root device
	[  +0.089630] kauditd_printk_skb: 100 callbacks suppressed
	[  +6.518650] kauditd_printk_skb: 12 callbacks suppressed
	[Sep16 10:50] kauditd_printk_skb: 85 callbacks suppressed
	[  +6.619080] kauditd_printk_skb: 1 callbacks suppressed
	[ +18.373360] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [268d2527b9c98518a83f871d1bd34bfb7b48d7a8bc4732dac191726f6e049791] <==
	{"level":"info","ts":"2024-09-16T10:51:57.355389Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"683e1d26ac7e3123","remote-peer-id":"e16a89b9eb3a3bb1"}
	{"level":"info","ts":"2024-09-16T10:51:57.355480Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"683e1d26ac7e3123","remote-peer-id":"e16a89b9eb3a3bb1"}
	{"level":"info","ts":"2024-09-16T10:51:57.374066Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"683e1d26ac7e3123","to":"e16a89b9eb3a3bb1","stream-type":"stream Message"}
	{"level":"info","ts":"2024-09-16T10:51:57.374564Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"683e1d26ac7e3123","remote-peer-id":"e16a89b9eb3a3bb1"}
	{"level":"info","ts":"2024-09-16T10:51:57.374470Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"683e1d26ac7e3123","to":"e16a89b9eb3a3bb1","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-09-16T10:51:57.374795Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"683e1d26ac7e3123","remote-peer-id":"e16a89b9eb3a3bb1"}
	{"level":"warn","ts":"2024-09-16T10:52:49.569761Z","caller":"embed/config_logging.go:170","msg":"rejected connection on client endpoint","remote-addr":"192.168.39.127:33316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2024-09-16T10:52:49.591821Z","caller":"embed/config_logging.go:170","msg":"rejected connection on client endpoint","remote-addr":"192.168.39.127:33328","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-09-16T10:52:49.604460Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"683e1d26ac7e3123 switched to configuration voters=(7511473280440480035 17357719710197446810)"}
	{"level":"info","ts":"2024-09-16T10:52:49.606526Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"3f32d84448c0bab8","local-member-id":"683e1d26ac7e3123","removed-remote-peer-id":"e16a89b9eb3a3bb1","removed-remote-peer-urls":["https://192.168.39.127:2380"]}
	{"level":"info","ts":"2024-09-16T10:52:49.606658Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"e16a89b9eb3a3bb1"}
	{"level":"warn","ts":"2024-09-16T10:52:49.606988Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"e16a89b9eb3a3bb1"}
	{"level":"info","ts":"2024-09-16T10:52:49.607057Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"e16a89b9eb3a3bb1"}
	{"level":"warn","ts":"2024-09-16T10:52:49.607462Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"e16a89b9eb3a3bb1"}
	{"level":"info","ts":"2024-09-16T10:52:49.607621Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"e16a89b9eb3a3bb1"}
	{"level":"info","ts":"2024-09-16T10:52:49.607729Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"683e1d26ac7e3123","remote-peer-id":"e16a89b9eb3a3bb1"}
	{"level":"warn","ts":"2024-09-16T10:52:49.608064Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"683e1d26ac7e3123","remote-peer-id":"e16a89b9eb3a3bb1","error":"context canceled"}
	{"level":"warn","ts":"2024-09-16T10:52:49.608120Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"e16a89b9eb3a3bb1","error":"failed to read e16a89b9eb3a3bb1 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-09-16T10:52:49.608150Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"683e1d26ac7e3123","remote-peer-id":"e16a89b9eb3a3bb1"}
	{"level":"warn","ts":"2024-09-16T10:52:49.608291Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"683e1d26ac7e3123","remote-peer-id":"e16a89b9eb3a3bb1","error":"http: read on closed response body"}
	{"level":"info","ts":"2024-09-16T10:52:49.608352Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"683e1d26ac7e3123","remote-peer-id":"e16a89b9eb3a3bb1"}
	{"level":"info","ts":"2024-09-16T10:52:49.608369Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"e16a89b9eb3a3bb1"}
	{"level":"info","ts":"2024-09-16T10:52:49.608382Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"683e1d26ac7e3123","removed-remote-peer-id":"e16a89b9eb3a3bb1"}
	{"level":"warn","ts":"2024-09-16T10:52:49.620732Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"683e1d26ac7e3123","remote-peer-id-stream-handler":"683e1d26ac7e3123","remote-peer-id-from":"e16a89b9eb3a3bb1"}
	{"level":"warn","ts":"2024-09-16T10:52:49.629988Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"683e1d26ac7e3123","remote-peer-id-stream-handler":"683e1d26ac7e3123","remote-peer-id-from":"e16a89b9eb3a3bb1"}
	
	
	==> etcd [308650af833f609d658eacf1b2b1c8baa29f17b904e878b9f27e62fe4fb823d3] <==
	2024/09/16 10:48:05 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/09/16 10:48:05 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-16T10:48:05.622322Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.19:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:48:05.622416Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.19:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-16T10:48:05.622669Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"683e1d26ac7e3123","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-09-16T10:48:05.622908Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"f0e3021c7d1d789a"}
	{"level":"info","ts":"2024-09-16T10:48:05.622994Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"f0e3021c7d1d789a"}
	{"level":"info","ts":"2024-09-16T10:48:05.623020Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"f0e3021c7d1d789a"}
	{"level":"info","ts":"2024-09-16T10:48:05.623337Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"683e1d26ac7e3123","remote-peer-id":"f0e3021c7d1d789a"}
	{"level":"info","ts":"2024-09-16T10:48:05.623408Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"683e1d26ac7e3123","remote-peer-id":"f0e3021c7d1d789a"}
	{"level":"info","ts":"2024-09-16T10:48:05.623563Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"683e1d26ac7e3123","remote-peer-id":"f0e3021c7d1d789a"}
	{"level":"info","ts":"2024-09-16T10:48:05.623672Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"f0e3021c7d1d789a"}
	{"level":"info","ts":"2024-09-16T10:48:05.623695Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"e16a89b9eb3a3bb1"}
	{"level":"info","ts":"2024-09-16T10:48:05.623706Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"e16a89b9eb3a3bb1"}
	{"level":"info","ts":"2024-09-16T10:48:05.623788Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"e16a89b9eb3a3bb1"}
	{"level":"info","ts":"2024-09-16T10:48:05.623941Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"683e1d26ac7e3123","remote-peer-id":"e16a89b9eb3a3bb1"}
	{"level":"info","ts":"2024-09-16T10:48:05.624005Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"683e1d26ac7e3123","remote-peer-id":"e16a89b9eb3a3bb1"}
	{"level":"info","ts":"2024-09-16T10:48:05.624132Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"683e1d26ac7e3123","remote-peer-id":"e16a89b9eb3a3bb1"}
	{"level":"info","ts":"2024-09-16T10:48:05.624228Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"e16a89b9eb3a3bb1"}
	{"level":"info","ts":"2024-09-16T10:48:05.627878Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.19:2380"}
	{"level":"warn","ts":"2024-09-16T10:48:05.627901Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"8.872293306s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-09-16T10:48:05.627999Z","caller":"traceutil/trace.go:171","msg":"trace[183528881] range","detail":"{range_begin:; range_end:; }","duration":"8.872408831s","start":"2024-09-16T10:47:56.755582Z","end":"2024-09-16T10:48:05.627991Z","steps":["trace[183528881] 'agreement among raft nodes before linearized reading'  (duration: 8.872291909s)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:48:05.628057Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.19:2380"}
	{"level":"info","ts":"2024-09-16T10:48:05.628086Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-244475","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.19:2380"],"advertise-client-urls":["https://192.168.39.19:2379"]}
	{"level":"error","ts":"2024-09-16T10:48:05.628066Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[-]linearizable_read failed: etcdserver: server stopped\n[+]data_corruption ok\n[+]serializable_read ok\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	
	
	==> kernel <==
	 10:55:25 up 17 min,  0 users,  load average: 0.02, 0.31, 0.27
	Linux ha-244475 5.10.207 #1 SMP Sun Sep 15 20:39:46 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [6dd41088c822947b73a621121e25131324d1b60468f8da2736a5e5abd945b74f] <==
	I0916 10:54:42.120299       1 main.go:322] Node ha-244475-m04 has CIDR [10.244.3.0/24] 
	I0916 10:54:52.112958       1 main.go:295] Handling node with IPs: map[192.168.39.19:{}]
	I0916 10:54:52.113065       1 main.go:299] handling current node
	I0916 10:54:52.113093       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0916 10:54:52.113099       1 main.go:322] Node ha-244475-m02 has CIDR [10.244.1.0/24] 
	I0916 10:54:52.113252       1 main.go:295] Handling node with IPs: map[192.168.39.110:{}]
	I0916 10:54:52.113277       1 main.go:322] Node ha-244475-m04 has CIDR [10.244.3.0/24] 
	I0916 10:55:02.113249       1 main.go:295] Handling node with IPs: map[192.168.39.110:{}]
	I0916 10:55:02.113295       1 main.go:322] Node ha-244475-m04 has CIDR [10.244.3.0/24] 
	I0916 10:55:02.113438       1 main.go:295] Handling node with IPs: map[192.168.39.19:{}]
	I0916 10:55:02.113445       1 main.go:299] handling current node
	I0916 10:55:02.113455       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0916 10:55:02.113459       1 main.go:322] Node ha-244475-m02 has CIDR [10.244.1.0/24] 
	I0916 10:55:12.114724       1 main.go:295] Handling node with IPs: map[192.168.39.19:{}]
	I0916 10:55:12.114843       1 main.go:299] handling current node
	I0916 10:55:12.114872       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0916 10:55:12.114889       1 main.go:322] Node ha-244475-m02 has CIDR [10.244.1.0/24] 
	I0916 10:55:12.115021       1 main.go:295] Handling node with IPs: map[192.168.39.110:{}]
	I0916 10:55:12.115042       1 main.go:322] Node ha-244475-m04 has CIDR [10.244.3.0/24] 
	I0916 10:55:22.117464       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0916 10:55:22.117694       1 main.go:322] Node ha-244475-m02 has CIDR [10.244.1.0/24] 
	I0916 10:55:22.117925       1 main.go:295] Handling node with IPs: map[192.168.39.110:{}]
	I0916 10:55:22.117966       1 main.go:322] Node ha-244475-m04 has CIDR [10.244.3.0/24] 
	I0916 10:55:22.118053       1 main.go:295] Handling node with IPs: map[192.168.39.19:{}]
	I0916 10:55:22.118079       1 main.go:299] handling current node
	
	
	==> kindnet [ac63170bf5bb3e138ee2902ee9d3245bf86aad0f7cd971de1989c1c8d4b08913] <==
	I0916 10:47:29.301243       1 main.go:322] Node ha-244475-m02 has CIDR [10.244.1.0/24] 
	I0916 10:47:39.301433       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0916 10:47:39.301612       1 main.go:322] Node ha-244475-m03 has CIDR [10.244.2.0/24] 
	I0916 10:47:39.301782       1 main.go:295] Handling node with IPs: map[192.168.39.110:{}]
	I0916 10:47:39.301808       1 main.go:322] Node ha-244475-m04 has CIDR [10.244.3.0/24] 
	I0916 10:47:39.301866       1 main.go:295] Handling node with IPs: map[192.168.39.19:{}]
	I0916 10:47:39.301885       1 main.go:299] handling current node
	I0916 10:47:39.301906       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0916 10:47:39.301922       1 main.go:322] Node ha-244475-m02 has CIDR [10.244.1.0/24] 
	I0916 10:47:49.306310       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0916 10:47:49.306426       1 main.go:322] Node ha-244475-m02 has CIDR [10.244.1.0/24] 
	I0916 10:47:49.306666       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0916 10:47:49.306700       1 main.go:322] Node ha-244475-m03 has CIDR [10.244.2.0/24] 
	I0916 10:47:49.306797       1 main.go:295] Handling node with IPs: map[192.168.39.110:{}]
	I0916 10:47:49.306818       1 main.go:322] Node ha-244475-m04 has CIDR [10.244.3.0/24] 
	I0916 10:47:49.306872       1 main.go:295] Handling node with IPs: map[192.168.39.19:{}]
	I0916 10:47:49.306891       1 main.go:299] handling current node
	I0916 10:47:59.300973       1 main.go:295] Handling node with IPs: map[192.168.39.19:{}]
	I0916 10:47:59.301025       1 main.go:299] handling current node
	I0916 10:47:59.301052       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0916 10:47:59.301057       1 main.go:322] Node ha-244475-m02 has CIDR [10.244.1.0/24] 
	I0916 10:47:59.301226       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0916 10:47:59.301291       1 main.go:322] Node ha-244475-m03 has CIDR [10.244.2.0/24] 
	I0916 10:47:59.301343       1 main.go:295] Handling node with IPs: map[192.168.39.110:{}]
	I0916 10:47:59.301365       1 main.go:322] Node ha-244475-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [39bee169a2affe6d509924bc94b0ee90196af6c589d0fe7eb3f18c48559c446d] <==
	I0916 10:50:32.786625       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0916 10:50:32.786726       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0916 10:50:32.870064       1 shared_informer.go:320] Caches are synced for configmaps
	I0916 10:50:32.874061       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0916 10:50:32.874378       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0916 10:50:32.874471       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0916 10:50:32.878579       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0916 10:50:32.878785       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0916 10:50:32.880011       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0916 10:50:32.880175       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0916 10:50:32.880401       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0916 10:50:32.880639       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0916 10:50:32.881359       1 aggregator.go:171] initial CRD sync complete...
	I0916 10:50:32.881448       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 10:50:32.881854       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 10:50:32.883414       1 cache.go:39] Caches are synced for autoregister controller
	I0916 10:50:32.885333       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0916 10:50:32.885366       1 policy_source.go:224] refreshing policies
	W0916 10:50:32.891110       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.222]
	I0916 10:50:32.892579       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 10:50:32.900075       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0916 10:50:32.909150       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0916 10:50:32.968716       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 10:50:33.778275       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0916 10:50:34.130805       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.19 192.168.39.222]
	
	
	==> kube-apiserver [c692c6a18e99d2673e412ffd521c76f2a0da04c77d38fec5ed2cb8cb05c3e3f6] <==
	I0916 10:49:51.303646       1 options.go:228] external host was not specified, using 192.168.39.19
	I0916 10:49:51.307873       1 server.go:142] Version: v1.31.1
	I0916 10:49:51.309597       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:49:51.809274       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0916 10:49:51.821629       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0916 10:49:51.821673       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0916 10:49:51.821966       1 instance.go:232] Using reconciler: lease
	I0916 10:49:51.822581       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	W0916 10:50:11.808376       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0916 10:50:11.808683       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0916 10:50:11.823458       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0916 10:50:11.823720       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [392523616ed483d70489493ff810c7c00e2c3679d91ed33cace5c42354ac85af] <==
	I0916 10:53:32.251332       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="132.633µs"
	I0916 10:53:32.251620       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="30.79363ms"
	I0916 10:53:32.251790       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="84.144µs"
	I0916 10:53:32.669230       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475"
	E0916 10:53:36.951153       1 gc_controller.go:151] "Failed to get node" err="node \"ha-244475-m03\" not found" logger="pod-garbage-collector-controller" node="ha-244475-m03"
	E0916 10:53:36.951262       1 gc_controller.go:151] "Failed to get node" err="node \"ha-244475-m03\" not found" logger="pod-garbage-collector-controller" node="ha-244475-m03"
	E0916 10:53:36.951290       1 gc_controller.go:151] "Failed to get node" err="node \"ha-244475-m03\" not found" logger="pod-garbage-collector-controller" node="ha-244475-m03"
	E0916 10:53:36.951317       1 gc_controller.go:151] "Failed to get node" err="node \"ha-244475-m03\" not found" logger="pod-garbage-collector-controller" node="ha-244475-m03"
	E0916 10:53:36.951340       1 gc_controller.go:151] "Failed to get node" err="node \"ha-244475-m03\" not found" logger="pod-garbage-collector-controller" node="ha-244475-m03"
	I0916 10:53:37.425609       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475"
	I0916 10:53:37.590663       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m04"
	I0916 10:53:37.617747       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m04"
	I0916 10:53:37.636918       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="25.292524ms"
	I0916 10:53:37.638288       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="45.794µs"
	I0916 10:53:42.747925       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m04"
	I0916 10:53:47.503128       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m04"
	I0916 10:53:52.706767       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="22.717629ms"
	I0916 10:53:52.706870       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="52.89µs"
	I0916 10:53:52.783202       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="43.346415ms"
	I0916 10:53:52.783345       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="89.739µs"
	I0916 10:53:52.929201       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="24.907418ms"
	I0916 10:53:52.930575       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="223.362µs"
	I0916 10:53:56.771558       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475"
	I0916 10:53:56.788004       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475"
	I0916 10:53:57.326926       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475"
	
	
	==> kube-controller-manager [c7904b48af0d517bf275eac462d37e6f93a48771be3452312f485bacfe1b3d19] <==
	I0916 10:50:23.787937       1 serving.go:386] Generated self-signed cert in-memory
	I0916 10:50:24.055825       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0916 10:50:24.055866       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:50:24.057300       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0916 10:50:24.057381       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0916 10:50:24.057621       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0916 10:50:24.057818       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0916 10:50:34.061987       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-proxy [2ef7bc6ba17087872de21708e4087c86e92125885155c397d03bed2863bb52ed] <==
	E0916 10:50:33.213866       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-244475\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0916 10:50:33.214328       1 server.go:646] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	E0916 10:50:33.214618       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:50:33.254822       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0916 10:50:33.254898       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0916 10:50:33.254936       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:50:33.257890       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:50:33.258306       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:50:33.258342       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:50:33.259973       1 config.go:199] "Starting service config controller"
	I0916 10:50:33.260036       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:50:33.260076       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:50:33.260102       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:50:33.260908       1 config.go:328] "Starting node config controller"
	I0916 10:50:33.260937       1 shared_informer.go:313] Waiting for caches to sync for node config
	E0916 10:50:36.287039       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0916 10:50:36.287174       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 10:50:36.287297       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 10:50:36.286395       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-244475&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 10:50:36.287954       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-244475&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 10:50:36.287411       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 10:50:36.288233       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	I0916 10:50:37.161128       1 shared_informer.go:320] Caches are synced for node config
	I0916 10:50:37.161227       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:50:37.560852       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [6e6d69b26d5c1088c047248a212bd53f5e13828d4e0a7c2156e0ddc88ac76ccf] <==
	E0916 10:46:51.645690       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-244475&resourceVersion=1888\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 10:46:51.645901       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1889": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 10:46:51.645976       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1889\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 10:46:51.646054       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1835": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 10:46:51.646084       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1835\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 10:46:58.174791       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-244475&resourceVersion=1888": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 10:46:58.175000       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-244475&resourceVersion=1888\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 10:46:58.175108       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1835": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 10:46:58.175150       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1835\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 10:46:58.174878       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1889": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 10:46:58.175188       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1889\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 10:47:07.389661       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1889": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 10:47:07.390930       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1889\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 10:47:07.391344       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-244475&resourceVersion=1888": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 10:47:07.391805       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-244475&resourceVersion=1888\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 10:47:10.463790       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1835": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 10:47:10.464149       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1835\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 10:47:25.822162       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1889": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 10:47:25.822276       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1889\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 10:47:31.966129       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-244475&resourceVersion=1888": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 10:47:31.966261       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-244475&resourceVersion=1888\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 10:47:38.109734       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1835": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 10:47:38.109808       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1835\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 10:47:59.614733       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-244475&resourceVersion=1888": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 10:47:59.615035       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-244475&resourceVersion=1888\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-scheduler [6c0110ceab6a6b07ad42ced3733baf8bab2d1cafffcbb49d7f182984734ea704] <==
	W0916 10:50:27.106134       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.19:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.19:8443: connect: connection refused
	E0916 10:50:27.106286       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.19:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.19:8443: connect: connection refused" logger="UnhandledError"
	W0916 10:50:27.917399       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.19:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.19:8443: connect: connection refused
	E0916 10:50:27.917565       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.39.19:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.19:8443: connect: connection refused" logger="UnhandledError"
	W0916 10:50:29.353853       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.19:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.19:8443: connect: connection refused
	E0916 10:50:29.353900       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.19:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.19:8443: connect: connection refused" logger="UnhandledError"
	W0916 10:50:29.362689       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.19:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.19:8443: connect: connection refused
	E0916 10:50:29.362727       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.19:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.19:8443: connect: connection refused" logger="UnhandledError"
	W0916 10:50:29.539820       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.19:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.19:8443: connect: connection refused
	E0916 10:50:29.539945       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.39.19:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.19:8443: connect: connection refused" logger="UnhandledError"
	W0916 10:50:30.172233       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.19:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.19:8443: connect: connection refused
	E0916 10:50:30.172367       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.19:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.19:8443: connect: connection refused" logger="UnhandledError"
	W0916 10:50:30.247772       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.19:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.19:8443: connect: connection refused
	E0916 10:50:30.247816       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.19:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.19:8443: connect: connection refused" logger="UnhandledError"
	W0916 10:50:32.800369       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 10:50:32.801683       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:50:32.801573       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 10:50:32.801914       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:50:32.801624       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 10:50:32.802040       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0916 10:50:43.636980       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0916 10:52:48.001271       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-2v2jd\": pod busybox-7dff88458-2v2jd is already assigned to node \"ha-244475-m04\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-2v2jd" node="ha-244475-m04"
	E0916 10:52:48.002577       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod ca60db2e-7e01-4fc9-ac6c-724930269681(default/busybox-7dff88458-2v2jd) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-2v2jd"
	E0916 10:52:48.002701       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-2v2jd\": pod busybox-7dff88458-2v2jd is already assigned to node \"ha-244475-m04\"" pod="default/busybox-7dff88458-2v2jd"
	I0916 10:52:48.002757       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-2v2jd" node="ha-244475-m04"
	
	
	==> kube-scheduler [a0223669288e248ff9be2473e59f09af3156f136f2627463151e0bc52191ddfb] <==
	E0916 10:38:50.992011       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:38:51.039856       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 10:38:51.039907       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:38:51.293677       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 10:38:51.293783       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0916 10:38:53.269920       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 10:41:27.446213       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="8e6b78c3-ae2c-4cff-b2cf-fd0f08d53fa5" pod="default/busybox-7dff88458-7bhqg" assumedNode="ha-244475-m03" currentNode="ha-244475-m02"
	E0916 10:41:27.456948       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-7bhqg\": pod busybox-7dff88458-7bhqg is already assigned to node \"ha-244475-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-7bhqg" node="ha-244475-m02"
	E0916 10:41:27.457071       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 8e6b78c3-ae2c-4cff-b2cf-fd0f08d53fa5(default/busybox-7dff88458-7bhqg) was assumed on ha-244475-m02 but assigned to ha-244475-m03" pod="default/busybox-7dff88458-7bhqg"
	E0916 10:41:27.457108       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-7bhqg\": pod busybox-7dff88458-7bhqg is already assigned to node \"ha-244475-m03\"" pod="default/busybox-7dff88458-7bhqg"
	I0916 10:41:27.457173       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-7bhqg" node="ha-244475-m03"
	E0916 10:47:54.234292       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0916 10:47:55.101205       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0916 10:47:55.243248       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0916 10:47:56.250917       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0916 10:47:56.495628       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0916 10:47:57.140623       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0916 10:47:57.973671       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0916 10:47:58.028997       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0916 10:48:01.831431       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0916 10:48:02.285792       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0916 10:48:02.396636       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E0916 10:48:02.676356       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0916 10:48:02.796464       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0916 10:48:05.532040       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 16 10:53:52 ha-244475 kubelet[1309]: E0916 10:53:52.832993    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484032832433431,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:53:52 ha-244475 kubelet[1309]: E0916 10:53:52.833049    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484032832433431,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:54:02 ha-244475 kubelet[1309]: E0916 10:54:02.835170    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484042834782358,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:54:02 ha-244475 kubelet[1309]: E0916 10:54:02.835644    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484042834782358,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:54:12 ha-244475 kubelet[1309]: E0916 10:54:12.837971    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484052837354925,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:54:12 ha-244475 kubelet[1309]: E0916 10:54:12.838347    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484052837354925,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:54:22 ha-244475 kubelet[1309]: E0916 10:54:22.841082    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484062840703672,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:54:22 ha-244475 kubelet[1309]: E0916 10:54:22.841543    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484062840703672,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:54:32 ha-244475 kubelet[1309]: E0916 10:54:32.845120    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484072843690762,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:54:32 ha-244475 kubelet[1309]: E0916 10:54:32.845212    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484072843690762,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:54:42 ha-244475 kubelet[1309]: E0916 10:54:42.846991    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484082846361829,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:54:42 ha-244475 kubelet[1309]: E0916 10:54:42.847276    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484082846361829,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:54:52 ha-244475 kubelet[1309]: E0916 10:54:52.622109    1309 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 16 10:54:52 ha-244475 kubelet[1309]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 16 10:54:52 ha-244475 kubelet[1309]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 16 10:54:52 ha-244475 kubelet[1309]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 16 10:54:52 ha-244475 kubelet[1309]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 16 10:54:52 ha-244475 kubelet[1309]: E0916 10:54:52.849872    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484092849205479,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:54:52 ha-244475 kubelet[1309]: E0916 10:54:52.849973    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484092849205479,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:55:02 ha-244475 kubelet[1309]: E0916 10:55:02.852472    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484102852105669,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:55:02 ha-244475 kubelet[1309]: E0916 10:55:02.853050    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484102852105669,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:55:12 ha-244475 kubelet[1309]: E0916 10:55:12.855875    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484112855094804,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:55:12 ha-244475 kubelet[1309]: E0916 10:55:12.856246    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484112855094804,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:55:22 ha-244475 kubelet[1309]: E0916 10:55:22.864020    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484122857446093,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:55:22 ha-244475 kubelet[1309]: E0916 10:55:22.864444    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484122857446093,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0916 10:55:24.671476   30901 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19651-3851/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
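The "bufio.Scanner: token too long" message in the stderr block above is the standard error Go's bufio.Scanner reports when a single line exceeds its token limit (bufio.MaxScanTokenSize, 64 KiB, by default); lastStart.txt evidently contains a line longer than that. A minimal sketch of the behaviour and the usual workaround follows; it is not minikube's logs.go code, and "lastStart.txt" is used here only as an illustrative local file name.

// Minimal sketch: why bufio.Scanner fails with "token too long" on very
// long lines, and how Scanner.Buffer raises the limit.
package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	f, err := os.Open("lastStart.txt") // illustrative file name
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// The default cap is bufio.MaxScanTokenSize (64 KiB). Any longer line
	// stops the scan and sc.Err() returns bufio.ErrTooLong ("token too long").
	// Allowing tokens up to 10 MiB avoids that:
	sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)

	for sc.Scan() {
		_ = sc.Text() // process one line
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "scan failed:", err)
	}
}

With the default limit the loop exits early and sc.Err() reports bufio.ErrTooLong, which matches the logs.go:258 failure quoted above.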
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-244475 -n ha-244475
helpers_test.go:261: (dbg) Run:  kubectl --context ha-244475 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context ha-244475 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (474.368µs)
helpers_test.go:263: kubectl --context ha-244475 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.66s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (271.47s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-244475 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0916 10:56:28.278378   11203 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:57:51.342877   11203 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-244475 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (4m28.477773914s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:584: (dbg) Non-zero exit: kubectl get nodes: fork/exec /usr/local/bin/kubectl: exec format error (545.314µs)
ha_test.go:586: failed to run kubectl get nodes. args "kubectl get nodes" : fork/exec /usr/local/bin/kubectl: exec format error
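Every kubectl invocation in this run fails the same way: "fork/exec /usr/local/bin/kubectl: exec format error", which the kernel returns when the file at that path is not a valid executable for the host architecture (for example a binary built for a different GOARCH, or a truncated download). The following diagnostic sketch is not part of the test suite; it assumes the file is an ELF binary on a linux/amd64 host and simply reports the binary's machine type against the host.

// Hypothetical diagnostic (not in helpers_test.go): print the ELF
// class/machine of the kubectl binary and flag an architecture mismatch.
package main

import (
	"debug/elf"
	"fmt"
	"os"
	"runtime"
)

func main() {
	const path = "/usr/local/bin/kubectl" // path taken from the failure above

	f, err := elf.Open(path)
	if err != nil {
		// A file that is not valid ELF at all (wrong format, corrupt download)
		// would also surface as "exec format error" when executed.
		fmt.Fprintf(os.Stderr, "cannot read %s as ELF: %v\n", path, err)
		return
	}
	defer f.Close()

	fmt.Printf("host: %s/%s\n", runtime.GOOS, runtime.GOARCH)
	fmt.Printf("binary: class=%v machine=%v\n", f.Class, f.Machine)

	if runtime.GOARCH == "amd64" && f.Machine != elf.EM_X86_64 {
		fmt.Println("architecture mismatch: exec would fail with \"exec format error\"")
	}
}

Replacing /usr/local/bin/kubectl with a build matching the host architecture would presumably clear this whole class of failures, since the minikube start itself succeeds and only the kubectl-based assertions fail.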
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-244475 -n ha-244475
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-244475 logs -n 25: (1.762637845s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-244475 cp ha-244475-m03:/home/docker/cp-test.txt                              | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475-m04:/home/docker/cp-test_ha-244475-m03_ha-244475-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-244475 ssh -n                                                                 | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-244475 ssh -n ha-244475-m04 sudo cat                                          | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | /home/docker/cp-test_ha-244475-m03_ha-244475-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-244475 cp testdata/cp-test.txt                                                | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-244475 ssh -n                                                                 | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-244475 cp ha-244475-m04:/home/docker/cp-test.txt                              | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1630339340/001/cp-test_ha-244475-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-244475 ssh -n                                                                 | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-244475 cp ha-244475-m04:/home/docker/cp-test.txt                              | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475:/home/docker/cp-test_ha-244475-m04_ha-244475.txt                       |           |         |         |                     |                     |
	| ssh     | ha-244475 ssh -n                                                                 | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-244475 ssh -n ha-244475 sudo cat                                              | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | /home/docker/cp-test_ha-244475-m04_ha-244475.txt                                 |           |         |         |                     |                     |
	| cp      | ha-244475 cp ha-244475-m04:/home/docker/cp-test.txt                              | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475-m02:/home/docker/cp-test_ha-244475-m04_ha-244475-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-244475 ssh -n                                                                 | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-244475 ssh -n ha-244475-m02 sudo cat                                          | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | /home/docker/cp-test_ha-244475-m04_ha-244475-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-244475 cp ha-244475-m04:/home/docker/cp-test.txt                              | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475-m03:/home/docker/cp-test_ha-244475-m04_ha-244475-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-244475 ssh -n                                                                 | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | ha-244475-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-244475 ssh -n ha-244475-m03 sudo cat                                          | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | /home/docker/cp-test_ha-244475-m04_ha-244475-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-244475 node stop m02 -v=7                                                     | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-244475 node start m02 -v=7                                                    | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:45 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-244475 -v=7                                                           | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-244475 -v=7                                                                | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-244475 --wait=true -v=7                                                    | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:48 UTC | 16 Sep 24 10:52 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-244475                                                                | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:52 UTC |                     |
	| node    | ha-244475 node delete m03 -v=7                                                   | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:52 UTC | 16 Sep 24 10:53 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-244475 stop -v=7                                                              | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:53 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-244475 --wait=true                                                         | ha-244475 | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC | 16 Sep 24 10:59 UTC |
	|         | -v=7 --alsologtostderr                                                           |           |         |         |                     |                     |
	|         | --driver=kvm2                                                                    |           |         |         |                     |                     |
	|         | --container-runtime=crio                                                         |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:55:26
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:55:26.700481   30972 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:55:26.700872   30972 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:55:26.700928   30972 out.go:358] Setting ErrFile to fd 2...
	I0916 10:55:26.700949   30972 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:55:26.701412   30972 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3851/.minikube/bin
	I0916 10:55:26.702399   30972 out.go:352] Setting JSON to false
	I0916 10:55:26.703350   30972 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2277,"bootTime":1726481850,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:55:26.703451   30972 start.go:139] virtualization: kvm guest
	I0916 10:55:26.705480   30972 out.go:177] * [ha-244475] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:55:26.706942   30972 notify.go:220] Checking for updates...
	I0916 10:55:26.706948   30972 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:55:26.708314   30972 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:55:26.709631   30972 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3851/kubeconfig
	I0916 10:55:26.711033   30972 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 10:55:26.712365   30972 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:55:26.713668   30972 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:55:26.715386   30972 config.go:182] Loaded profile config "ha-244475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:55:26.715795   30972 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:55:26.715840   30972 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:55:26.731909   30972 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39065
	I0916 10:55:26.732320   30972 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:55:26.733009   30972 main.go:141] libmachine: Using API Version  1
	I0916 10:55:26.733037   30972 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:55:26.733356   30972 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:55:26.733535   30972 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:55:26.733818   30972 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:55:26.734136   30972 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:55:26.734197   30972 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:55:26.749167   30972 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36449
	I0916 10:55:26.749660   30972 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:55:26.750105   30972 main.go:141] libmachine: Using API Version  1
	I0916 10:55:26.750128   30972 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:55:26.750452   30972 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:55:26.750585   30972 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:55:26.786804   30972 out.go:177] * Using the kvm2 driver based on existing profile
	I0916 10:55:26.787951   30972 start.go:297] selected driver: kvm2
	I0916 10:55:26.787962   30972 start.go:901] validating driver "kvm2" against &{Name:ha-244475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.1 ClusterName:ha-244475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.110 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingre
ss-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMir
ror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:55:26.788146   30972 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:55:26.788490   30972 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:55:26.788561   30972 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19651-3851/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0916 10:55:26.803237   30972 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0916 10:55:26.803928   30972 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:55:26.803960   30972 cni.go:84] Creating CNI manager for ""
	I0916 10:55:26.803997   30972 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0916 10:55:26.804061   30972 start.go:340] cluster config:
	{Name:ha-244475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-244475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.110 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:55:26.804212   30972 iso.go:125] acquiring lock: {Name:mk8165b793e44378487d96b0de120a258f46e187 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:55:26.806041   30972 out.go:177] * Starting "ha-244475" primary control-plane node in "ha-244475" cluster
	I0916 10:55:26.807255   30972 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:55:26.807287   30972 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0916 10:55:26.807294   30972 cache.go:56] Caching tarball of preloaded images
	I0916 10:55:26.807353   30972 preload.go:172] Found /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 10:55:26.807364   30972 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 10:55:26.807502   30972 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/config.json ...
	I0916 10:55:26.807697   30972 start.go:360] acquireMachinesLock for ha-244475: {Name:mk413037138532b69f77e412c3960796c09316f1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:55:26.807739   30972 start.go:364] duration metric: took 24.73µs to acquireMachinesLock for "ha-244475"
	I0916 10:55:26.807767   30972 start.go:96] Skipping create...Using existing machine configuration
	I0916 10:55:26.807774   30972 fix.go:54] fixHost starting: 
	I0916 10:55:26.808029   30972 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:55:26.808064   30972 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:55:26.823188   30972 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36589
	I0916 10:55:26.823581   30972 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:55:26.824078   30972 main.go:141] libmachine: Using API Version  1
	I0916 10:55:26.824102   30972 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:55:26.824394   30972 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:55:26.824560   30972 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:55:26.824683   30972 main.go:141] libmachine: (ha-244475) Calling .GetState
	I0916 10:55:26.826209   30972 fix.go:112] recreateIfNeeded on ha-244475: state=Running err=<nil>
	W0916 10:55:26.826227   30972 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 10:55:26.828194   30972 out.go:177] * Updating the running kvm2 "ha-244475" VM ...
	I0916 10:55:26.829434   30972 machine.go:93] provisionDockerMachine start ...
	I0916 10:55:26.829454   30972 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:55:26.829636   30972 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:55:26.831818   30972 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:55:26.832203   30972 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:55:26.832229   30972 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:55:26.832335   30972 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:55:26.832485   30972 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:55:26.832623   30972 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:55:26.832815   30972 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:55:26.832952   30972 main.go:141] libmachine: Using SSH client type: native
	I0916 10:55:26.833214   30972 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0916 10:55:26.833226   30972 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 10:55:26.949590   30972 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-244475
	
	I0916 10:55:26.949621   30972 main.go:141] libmachine: (ha-244475) Calling .GetMachineName
	I0916 10:55:26.949860   30972 buildroot.go:166] provisioning hostname "ha-244475"
	I0916 10:55:26.949890   30972 main.go:141] libmachine: (ha-244475) Calling .GetMachineName
	I0916 10:55:26.950063   30972 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:55:26.952380   30972 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:55:26.952734   30972 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:55:26.952753   30972 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:55:26.952878   30972 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:55:26.953026   30972 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:55:26.953179   30972 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:55:26.953277   30972 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:55:26.953424   30972 main.go:141] libmachine: Using SSH client type: native
	I0916 10:55:26.953604   30972 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0916 10:55:26.953631   30972 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-244475 && echo "ha-244475" | sudo tee /etc/hostname
	I0916 10:55:27.080039   30972 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-244475
	
	I0916 10:55:27.080065   30972 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:55:27.082983   30972 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:55:27.083315   30972 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:55:27.083348   30972 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:55:27.083498   30972 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:55:27.083702   30972 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:55:27.083860   30972 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:55:27.083987   30972 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:55:27.084118   30972 main.go:141] libmachine: Using SSH client type: native
	I0916 10:55:27.084290   30972 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0916 10:55:27.084306   30972 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-244475' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-244475/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-244475' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:55:27.198506   30972 main.go:141] libmachine: SSH cmd err, output: <nil>: 
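The two SSH commands above set the guest hostname and then patch /etc/hosts so the node resolves its own name; the following is a standalone sketch of that provisioning step, using only values visible in the log (run on the guest, not the CI host).
	NAME=ha-244475
	sudo hostname "$NAME" && echo "$NAME" | sudo tee /etc/hostname
	# Make /etc/hosts agree with the new hostname (idempotent, as in the log).
	if ! grep -q "127.0.1.1 $NAME" /etc/hosts; then
	  if grep -q '^127\.0\.1\.1\s' /etc/hosts; then
	    sudo sed -i "s/^127\.0\.1\.1\s.*/127.0.1.1 $NAME/" /etc/hosts
	  else
	    echo "127.0.1.1 $NAME" | sudo tee -a /etc/hosts
	  fi
	fi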
	I0916 10:55:27.198540   30972 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3851/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3851/.minikube}
	I0916 10:55:27.198564   30972 buildroot.go:174] setting up certificates
	I0916 10:55:27.198576   30972 provision.go:84] configureAuth start
	I0916 10:55:27.198589   30972 main.go:141] libmachine: (ha-244475) Calling .GetMachineName
	I0916 10:55:27.198928   30972 main.go:141] libmachine: (ha-244475) Calling .GetIP
	I0916 10:55:27.201486   30972 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:55:27.201801   30972 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:55:27.201828   30972 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:55:27.201996   30972 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:55:27.204108   30972 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:55:27.204442   30972 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:55:27.204460   30972 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:55:27.204596   30972 provision.go:143] copyHostCerts
	I0916 10:55:27.204625   30972 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem
	I0916 10:55:27.204679   30972 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem, removing ...
	I0916 10:55:27.204691   30972 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem
	I0916 10:55:27.204756   30972 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem (1679 bytes)
	I0916 10:55:27.204840   30972 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem
	I0916 10:55:27.204863   30972 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem, removing ...
	I0916 10:55:27.204870   30972 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem
	I0916 10:55:27.204894   30972 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem (1082 bytes)
	I0916 10:55:27.204946   30972 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem
	I0916 10:55:27.204963   30972 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem, removing ...
	I0916 10:55:27.204969   30972 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem
	I0916 10:55:27.204989   30972 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem (1123 bytes)
	I0916 10:55:27.205046   30972 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem org=jenkins.ha-244475 san=[127.0.0.1 192.168.39.19 ha-244475 localhost minikube]
	I0916 10:55:27.354112   30972 provision.go:177] copyRemoteCerts
	I0916 10:55:27.354183   30972 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:55:27.354211   30972 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:55:27.356907   30972 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:55:27.357277   30972 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:55:27.357326   30972 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:55:27.357563   30972 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:55:27.357775   30972 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:55:27.357952   30972 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:55:27.358089   30972 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/id_rsa Username:docker}
	I0916 10:55:27.448882   30972 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 10:55:27.448963   30972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 10:55:27.474982   30972 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 10:55:27.475080   30972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0916 10:55:27.500455   30972 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 10:55:27.500523   30972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 10:55:27.526148   30972 provision.go:87] duration metric: took 327.557601ms to configureAuth
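configureAuth above regenerates the machine's server certificate and copies the CA plus server key pair into /etc/docker on the guest; a quick sanity check of the result (paths from the log, commands generic) might look like:
	sudo ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem
	# The freshly generated server cert should chain to the copied CA.
	sudo openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem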
	I0916 10:55:27.526181   30972 buildroot.go:189] setting minikube options for container-runtime
	I0916 10:55:27.526442   30972 config.go:182] Loaded profile config "ha-244475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:55:27.526521   30972 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:55:27.529046   30972 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:55:27.529405   30972 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:55:27.529435   30972 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:55:27.529647   30972 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:55:27.529838   30972 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:55:27.529989   30972 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:55:27.530113   30972 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:55:27.530247   30972 main.go:141] libmachine: Using SSH client type: native
	I0916 10:55:27.530430   30972 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0916 10:55:27.530449   30972 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 10:57:02.164508   30972 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 10:57:02.164535   30972 machine.go:96] duration metric: took 1m35.335088287s to provisionDockerMachine
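Most of the 1m35s recorded above is the crio restart triggered after writing the sysconfig drop-in; a sketch for confirming on the guest that the option landed and the daemon came back (file path and variable name taken from the log):
	sudo cat /etc/sysconfig/crio.minikube    # expect CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	sudo systemctl is-active crio
	sudo systemctl show -p ActiveEnterTimestamp crio    # restart time should be recent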
	I0916 10:57:02.164546   30972 start.go:293] postStartSetup for "ha-244475" (driver="kvm2")
	I0916 10:57:02.164556   30972 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:57:02.164572   30972 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:57:02.164866   30972 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:57:02.164897   30972 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:57:02.168134   30972 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:57:02.168625   30972 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:57:02.168651   30972 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:57:02.168817   30972 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:57:02.168989   30972 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:57:02.169151   30972 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:57:02.169259   30972 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/id_rsa Username:docker}
	I0916 10:57:02.256537   30972 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:57:02.260839   30972 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 10:57:02.260862   30972 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3851/.minikube/addons for local assets ...
	I0916 10:57:02.260915   30972 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3851/.minikube/files for local assets ...
	I0916 10:57:02.260991   30972 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem -> 112032.pem in /etc/ssl/certs
	I0916 10:57:02.261000   30972 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem -> /etc/ssl/certs/112032.pem
	I0916 10:57:02.261082   30972 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 10:57:02.270820   30972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem --> /etc/ssl/certs/112032.pem (1708 bytes)
	I0916 10:57:02.295635   30972 start.go:296] duration metric: took 131.073628ms for postStartSetup
	I0916 10:57:02.295685   30972 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:57:02.295956   30972 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0916 10:57:02.295982   30972 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:57:02.298670   30972 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:57:02.298999   30972 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:57:02.299024   30972 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:57:02.299211   30972 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:57:02.299375   30972 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:57:02.299501   30972 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:57:02.299671   30972 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/id_rsa Username:docker}
	W0916 10:57:02.389443   30972 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0916 10:57:02.389469   30972 fix.go:56] duration metric: took 1m35.581694924s for fixHost
	I0916 10:57:02.389490   30972 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:57:02.392057   30972 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:57:02.392439   30972 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:57:02.392470   30972 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:57:02.392629   30972 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:57:02.392776   30972 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:57:02.392908   30972 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:57:02.393015   30972 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:57:02.393179   30972 main.go:141] libmachine: Using SSH client type: native
	I0916 10:57:02.393340   30972 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0916 10:57:02.393350   30972 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 10:57:02.510184   30972 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726484222.473678532
	
	I0916 10:57:02.510206   30972 fix.go:216] guest clock: 1726484222.473678532
	I0916 10:57:02.510213   30972 fix.go:229] Guest: 2024-09-16 10:57:02.473678532 +0000 UTC Remote: 2024-09-16 10:57:02.389477487 +0000 UTC m=+95.725645060 (delta=84.201045ms)
	I0916 10:57:02.510231   30972 fix.go:200] guest clock delta is within tolerance: 84.201045ms
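The clock check above runs `date +%s.%N` on the guest and compares it against the host's wall clock, accepting the ~84ms delta; a hedged sketch of the same comparison from the CI host (the SSH key path and IP are the ones shown in the log):
	HOST_TS=$(date +%s.%N)
	GUEST_TS=$(ssh -i /home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/id_rsa \
	  docker@192.168.39.19 'date +%s.%N')
	# Positive delta means the guest clock is ahead of the host.
	awk -v h="$HOST_TS" -v g="$GUEST_TS" 'BEGIN { printf "guest clock delta: %+.3fs\n", g - h }'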
	I0916 10:57:02.510236   30972 start.go:83] releasing machines lock for "ha-244475", held for 1m35.702488273s
	I0916 10:57:02.510254   30972 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:57:02.510463   30972 main.go:141] libmachine: (ha-244475) Calling .GetIP
	I0916 10:57:02.512917   30972 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:57:02.513256   30972 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:57:02.513282   30972 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:57:02.513458   30972 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:57:02.513943   30972 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:57:02.514110   30972 main.go:141] libmachine: (ha-244475) Calling .DriverName
	I0916 10:57:02.514212   30972 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:57:02.514259   30972 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:57:02.514307   30972 ssh_runner.go:195] Run: cat /version.json
	I0916 10:57:02.514330   30972 main.go:141] libmachine: (ha-244475) Calling .GetSSHHostname
	I0916 10:57:02.516813   30972 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:57:02.517174   30972 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:57:02.517198   30972 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:57:02.517217   30972 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:57:02.517310   30972 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:57:02.517466   30972 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:57:02.517593   30972 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:57:02.517644   30972 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:57:02.517668   30972 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:57:02.517752   30972 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/id_rsa Username:docker}
	I0916 10:57:02.517861   30972 main.go:141] libmachine: (ha-244475) Calling .GetSSHPort
	I0916 10:57:02.518000   30972 main.go:141] libmachine: (ha-244475) Calling .GetSSHKeyPath
	I0916 10:57:02.518139   30972 main.go:141] libmachine: (ha-244475) Calling .GetSSHUsername
	I0916 10:57:02.518271   30972 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/ha-244475/id_rsa Username:docker}
	I0916 10:57:02.599662   30972 ssh_runner.go:195] Run: systemctl --version
	I0916 10:57:02.627509   30972 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 10:57:02.794368   30972 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0916 10:57:02.813967   30972 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 10:57:02.814037   30972 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:57:02.830305   30972 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0916 10:57:02.830331   30972 start.go:495] detecting cgroup driver to use...
	I0916 10:57:02.830398   30972 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 10:57:02.862040   30972 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 10:57:02.876271   30972 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:57:02.876328   30972 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:57:02.891960   30972 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:57:02.906972   30972 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:57:03.082468   30972 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:57:03.238887   30972 docker.go:233] disabling docker service ...
	I0916 10:57:03.238957   30972 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:57:03.255448   30972 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:57:03.269296   30972 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:57:03.422807   30972 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:57:03.579104   30972 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:57:03.593679   30972 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:57:03.613965   30972 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 10:57:03.614035   30972 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:57:03.624991   30972 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 10:57:03.625064   30972 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:57:03.635494   30972 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:57:03.645973   30972 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:57:03.656522   30972 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:57:03.667399   30972 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:57:03.677897   30972 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:57:03.690114   30972 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 10:57:03.700791   30972 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:57:03.711218   30972 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:57:03.721085   30972 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:57:03.885802   30972 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 10:57:10.645398   30972 ssh_runner.go:235] Completed: sudo systemctl restart crio: (6.759562142s)
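Between the crictl.yaml write above and the 6.7s restart, the sed commands rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroupfs driver, conmon cgroup, the unprivileged-port sysctl); a minimal post-restart check that everything landed (sketch only):
	cat /etc/crictl.yaml    # runtime-endpoint: unix:///var/run/crio/crio.sock
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	sudo systemctl is-active crio && sudo crictl version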
	I0916 10:57:10.645428   30972 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 10:57:10.645488   30972 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 10:57:10.650679   30972 start.go:563] Will wait 60s for crictl version
	I0916 10:57:10.650749   30972 ssh_runner.go:195] Run: which crictl
	I0916 10:57:10.655131   30972 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:57:10.700157   30972 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0916 10:57:10.700236   30972 ssh_runner.go:195] Run: crio --version
	I0916 10:57:10.730774   30972 ssh_runner.go:195] Run: crio --version
	I0916 10:57:10.760995   30972 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0916 10:57:10.762391   30972 main.go:141] libmachine: (ha-244475) Calling .GetIP
	I0916 10:57:10.764788   30972 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:57:10.765146   30972 main.go:141] libmachine: (ha-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:d1:43", ip: ""} in network mk-ha-244475: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:26 +0000 UTC Type:0 Mac:52:54:00:31:d1:43 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-244475 Clientid:01:52:54:00:31:d1:43}
	I0916 10:57:10.765174   30972 main.go:141] libmachine: (ha-244475) DBG | domain ha-244475 has defined IP address 192.168.39.19 and MAC address 52:54:00:31:d1:43 in network mk-ha-244475
	I0916 10:57:10.765337   30972 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0916 10:57:10.770577   30972 kubeadm.go:883] updating cluster {Name:ha-244475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-244475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.110 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 10:57:10.770708   30972 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 10:57:10.770751   30972 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:57:10.814943   30972 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 10:57:10.814967   30972 crio.go:433] Images already preloaded, skipping extraction
	I0916 10:57:10.815031   30972 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:57:10.849887   30972 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 10:57:10.849910   30972 cache_images.go:84] Images are preloaded, skipping loading
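Because the preloaded image tarball was already extracted, both `crictl images` listings above come back complete and no image loading is needed; a spot check on the guest (the image names are the standard v1.31.1 control-plane set, listed here as an assumption):
	sudo crictl images | grep -E 'kube-apiserver|kube-controller-manager|kube-scheduler|kube-proxy|pause'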
	I0916 10:57:10.849917   30972 kubeadm.go:934] updating node { 192.168.39.19 8443 v1.31.1 crio true true} ...
	I0916 10:57:10.850091   30972 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-244475 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.19
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-244475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 10:57:10.850172   30972 ssh_runner.go:195] Run: crio config
	I0916 10:57:10.904510   30972 cni.go:84] Creating CNI manager for ""
	I0916 10:57:10.904533   30972 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0916 10:57:10.904544   30972 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 10:57:10.904562   30972 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.19 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-244475 NodeName:ha-244475 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.19"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.19 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 10:57:10.904725   30972 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.19
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-244475"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.19
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.19"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
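The rendered kubeadm/kubelet/kube-proxy config above is staged a few lines below as /var/tmp/minikube/kubeadm.yaml.new; assuming this kubeadm release ships the `config validate` subcommand, it can be sanity-checked on the guest with a sketch like:
	sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new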
	
	I0916 10:57:10.904747   30972 kube-vip.go:115] generating kube-vip config ...
	I0916 10:57:10.904790   30972 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0916 10:57:10.917423   30972 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0916 10:57:10.917527   30972 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
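Once kubelet launches this static pod, the elected control plane should hold the HA VIP from the manifest on eth0; a quick check using only values shown above (sketch only):
	ip addr show eth0 | grep 192.168.39.254    # VIP bound on the current kube-vip leader
	sudo crictl ps --name kube-vip             # static pod container is running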
	I0916 10:57:10.917583   30972 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:57:10.927376   30972 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:57:10.927457   30972 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0916 10:57:10.938707   30972 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0916 10:57:10.955906   30972 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:57:10.973043   30972 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0916 10:57:10.989862   30972 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0916 10:57:11.006814   30972 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0916 10:57:11.011852   30972 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:57:11.157952   30972 ssh_runner.go:195] Run: sudo systemctl start kubelet
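After the drop-in, unit file, kubeadm config and kube-vip manifest are staged and systemd is reloaded, kubelet is started; confirming the drop-in is in effect is a matter of (sketch, flag value from the log):
	systemctl cat kubelet | grep -- '--node-ip=192.168.39.19'
	systemctl is-active kubelet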
	I0916 10:57:11.174884   30972 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475 for IP: 192.168.39.19
	I0916 10:57:11.174913   30972 certs.go:194] generating shared ca certs ...
	I0916 10:57:11.174930   30972 certs.go:226] acquiring lock for ca certs: {Name:mkc5eb18b90c7d501da61b378999fd65b785fd5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:57:11.175071   30972 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key
	I0916 10:57:11.175117   30972 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key
	I0916 10:57:11.175127   30972 certs.go:256] generating profile certs ...
	I0916 10:57:11.175198   30972 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/client.key
	I0916 10:57:11.175224   30972 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key.e17a33c4
	I0916 10:57:11.175244   30972 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt.e17a33c4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.19 192.168.39.222 192.168.39.254]
	I0916 10:57:11.457836   30972 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt.e17a33c4 ...
	I0916 10:57:11.457868   30972 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt.e17a33c4: {Name:mkaded1790af58032a742404d0c521ea7ca24a2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:57:11.458043   30972 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key.e17a33c4 ...
	I0916 10:57:11.458054   30972 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key.e17a33c4: {Name:mk49289919fb9bd1bc4e4b2a51c25eaa7c974e3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:57:11.458123   30972 certs.go:381] copying /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt.e17a33c4 -> /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt
	I0916 10:57:11.458257   30972 certs.go:385] copying /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key.e17a33c4 -> /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key
	I0916 10:57:11.458379   30972 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.key
	I0916 10:57:11.458394   30972 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 10:57:11.458406   30972 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 10:57:11.458419   30972 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:57:11.458432   30972 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:57:11.458444   30972 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 10:57:11.458455   30972 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 10:57:11.458473   30972 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 10:57:11.458485   30972 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
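The regenerated apiserver certificate above was issued for the service IP, localhost, both control-plane node IPs and the HA VIP; those SANs can be read back from the profile copy on the CI host (sketch only):
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'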
	I0916 10:57:11.458529   30972 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem (1338 bytes)
	W0916 10:57:11.458558   30972 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203_empty.pem, impossibly tiny 0 bytes
	I0916 10:57:11.458567   30972 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:57:11.458591   30972 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem (1082 bytes)
	I0916 10:57:11.458615   30972 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:57:11.458636   30972 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem (1679 bytes)
	I0916 10:57:11.458671   30972 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem (1708 bytes)
	I0916 10:57:11.458695   30972 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem -> /usr/share/ca-certificates/11203.pem
	I0916 10:57:11.458708   30972 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem -> /usr/share/ca-certificates/112032.pem
	I0916 10:57:11.458722   30972 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:57:11.459235   30972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:57:11.486915   30972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:57:11.513299   30972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:57:11.539216   30972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:57:11.565143   30972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0916 10:57:11.591276   30972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 10:57:11.693277   30972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:57:11.878105   30972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/ha-244475/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 10:57:12.131173   30972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem --> /usr/share/ca-certificates/11203.pem (1338 bytes)
	I0916 10:57:12.248298   30972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem --> /usr/share/ca-certificates/112032.pem (1708 bytes)
	I0916 10:57:12.381238   30972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:57:12.590368   30972 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 10:57:12.849042   30972 ssh_runner.go:195] Run: openssl version
	I0916 10:57:12.926484   30972 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:57:13.011250   30972 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:57:13.024927   30972 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:57:13.024984   30972 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:57:13.072521   30972 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 10:57:13.099810   30972 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11203.pem && ln -fs /usr/share/ca-certificates/11203.pem /etc/ssl/certs/11203.pem"
	I0916 10:57:13.122240   30972 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11203.pem
	I0916 10:57:13.127453   30972 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11203.pem
	I0916 10:57:13.127520   30972 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11203.pem
	I0916 10:57:13.139211   30972 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11203.pem /etc/ssl/certs/51391683.0"
	I0916 10:57:13.152026   30972 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112032.pem && ln -fs /usr/share/ca-certificates/112032.pem /etc/ssl/certs/112032.pem"
	I0916 10:57:13.166872   30972 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112032.pem
	I0916 10:57:13.173825   30972 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112032.pem
	I0916 10:57:13.173890   30972 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112032.pem
	I0916 10:57:13.182326   30972 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112032.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 10:57:13.196786   30972 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:57:13.208500   30972 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0916 10:57:13.220045   30972 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0916 10:57:13.235178   30972 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0916 10:57:13.248102   30972 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0916 10:57:13.262748   30972 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0916 10:57:13.269327   30972 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0916 10:57:13.278428   30972 kubeadm.go:392] StartCluster: {Name:ha-244475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-244475 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.110 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:57:13.278542   30972 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 10:57:13.278592   30972 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 10:57:13.346561   30972 cri.go:89] found id: "b80ed81e3f28ec8775dcd3885c70d8f5007a2014997e592f8f4e740a62ac078e"
	I0916 10:57:13.346587   30972 cri.go:89] found id: "b985931be529027e0f20c52ce936628db26a39c32bb29fcffd406e052c83b105"
	I0916 10:57:13.346593   30972 cri.go:89] found id: "e393e2c3e91712506ad40fadf6adfbeb951eb9c35b92c0e16b7dd003dc6f4034"
	I0916 10:57:13.346598   30972 cri.go:89] found id: "65162abeb86a3a1739a5623a7817ac07aac37e6b7b477492993f1d50a0429276"
	I0916 10:57:13.346602   30972 cri.go:89] found id: "4a08ddefb7e0fe6b934866af273b3dfce1bbf395ab649d1e9ddf610180effeb3"
	I0916 10:57:13.346606   30972 cri.go:89] found id: "f2873e375d45f4c998033d55be1a7fdebdba577bff3bea729901a3e41eefa4be"
	I0916 10:57:13.346611   30972 cri.go:89] found id: "5661ec7e57a66f5aaaaf98d94c923bbbd592384c3ef24308461ca1e4380b8bbf"
	I0916 10:57:13.346614   30972 cri.go:89] found id: "392523616ed483d70489493ff810c7c00e2c3679d91ed33cace5c42354ac85af"
	I0916 10:57:13.346618   30972 cri.go:89] found id: "91491eba4d33b0a26fd738ad6f63f6deaf5b4c730f037eaf6dc4908905fde9f9"
	I0916 10:57:13.346626   30972 cri.go:89] found id: "39bee169a2affe6d509924bc94b0ee90196af6c589d0fe7eb3f18c48559c446d"
	I0916 10:57:13.346630   30972 cri.go:89] found id: "c7904b48af0d517bf275eac462d37e6f93a48771be3452312f485bacfe1b3d19"
	I0916 10:57:13.346634   30972 cri.go:89] found id: "eff3d4b6ef1bb809c70e9089cf36c79f9cbea99823ed2b5055995b0e0abd6b23"
	I0916 10:57:13.346638   30972 cri.go:89] found id: "ba907061155c7d07bb2ece7dd40deebec6a7b385674f721aa5faa111ad73a577"
	I0916 10:57:13.346646   30972 cri.go:89] found id: "6dd41088c822947b73a621121e25131324d1b60468f8da2736a5e5abd945b74f"
	I0916 10:57:13.346653   30972 cri.go:89] found id: "3a6f1aac71418bcb033aee0c5b867eeb6bf5323876a35c8db9ce1c1f13caf83e"
	I0916 10:57:13.346661   30972 cri.go:89] found id: "1da90c534a1eee0538fdeec3079b247aafe60da09cbf760ae3433480a66cc95a"
	I0916 10:57:13.346665   30972 cri.go:89] found id: "268d2527b9c98518a83f871d1bd34bfb7b48d7a8bc4732dac191726f6e049791"
	I0916 10:57:13.346670   30972 cri.go:89] found id: "2ef7bc6ba17087872de21708e4087c86e92125885155c397d03bed2863bb52ed"
	I0916 10:57:13.346677   30972 cri.go:89] found id: "c692c6a18e99d2673e412ffd521c76f2a0da04c77d38fec5ed2cb8cb05c3e3f6"
	I0916 10:57:13.346681   30972 cri.go:89] found id: "6c0110ceab6a6b07ad42ced3733baf8bab2d1cafffcbb49d7f182984734ea704"
	I0916 10:57:13.346688   30972 cri.go:89] found id: "034030626ec0248410f35cc75f1a050ff672de87669017788a692eecfb974eb3"
	I0916 10:57:13.346693   30972 cri.go:89] found id: "7f78c5e4a3a25c53eb3051665aed0aed847ccf5bc052d51eb080f44bbb956465"
	I0916 10:57:13.346700   30972 cri.go:89] found id: "ac63170bf5bb3e138ee2902ee9d3245bf86aad0f7cd971de1989c1c8d4b08913"
	I0916 10:57:13.346707   30972 cri.go:89] found id: "6e6d69b26d5c1088c047248a212bd53f5e13828d4e0a7c2156e0ddc88ac76ccf"
	I0916 10:57:13.346721   30972 cri.go:89] found id: "308650af833f609d658eacf1b2b1c8baa29f17b904e878b9f27e62fe4fb823d3"
	I0916 10:57:13.346727   30972 cri.go:89] found id: ""
	I0916 10:57:13.346778   30972 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Sep 16 10:59:56 ha-244475 crio[6426]: time="2024-09-16 10:59:56.540744454Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:6d77b2601aaa01f099158b0ae18d96940a6fa4458999b96355d38545ed62bcf4,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1726484233879839320,StartedAt:1726484233965574727,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip:v0.8.0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d14d8f4abb76f867ab3a64246ef25cb,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminat
ionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/7d14d8f4abb76f867ab3a64246ef25cb/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/7d14d8f4abb76f867ab3a64246ef25cb/containers/kube-vip/760a758a,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/admin.conf,HostPath:/etc/kubernetes/admin.conf,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-vip-ha-244475_7d14d8f4abb76f867ab3a64246ef25cb/kube-vip/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:1000
,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=5b08fd13-ed7d-460d-945a-bc550cf7b1a1 name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 16 10:59:56 ha-244475 crio[6426]: time="2024-09-16 10:59:56.541740128Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:32d3a5cdb5fe3b2f51ebd673b58afaf704a6afc7561267fc0d30b43aee746851,Verbose:false,}" file="otel-collector/interceptors.go:62" id=afd66b4c-cc98-43eb-a72d-b99c14d91351 name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 16 10:59:56 ha-244475 crio[6426]: time="2024-09-16 10:59:56.541889137Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:32d3a5cdb5fe3b2f51ebd673b58afaf704a6afc7561267fc0d30b43aee746851,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1726484233866727689,StartedAt:1726484233925368583,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-proxy:v1.31.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-crttt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c8cad04-2c64-42f9-85e2-5e4fbfe7961d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.
terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/run/xtables.lock,HostPath:/run/xtables.lock,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/lib/modules,HostPath:/lib/modules,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/0c8cad04-2c64-42f9-85e2-5e4fbfe7961d/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/0c8cad04-2c64-42f9-85e2-5e4fbfe7961d/containers/kube-proxy/7178bc40,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/kube-proxy,HostPath:/var/lib/kub
elet/pods/0c8cad04-2c64-42f9-85e2-5e4fbfe7961d/volumes/kubernetes.io~configmap/kube-proxy,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/0c8cad04-2c64-42f9-85e2-5e4fbfe7961d/volumes/kubernetes.io~projected/kube-api-access-sc2xv,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-proxy-crttt_0c8cad04-2c64-42f9-85e2-5e4fbfe7961d/kube-proxy/2.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collecto
r/interceptors.go:74" id=afd66b4c-cc98-43eb-a72d-b99c14d91351 name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 16 10:59:56 ha-244475 crio[6426]: time="2024-09-16 10:59:56.542488509Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:b80ed81e3f28ec8775dcd3885c70d8f5007a2014997e592f8f4e740a62ac078e,Verbose:false,}" file="otel-collector/interceptors.go:62" id=e2ae387a-393f-43e8-8e76-42f9e9831b17 name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 16 10:59:56 ha-244475 crio[6426]: time="2024-09-16 10:59:56.542639658Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:b80ed81e3f28ec8775dcd3885c70d8f5007a2014997e592f8f4e740a62ac078e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1726484233145027860,StartedAt:1726484233197784742,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/coredns/coredns:v1.11.3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lzrg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51962d07-f38a-4db3-86ee-af3d954dbec6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"container
Port\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/coredns,HostPath:/var/lib/kubelet/pods/51962d07-f38a-4db3-86ee-af3d954dbec6/volumes/kubernetes.io~configmap/config-volume,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/51962d07-f38a-4db3-86ee-af3d954dbec6/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/51962d07-f38a-4db3-86ee-af3d954dbec6/containers/coredns/546aa45c,Readonly:false,SelinuxRelabel:false,Propagation:PROPAG
ATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/51962d07-f38a-4db3-86ee-af3d954dbec6/volumes/kubernetes.io~projected/kube-api-access-vdkkf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_coredns-7c65d6cfc9-lzrg2_51962d07-f38a-4db3-86ee-af3d954dbec6/coredns/2.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:178257920,OomScoreAdj:967,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:178257920,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=e2ae387a-393f-43e8-8e76-42f9e9831b17 name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 16 10:59:56 ha-244475 crio[6426]: time="2024-09-16 10:59:56.543041888Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:b985931be529027e0f20c52ce936628db26a39c32bb29fcffd406e052c83b105,Verbose:false,}" file="otel-collector/interceptors.go:62" id=75993649-3043-402a-8b12-d9e26eb3642f name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 16 10:59:56 ha-244475 crio[6426]: time="2024-09-16 10:59:56.543176006Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:b985931be529027e0f20c52ce936628db26a39c32bb29fcffd406e052c83b105,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1726484232987780909,StartedAt:1726484233054646714,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:docker.io/kindest/kindnetd:v20240813-c6f155d6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7v2cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 764ade4d-cbcd-42b8-9d68-b4ed502de9eb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/run/xtables.lock,HostPath:/run/xtables.lock,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/lib/modules,HostPath:/lib/modules,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/764ade4d-cbcd-42b8-9d68-b4ed502de9eb/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/764ade4d-cbcd-42b8-9d68-b4ed502de9eb/containers/kindnet-cni/13347b09,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/cni/net.d,HostPath:/etc/c
ni/net.d,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/764ade4d-cbcd-42b8-9d68-b4ed502de9eb/volumes/kubernetes.io~projected/kube-api-access-kzw6d,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kindnet-7v2cl_764ade4d-cbcd-42b8-9d68-b4ed502de9eb/kindnet-cni/2.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:10000,CpuShares:102,MemoryLimitInBytes:52428800,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:52428800,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=75993649-3043-402a-8b12-d9e26eb3642f
name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 16 10:59:56 ha-244475 crio[6426]: time="2024-09-16 10:59:56.543561129Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:e393e2c3e91712506ad40fadf6adfbeb951eb9c35b92c0e16b7dd003dc6f4034,Verbose:false,}" file="otel-collector/interceptors.go:62" id=4d086f38-aa96-4385-b9aa-fb58034dc0cc name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 16 10:59:56 ha-244475 crio[6426]: time="2024-09-16 10:59:56.543651452Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:e393e2c3e91712506ad40fadf6adfbeb951eb9c35b92c0e16b7dd003dc6f4034,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1726484232736593072,StartedAt:1726484232994621589,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/coredns/coredns:v1.11.3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m8fd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc549709-ddc0-4684-b377-46d33ef8f03d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"container
Port\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/coredns,HostPath:/var/lib/kubelet/pods/fc549709-ddc0-4684-b377-46d33ef8f03d/volumes/kubernetes.io~configmap/config-volume,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/fc549709-ddc0-4684-b377-46d33ef8f03d/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/fc549709-ddc0-4684-b377-46d33ef8f03d/containers/coredns/18fcd86b,Readonly:false,SelinuxRelabel:false,Propagation:PROPAG
ATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/fc549709-ddc0-4684-b377-46d33ef8f03d/volumes/kubernetes.io~projected/kube-api-access-tqb26,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_coredns-7c65d6cfc9-m8fd7_fc549709-ddc0-4684-b377-46d33ef8f03d/coredns/2.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:178257920,OomScoreAdj:967,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:178257920,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=4d086f38-aa96-4385-b9aa-fb58034dc0cc name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 16 10:59:56 ha-244475 crio[6426]: time="2024-09-16 10:59:56.544050290Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:65162abeb86a3a1739a5623a7817ac07aac37e6b7b477492993f1d50a0429276,Verbose:false,}" file="otel-collector/interceptors.go:62" id=db8379bc-2654-46b3-9835-633c0391b2c4 name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 16 10:59:56 ha-244475 crio[6426]: time="2024-09-16 10:59:56.544146457Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:65162abeb86a3a1739a5623a7817ac07aac37e6b7b477492993f1d50a0429276,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1726484232519669914,StartedAt:1726484232828665899,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/etcd:3.5.15-0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 520edd0e46592c17928a302783a221a2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolic
y: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/520edd0e46592c17928a302783a221a2/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/520edd0e46592c17928a302783a221a2/containers/etcd/973e6e99,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/etcd,HostPath:/var/lib/minikube/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs/etcd,HostPath:/var/lib/minikube/certs/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_etcd-ha-244475_520ed
d0e46592c17928a302783a221a2/etcd/2.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=db8379bc-2654-46b3-9835-633c0391b2c4 name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 16 10:59:56 ha-244475 crio[6426]: time="2024-09-16 10:59:56.544596536Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:4a08ddefb7e0fe6b934866af273b3dfce1bbf395ab649d1e9ddf610180effeb3,Verbose:false,}" file="otel-collector/interceptors.go:62" id=3f773bd5-bb46-419c-b82e-c1600fc00597 name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 16 10:59:56 ha-244475 crio[6426]: time="2024-09-16 10:59:56.544717610Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:4a08ddefb7e0fe6b934866af273b3dfce1bbf395ab649d1e9ddf610180effeb3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1726484232487106492,StartedAt:1726484232739567220,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-scheduler:v1.31.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caad457f3675fcf5fa9c2e121ebd3a2a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/caad457f3675fcf5fa9c2e121ebd3a2a/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/caad457f3675fcf5fa9c2e121ebd3a2a/containers/kube-scheduler/0b4afa93,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/scheduler.conf,HostPath:/etc/kubernetes/scheduler.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-scheduler-ha-244475_caad457f3675fcf5fa9c2e121ebd3a2a/kube-scheduler/2.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,
CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=3f773bd5-bb46-419c-b82e-c1600fc00597 name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 16 10:59:56 ha-244475 crio[6426]: time="2024-09-16 10:59:56.554211595Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=45688044-9983-483c-99bb-d59d0ce5c80b name=/runtime.v1.RuntimeService/Version
	Sep 16 10:59:56 ha-244475 crio[6426]: time="2024-09-16 10:59:56.554270443Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=45688044-9983-483c-99bb-d59d0ce5c80b name=/runtime.v1.RuntimeService/Version
	Sep 16 10:59:56 ha-244475 crio[6426]: time="2024-09-16 10:59:56.555460850Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0f54d740-162a-4371-8823-1940ace925b2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:59:56 ha-244475 crio[6426]: time="2024-09-16 10:59:56.556043772Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484396556022261,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0f54d740-162a-4371-8823-1940ace925b2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 10:59:56 ha-244475 crio[6426]: time="2024-09-16 10:59:56.556786409Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a0adc8d4-24f4-4a93-8771-50cc4719d125 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:59:56 ha-244475 crio[6426]: time="2024-09-16 10:59:56.556868096Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a0adc8d4-24f4-4a93-8771-50cc4719d125 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:59:56 ha-244475 crio[6426]: time="2024-09-16 10:59:56.557278162Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1ab1af10083183e28afb7804c1d99337408bcd3029f430da8ac46ca54db45222,PodSandboxId:08c21e88e3af3b033860ff203352e3b9150bfadd0a9bcd9edfb198989a9f8dc6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:5,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726484336599707228,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcc439ebdfb1c8eb0ac4d211479d24ca,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f654d72c48c6eb63e165020fa04c0f9b664cab6544315564793ef9925da7b4fd,PodSandboxId:1997ae70c139b8271c12dd188372c8969463d6e3d5f91251a6ddd5252fc4d3d5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:5,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726484336606834037,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0485b752bb66b84c639fb8d5b648be4a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 5,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:126a5b17b3e86e7df60cd69e11695d9d0f9f52d0712bffb662c8a47a1eda2850,PodSandboxId:d55d8866b721e1cd1eb49dd06d2e4d8ae0086f42fa930eb71075d4436de267e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726484335595445368,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e1264f7-2197-4821-8238-82fac849b145,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91997aca89e1801b8c908ae4cc12f95b1a824f180f958e2161affcf348a6957e,PodSandboxId:6ed7e29d8fde2ebf9ff92e687dd63beb8b578a0fac8017d67902fd2d0be10b8f,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726484265665115431,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-d4m5s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c479ead-4e77-41ca-9e2e-5cd7dc781761,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d77b2601aaa01f099158b0ae18d96940a6fa4458999b96355d38545ed62bcf4,PodSandboxId:a76f8f6b93994eb84df8da539b5249c316a255fe06b223346590e0c7f07d8f24,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726484232781795578,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d14d8f4abb76f867ab3a64246ef25cb,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolic
y: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32d3a5cdb5fe3b2f51ebd673b58afaf704a6afc7561267fc0d30b43aee746851,PodSandboxId:1b74898ff1d3ca0b1aa21328ecdccde523718bf655d955b946286edf573dc838,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726484232893786709,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-crttt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c8cad04-2c64-42f9-85e2-5e4fbfe7961d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:b80ed81e3f28ec8775dcd3885c70d8f5007a2014997e592f8f4e740a62ac078e,PodSandboxId:7c0712ccc8230e47ce792a9f506a06bdfae1beb7440893c077689abcd2201378,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726484232700490066,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lzrg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51962d07-f38a-4db3-86ee-af3d954dbec6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TC
P\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b985931be529027e0f20c52ce936628db26a39c32bb29fcffd406e052c83b105,PodSandboxId:227aa3b185b8cbc99f3b6ac5c1ca9cd851ecd68b5473295c9ce6e8810814843b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726484232527565121,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7v2cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 764ade4d-cbcd-42b8-9d68-b4ed502de9eb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.rest
artCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e393e2c3e91712506ad40fadf6adfbeb951eb9c35b92c0e16b7dd003dc6f4034,PodSandboxId:c7e85d47a7c967b2313cc6d7fff9fc75be4932b32c111a11bb49533908801fcf,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726484232453373586,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m8fd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc549709-ddc0-4684-b377-46d33ef8f03d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65162abeb86a3a1739a5623a7817ac07aac37e6b7b477492993f1d50a0429276,PodSandboxId:a9ee368aed1403f2adb29161935f888e170dd23d9ccaa2d0bc3bd557aa2f7df8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726484232395984133,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 520edd0e46592c17928a302783a221a2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a08ddefb7e0fe6b934866af273b3dfce1bbf395ab649d1e9ddf610180effeb3,PodSandboxId:8de6aa798726b12a5d1450815eede9f276c20d04d652ad4478144652e6eac058,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726484232310621366,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caad457f3
675fcf5fa9c2e121ebd3a2a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5661ec7e57a66f5aaaaf98d94c923bbbd592384c3ef24308461ca1e4380b8bbf,PodSandboxId:1997ae70c139b8271c12dd188372c8969463d6e3d5f91251a6ddd5252fc4d3d5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726484232162431001,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0485
b752bb66b84c639fb8d5b648be4a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2873e375d45f4c998033d55be1a7fdebdba577bff3bea729901a3e41eefa4be,PodSandboxId:08c21e88e3af3b033860ff203352e3b9150bfadd0a9bcd9edfb198989a9f8dc6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726484232181239561,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcc439ebdfb1c8eb0ac4d21147
9d24ca,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f00a03475d073179332fbe79f2ed10d286e0c6bedf1861a8230d9919cde4a27,PodSandboxId:1eaacb088bf941e279ad42af3f72b19bf2790643fcde4457d0b5ff745a4806e5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726483823856971748,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-d4m5s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c479ead-4e77-41ca-9e2e-5cd7dc781761,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eff3d4b6ef1bb809c70e9089cf36c79f9cbea99823ed2b5055995b0e0abd6b23,PodSandboxId:6203d6a2f83f4a08dbecdceab914bea8099f1c8fa17a9e052e272e18cde86fb1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_EXITED,CreatedAt:1726483805692622503,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d14d8f4abb76f867ab3a64246ef25cb,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07
,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6dd41088c822947b73a621121e25131324d1b60468f8da2736a5e5abd945b74f,PodSandboxId:9ec606e5b45f08386cc4365f26f74fdcf01f39702536a53491cddf4af70506d6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726483790773411505,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7v2cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 764ade4d-cbcd-42b8-9d68-b4ed502de9eb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCou
nt: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba907061155c7d07bb2ece7dd40deebec6a7b385674f721aa5faa111ad73a577,PodSandboxId:8cacdb30939e87fda706d3726485b9fd18466765801925d01faf0c5093af92aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726483790780411020,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lzrg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51962d07-f38a-4db3-86ee-af3d954dbec6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:268d2527b9c98518a83f871d1bd34bfb7b48d7a8bc4732dac191726f6e049791,PodSandboxId:194b56870a94accad5384a0c2f540bdec2d56f4cb27e5d45522ec884b22763ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726483790578442977,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 520edd0e46592c17928a302783a221a2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ef7bc6ba17087872de21708e4087c86e92125885155c397d03bed2863bb52ed,PodSandboxId:c308ac1286c4c4d42beec7706420c33bee377b409e802753a17cd320bfcd7339,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726483790538356809,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-crttt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c8cad04-2c64-42f9-85e2-5e4fbfe796
1d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a6f1aac71418bcb033aee0c5b867eeb6bf5323876a35c8db9ce1c1f13caf83e,PodSandboxId:2305599c1317db9a583219dd7c98f87d6b8e759777a72ad229b6e60545f20bb2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726483790689220183,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m8fd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc549709-ddc0-4684-b377-46d33ef8f03d,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c0110ceab6a6b07ad42ced3733baf8bab2d1cafffcbb49d7f182984734ea704,PodSandboxId:bd9f73d3e8d5591b4d001266e01c13cbae68b0ecd62f9cea242eeff6615c6274,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726483790366771309,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caad457f3675fcf5fa9c2e121ebd3a2a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a0adc8d4-24f4-4a93-8771-50cc4719d125 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:59:56 ha-244475 crio[6426]: time="2024-09-16 10:59:56.582789405Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=114e4ae3-556f-4470-8c51-3b649d788666 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 16 10:59:56 ha-244475 crio[6426]: time="2024-09-16 10:59:56.583176490Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:6ed7e29d8fde2ebf9ff92e687dd63beb8b578a0fac8017d67902fd2d0be10b8f,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-d4m5s,Uid:6c479ead-4e77-41ca-9e2e-5cd7dc781761,Namespace:default,Attempt:2,},State:SANDBOX_READY,CreatedAt:1726484265460428130,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-d4m5s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c479ead-4e77-41ca-9e2e-5cd7dc781761,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-16T10:41:27.480703141Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d55d8866b721e1cd1eb49dd06d2e4d8ae0086f42fa930eb71075d4436de267e5,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:2e1264f7-2197-4821-8238-82fac849b145,Namespace:kube-system,Attempt:2,},State:SANDB
OX_READY,CreatedAt:1726484238604190466,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e1264f7-2197-4821-8238-82fac849b145,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"t
ype\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-16T10:39:09.499691578Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c7e85d47a7c967b2313cc6d7fff9fc75be4932b32c111a11bb49533908801fcf,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-m8fd7,Uid:fc549709-ddc0-4684-b377-46d33ef8f03d,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1726484231845153359,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-m8fd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc549709-ddc0-4684-b377-46d33ef8f03d,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-16T10:39:09.487465959Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7c0712ccc8230e47ce792a9f506a06bdfae1beb7440893c077689abcd2201378,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-lzrg2,Uid:51962d07-f38a-4db3-86ee-af3d954dbec6,Namespace:kube-system,Att
empt:2,},State:SANDBOX_READY,CreatedAt:1726484231782921148,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-lzrg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51962d07-f38a-4db3-86ee-af3d954dbec6,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-16T10:39:09.496113367Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8de6aa798726b12a5d1450815eede9f276c20d04d652ad4478144652e6eac058,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-244475,Uid:caad457f3675fcf5fa9c2e121ebd3a2a,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1726484231772354459,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caad457f3675fcf5fa9c2e121ebd3a2a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.h
ash: caad457f3675fcf5fa9c2e121ebd3a2a,kubernetes.io/config.seen: 2024-09-16T10:38:52.514064274Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:227aa3b185b8cbc99f3b6ac5c1ca9cd851ecd68b5473295c9ce6e8810814843b,Metadata:&PodSandboxMetadata{Name:kindnet-7v2cl,Uid:764ade4d-cbcd-42b8-9d68-b4ed502de9eb,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1726484231758671397,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-7v2cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 764ade4d-cbcd-42b8-9d68-b4ed502de9eb,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-16T10:38:57.245484492Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a9ee368aed1403f2adb29161935f888e170dd23d9ccaa2d0bc3bd557aa2f7df8,Metadata:&PodSandboxMetadata{Name:etcd-ha-244475,Uid:520edd0e46592c17928a302783a221a2,Namespace
:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1726484231751691301,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 520edd0e46592c17928a302783a221a2,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.19:2379,kubernetes.io/config.hash: 520edd0e46592c17928a302783a221a2,kubernetes.io/config.seen: 2024-09-16T10:38:52.514057254Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:08c21e88e3af3b033860ff203352e3b9150bfadd0a9bcd9edfb198989a9f8dc6,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-244475,Uid:dcc439ebdfb1c8eb0ac4d211479d24ca,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1726484231740346102,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-244475,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: dcc439ebdfb1c8eb0ac4d211479d24ca,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.19:8443,kubernetes.io/config.hash: dcc439ebdfb1c8eb0ac4d211479d24ca,kubernetes.io/config.seen: 2024-09-16T10:38:52.514061824Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1b74898ff1d3ca0b1aa21328ecdccde523718bf655d955b946286edf573dc838,Metadata:&PodSandboxMetadata{Name:kube-proxy-crttt,Uid:0c8cad04-2c64-42f9-85e2-5e4fbfe7961d,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1726484231688775965,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-crttt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c8cad04-2c64-42f9-85e2-5e4fbfe7961d,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-16T10:38:57.241580111Z,kubernetes.io/config.source: ap
i,},RuntimeHandler:,},&PodSandbox{Id:1997ae70c139b8271c12dd188372c8969463d6e3d5f91251a6ddd5252fc4d3d5,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-244475,Uid:0485b752bb66b84c639fb8d5b648be4a,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1726484231669465404,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0485b752bb66b84c639fb8d5b648be4a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 0485b752bb66b84c639fb8d5b648be4a,kubernetes.io/config.seen: 2024-09-16T10:38:52.514063070Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a76f8f6b93994eb84df8da539b5249c316a255fe06b223346590e0c7f07d8f24,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-244475,Uid:7d14d8f4abb76f867ab3a64246ef25cb,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726484231662995314,Labels
:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d14d8f4abb76f867ab3a64246ef25cb,},Annotations:map[string]string{kubernetes.io/config.hash: 7d14d8f4abb76f867ab3a64246ef25cb,kubernetes.io/config.seen: 2024-09-16T10:49:43.326419700Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=114e4ae3-556f-4470-8c51-3b649d788666 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 16 10:59:56 ha-244475 crio[6426]: time="2024-09-16 10:59:56.584830601Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2b113dc3-7f07-498e-8ba9-a9c8858558be name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:59:56 ha-244475 crio[6426]: time="2024-09-16 10:59:56.584895785Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2b113dc3-7f07-498e-8ba9-a9c8858558be name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 10:59:56 ha-244475 crio[6426]: time="2024-09-16 10:59:56.585113690Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1ab1af10083183e28afb7804c1d99337408bcd3029f430da8ac46ca54db45222,PodSandboxId:08c21e88e3af3b033860ff203352e3b9150bfadd0a9bcd9edfb198989a9f8dc6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:5,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726484336599707228,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcc439ebdfb1c8eb0ac4d211479d24ca,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f654d72c48c6eb63e165020fa04c0f9b664cab6544315564793ef9925da7b4fd,PodSandboxId:1997ae70c139b8271c12dd188372c8969463d6e3d5f91251a6ddd5252fc4d3d5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:5,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726484336606834037,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0485b752bb66b84c639fb8d5b648be4a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 5,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91997aca89e1801b8c908ae4cc12f95b1a824f180f958e2161affcf348a6957e,PodSandboxId:6ed7e29d8fde2ebf9ff92e687dd63beb8b578a0fac8017d67902fd2d0be10b8f,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726484265665115431,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-d4m5s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c479ead-4e77-41ca-9e2e-5cd7dc781761,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination
-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d77b2601aaa01f099158b0ae18d96940a6fa4458999b96355d38545ed62bcf4,PodSandboxId:a76f8f6b93994eb84df8da539b5249c316a255fe06b223346590e0c7f07d8f24,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726484232781795578,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d14d8f4abb76f867ab3a64246ef25cb,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminatio
nMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32d3a5cdb5fe3b2f51ebd673b58afaf704a6afc7561267fc0d30b43aee746851,PodSandboxId:1b74898ff1d3ca0b1aa21328ecdccde523718bf655d955b946286edf573dc838,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726484232893786709,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-crttt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c8cad04-2c64-42f9-85e2-5e4fbfe7961d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernete
s.pod.terminationGracePeriod: 30,},},&Container{Id:b80ed81e3f28ec8775dcd3885c70d8f5007a2014997e592f8f4e740a62ac078e,PodSandboxId:7c0712ccc8230e47ce792a9f506a06bdfae1beb7440893c077689abcd2201378,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726484232700490066,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lzrg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51962d07-f38a-4db3-86ee-af3d954dbec6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"pr
otocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b985931be529027e0f20c52ce936628db26a39c32bb29fcffd406e052c83b105,PodSandboxId:227aa3b185b8cbc99f3b6ac5c1ca9cd851ecd68b5473295c9ce6e8810814843b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726484232527565121,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7v2cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 764ade4d-cbcd-42b8-9d68-b4ed502de9eb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.c
ontainer.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e393e2c3e91712506ad40fadf6adfbeb951eb9c35b92c0e16b7dd003dc6f4034,PodSandboxId:c7e85d47a7c967b2313cc6d7fff9fc75be4932b32c111a11bb49533908801fcf,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726484232453373586,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m8fd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc549709-ddc0-4684-b377-46d33ef8f03d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dn
s\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65162abeb86a3a1739a5623a7817ac07aac37e6b7b477492993f1d50a0429276,PodSandboxId:a9ee368aed1403f2adb29161935f888e170dd23d9ccaa2d0bc3bd557aa2f7df8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726484232395984133,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-244475,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 520edd0e46592c17928a302783a221a2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a08ddefb7e0fe6b934866af273b3dfce1bbf395ab649d1e9ddf610180effeb3,PodSandboxId:8de6aa798726b12a5d1450815eede9f276c20d04d652ad4478144652e6eac058,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726484232310621366,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-244475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: caad457f3675fcf5fa9c2e121ebd3a2a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2b113dc3-7f07-498e-8ba9-a9c8858558be name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	f654d72c48c6e       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   About a minute ago   Running             kube-controller-manager   5                   1997ae70c139b       kube-controller-manager-ha-244475
	1ab1af1008318       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   About a minute ago   Running             kube-apiserver            5                   08c21e88e3af3       kube-apiserver-ha-244475
	126a5b17b3e86       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   About a minute ago   Exited              storage-provisioner       5                   d55d8866b721e       storage-provisioner
	91997aca89e18       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a   2 minutes ago        Running             busybox                   2                   6ed7e29d8fde2       busybox-7dff88458-d4m5s
	32d3a5cdb5fe3       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   2 minutes ago        Running             kube-proxy                2                   1b74898ff1d3c       kube-proxy-crttt
	6d77b2601aaa0       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12   2 minutes ago        Running             kube-vip                  1                   a76f8f6b93994       kube-vip-ha-244475
	b80ed81e3f28e       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   2 minutes ago        Running             coredns                   2                   7c0712ccc8230       coredns-7c65d6cfc9-lzrg2
	b985931be5290       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f   2 minutes ago        Running             kindnet-cni               2                   227aa3b185b8c       kindnet-7v2cl
	e393e2c3e9171       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   2 minutes ago        Running             coredns                   2                   c7e85d47a7c96       coredns-7c65d6cfc9-m8fd7
	65162abeb86a3       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   2 minutes ago        Running             etcd                      2                   a9ee368aed140       etcd-ha-244475
	4a08ddefb7e0f       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   2 minutes ago        Running             kube-scheduler            2                   8de6aa798726b       kube-scheduler-ha-244475
	f2873e375d45f       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   2 minutes ago        Exited              kube-apiserver            4                   08c21e88e3af3       kube-apiserver-ha-244475
	5661ec7e57a66       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   2 minutes ago        Exited              kube-controller-manager   4                   1997ae70c139b       kube-controller-manager-ha-244475
	2f00a03475d07       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a   9 minutes ago        Exited              busybox                   1                   1eaacb088bf94       busybox-7dff88458-d4m5s
	eff3d4b6ef1bb       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12   9 minutes ago        Exited              kube-vip                  0                   6203d6a2f83f4       kube-vip-ha-244475
	ba907061155c7       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   10 minutes ago       Exited              coredns                   1                   8cacdb30939e8       coredns-7c65d6cfc9-lzrg2
	6dd41088c8229       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f   10 minutes ago       Exited              kindnet-cni               1                   9ec606e5b45f0       kindnet-7v2cl
	3a6f1aac71418       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   10 minutes ago       Exited              coredns                   1                   2305599c1317d       coredns-7c65d6cfc9-m8fd7
	268d2527b9c98       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   10 minutes ago       Exited              etcd                      1                   194b56870a94a       etcd-ha-244475
	2ef7bc6ba1708       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   10 minutes ago       Exited              kube-proxy                1                   c308ac1286c4c       kube-proxy-crttt
	6c0110ceab6a6       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   10 minutes ago       Exited              kube-scheduler            1                   bd9f73d3e8d55       kube-scheduler-ha-244475
	
	
	==> coredns [3a6f1aac71418bcb033aee0c5b867eeb6bf5323876a35c8db9ce1c1f13caf83e] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:48952->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:48952->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b80ed81e3f28ec8775dcd3885c70d8f5007a2014997e592f8f4e740a62ac078e] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [ba907061155c7d07bb2ece7dd40deebec6a7b385674f721aa5faa111ad73a577] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:57916->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:57916->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:34986->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:34986->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e393e2c3e91712506ad40fadf6adfbeb951eb9c35b92c0e16b7dd003dc6f4034] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: Trace[1062757933]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Sep-2024 10:57:45.443) (total time: 11443ms):
	Trace[1062757933]: ---"Objects listed" error:Unauthorized 11442ms (10:57:56.886)
	Trace[1062757933]: [11.443522678s] [11.443522678s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-244475
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-244475
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=ha-244475
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_38_53_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:38:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-244475
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:59:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:59:03 +0000   Mon, 16 Sep 2024 10:53:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:59:03 +0000   Mon, 16 Sep 2024 10:53:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:59:03 +0000   Mon, 16 Sep 2024 10:53:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:59:03 +0000   Mon, 16 Sep 2024 10:53:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.19
	  Hostname:    ha-244475
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8707c2bcd2ba47818dfac2382d400cf1
	  System UUID:                8707c2bc-d2ba-4781-8dfa-c2382d400cf1
	  Boot ID:                    174ade31-14cd-4b32-9050-92f81ba6b3e1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-d4m5s              0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 coredns-7c65d6cfc9-lzrg2             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 coredns-7c65d6cfc9-m8fd7             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-ha-244475                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kindnet-7v2cl                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      21m
	  kube-system                 kube-apiserver-ha-244475             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-ha-244475    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-crttt                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-ha-244475             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-vip-ha-244475                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m31s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                   From             Message
	  ----     ------                   ----                  ----             -------
	  Normal   Starting                 115s                  kube-proxy       
	  Normal   Starting                 20m                   kube-proxy       
	  Normal   Starting                 9m23s                 kube-proxy       
	  Normal   NodeHasSufficientPID     21m (x7 over 21m)     kubelet          Node ha-244475 status is now: NodeHasSufficientPID
	  Normal   Starting                 21m                   kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  21m (x8 over 21m)     kubelet          Node ha-244475 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    21m (x8 over 21m)     kubelet          Node ha-244475 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  21m                   kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 21m                   kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  21m                   kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           21m                   node-controller  Node ha-244475 event: Registered Node ha-244475 in Controller
	  Normal   RegisteredNode           20m                   node-controller  Node ha-244475 event: Registered Node ha-244475 in Controller
	  Normal   RegisteredNode           18m                   node-controller  Node ha-244475 event: Registered Node ha-244475 in Controller
	  Normal   RegisteredNode           9m25s                 node-controller  Node ha-244475 event: Registered Node ha-244475 in Controller
	  Normal   RegisteredNode           9m1s                  node-controller  Node ha-244475 event: Registered Node ha-244475 in Controller
	  Normal   RegisteredNode           7m52s                 node-controller  Node ha-244475 event: Registered Node ha-244475 in Controller
	  Normal   NodeNotReady             6m25s                 node-controller  Node ha-244475 status is now: NodeNotReady
	  Normal   NodeHasNoDiskPressure    6m1s (x2 over 21m)    kubelet          Node ha-244475 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m1s (x2 over 21m)    kubelet          Node ha-244475 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  6m1s (x2 over 21m)    kubelet          Node ha-244475 status is now: NodeHasSufficientMemory
	  Normal   NodeReady                6m1s (x2 over 20m)    kubelet          Node ha-244475 status is now: NodeReady
	  Warning  ContainerGCFailed        3m5s (x3 over 11m)    kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             2m56s (x10 over 11m)  kubelet          Node ha-244475 status is now: NodeNotReady
	  Normal   RegisteredNode           60s                   node-controller  Node ha-244475 event: Registered Node ha-244475 in Controller
	  Normal   RegisteredNode           55s                   node-controller  Node ha-244475 event: Registered Node ha-244475 in Controller
	
	
	Name:               ha-244475-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-244475-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=ha-244475
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T10_39_47_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:39:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-244475-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:59:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:59:00 +0000   Mon, 16 Sep 2024 10:50:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:59:00 +0000   Mon, 16 Sep 2024 10:50:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:59:00 +0000   Mon, 16 Sep 2024 10:50:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:59:00 +0000   Mon, 16 Sep 2024 10:50:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.222
	  Hostname:    ha-244475-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 bfb45c96351d4aafade2443c380b5343
	  System UUID:                bfb45c96-351d-4aaf-ade2-443c380b5343
	  Boot ID:                    d493ff2b-8d16-4f12-976a-cc277283240e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-t6fmb                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 etcd-ha-244475-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         20m
	  kube-system                 kindnet-xvp82                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      20m
	  kube-system                 kube-apiserver-ha-244475-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-controller-manager-ha-244475-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-proxy-t454b                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-scheduler-ha-244475-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-vip-ha-244475-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 58s                    kube-proxy       
	  Normal  Starting                 8m48s                  kube-proxy       
	  Normal  Starting                 20m                    kube-proxy       
	  Normal  NodeHasSufficientPID     20m (x7 over 20m)      kubelet          Node ha-244475-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  20m (x8 over 20m)      kubelet          Node ha-244475-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m (x8 over 20m)      kubelet          Node ha-244475-m02 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           20m                    node-controller  Node ha-244475-m02 event: Registered Node ha-244475-m02 in Controller
	  Normal  RegisteredNode           20m                    node-controller  Node ha-244475-m02 event: Registered Node ha-244475-m02 in Controller
	  Normal  RegisteredNode           18m                    node-controller  Node ha-244475-m02 event: Registered Node ha-244475-m02 in Controller
	  Normal  NodeNotReady             16m                    node-controller  Node ha-244475-m02 status is now: NodeNotReady
	  Normal  NodeHasSufficientMemory  9m49s (x8 over 9m49s)  kubelet          Node ha-244475-m02 status is now: NodeHasSufficientMemory
	  Normal  Starting                 9m49s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    9m49s (x8 over 9m49s)  kubelet          Node ha-244475-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m49s (x7 over 9m49s)  kubelet          Node ha-244475-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9m25s                  node-controller  Node ha-244475-m02 event: Registered Node ha-244475-m02 in Controller
	  Normal  RegisteredNode           9m1s                   node-controller  Node ha-244475-m02 event: Registered Node ha-244475-m02 in Controller
	  Normal  RegisteredNode           7m52s                  node-controller  Node ha-244475-m02 event: Registered Node ha-244475-m02 in Controller
	  Normal  NodeNotReady             2m11s                  kubelet          Node ha-244475-m02 status is now: NodeNotReady
	  Normal  RegisteredNode           60s                    node-controller  Node ha-244475-m02 event: Registered Node ha-244475-m02 in Controller
	  Normal  RegisteredNode           55s                    node-controller  Node ha-244475-m02 event: Registered Node ha-244475-m02 in Controller
	
	
	Name:               ha-244475-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-244475-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=ha-244475
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T10_42_00_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:41:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-244475-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:59:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:59:35 +0000   Mon, 16 Sep 2024 10:59:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:59:35 +0000   Mon, 16 Sep 2024 10:59:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:59:35 +0000   Mon, 16 Sep 2024 10:59:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:59:35 +0000   Mon, 16 Sep 2024 10:59:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.110
	  Hostname:    ha-244475-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 42083a2d4bb24e16b292c8834cbe5824
	  System UUID:                42083a2d-4bb2-4e16-b292-c8834cbe5824
	  Boot ID:                    5008682f-493d-4f9d-b6e3-6973d783ebab
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-2v2jd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m11s
	  kube-system                 kindnet-dflt4              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      17m
	  kube-system                 kube-proxy-kp7hv           0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 17m                    kube-proxy       
	  Normal   Starting                 18s                    kube-proxy       
	  Normal   Starting                 7m16s                  kube-proxy       
	  Normal   NodeHasSufficientPID     17m (x2 over 17m)      kubelet          Node ha-244475-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  17m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  17m (x2 over 17m)      kubelet          Node ha-244475-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    17m (x2 over 17m)      kubelet          Node ha-244475-m04 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           17m                    node-controller  Node ha-244475-m04 event: Registered Node ha-244475-m04 in Controller
	  Normal   RegisteredNode           17m                    node-controller  Node ha-244475-m04 event: Registered Node ha-244475-m04 in Controller
	  Normal   RegisteredNode           17m                    node-controller  Node ha-244475-m04 event: Registered Node ha-244475-m04 in Controller
	  Normal   NodeReady                17m                    kubelet          Node ha-244475-m04 status is now: NodeReady
	  Normal   RegisteredNode           9m25s                  node-controller  Node ha-244475-m04 event: Registered Node ha-244475-m04 in Controller
	  Normal   RegisteredNode           9m1s                   node-controller  Node ha-244475-m04 event: Registered Node ha-244475-m04 in Controller
	  Normal   RegisteredNode           7m52s                  node-controller  Node ha-244475-m04 event: Registered Node ha-244475-m04 in Controller
	  Normal   NodeReady                7m21s                  kubelet          Node ha-244475-m04 status is now: NodeReady
	  Normal   NodeAllocatableEnforced  7m21s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 7m21s                  kubelet          Node ha-244475-m04 has been rebooted, boot id: 17ea4c88-a812-44b1-a1ac-94e19366fcfe
	  Normal   NodeHasSufficientMemory  7m21s (x2 over 7m21s)  kubelet          Node ha-244475-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m21s (x2 over 7m21s)  kubelet          Node ha-244475-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m21s (x2 over 7m21s)  kubelet          Node ha-244475-m04 status is now: NodeHasSufficientPID
	  Normal   Starting                 7m21s                  kubelet          Starting kubelet.
	  Normal   NodeNotReady             6m20s (x2 over 8m45s)  node-controller  Node ha-244475-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           60s                    node-controller  Node ha-244475-m04 event: Registered Node ha-244475-m04 in Controller
	  Normal   RegisteredNode           55s                    node-controller  Node ha-244475-m04 event: Registered Node ha-244475-m04 in Controller
	  Normal   Starting                 22s                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  22s (x2 over 22s)      kubelet          Node ha-244475-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    22s (x2 over 22s)      kubelet          Node ha-244475-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     22s (x2 over 22s)      kubelet          Node ha-244475-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 22s                    kubelet          Node ha-244475-m04 has been rebooted, boot id: 5008682f-493d-4f9d-b6e3-6973d783ebab
	  Normal   NodeReady                22s                    kubelet          Node ha-244475-m04 status is now: NodeReady
	  Normal   NodeAllocatableEnforced  22s                    kubelet          Updated Node Allocatable limit across pods
	
	
	==> dmesg <==
	[  +0.087420] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.371465] kauditd_printk_skb: 21 callbacks suppressed
	[Sep16 10:39] kauditd_printk_skb: 38 callbacks suppressed
	[ +41.620280] kauditd_printk_skb: 28 callbacks suppressed
	[Sep16 10:49] systemd-fstab-generator[3624]: Ignoring "noauto" option for root device
	[  +0.157093] systemd-fstab-generator[3636]: Ignoring "noauto" option for root device
	[  +0.177936] systemd-fstab-generator[3650]: Ignoring "noauto" option for root device
	[  +0.142086] systemd-fstab-generator[3662]: Ignoring "noauto" option for root device
	[  +0.308892] systemd-fstab-generator[3690]: Ignoring "noauto" option for root device
	[  +5.722075] systemd-fstab-generator[3786]: Ignoring "noauto" option for root device
	[  +0.089630] kauditd_printk_skb: 100 callbacks suppressed
	[  +6.518650] kauditd_printk_skb: 12 callbacks suppressed
	[Sep16 10:50] kauditd_printk_skb: 85 callbacks suppressed
	[  +6.619080] kauditd_printk_skb: 1 callbacks suppressed
	[ +18.373360] kauditd_printk_skb: 5 callbacks suppressed
	[Sep16 10:57] systemd-fstab-generator[6348]: Ignoring "noauto" option for root device
	[  +0.163284] systemd-fstab-generator[6360]: Ignoring "noauto" option for root device
	[  +0.178723] systemd-fstab-generator[6374]: Ignoring "noauto" option for root device
	[  +0.156188] systemd-fstab-generator[6386]: Ignoring "noauto" option for root device
	[  +0.295428] systemd-fstab-generator[6414]: Ignoring "noauto" option for root device
	[  +7.284917] systemd-fstab-generator[6525]: Ignoring "noauto" option for root device
	[  +0.085766] kauditd_printk_skb: 100 callbacks suppressed
	[  +7.554149] kauditd_printk_skb: 103 callbacks suppressed
	[Sep16 10:58] kauditd_printk_skb: 5 callbacks suppressed
	[Sep16 10:59] kauditd_printk_skb: 7 callbacks suppressed
	
	
	==> etcd [268d2527b9c98518a83f871d1bd34bfb7b48d7a8bc4732dac191726f6e049791] <==
	{"level":"info","ts":"2024-09-16T10:55:27.758022Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"683e1d26ac7e3123 [term 3] starts to transfer leadership to f0e3021c7d1d789a"}
	{"level":"info","ts":"2024-09-16T10:55:27.758050Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"683e1d26ac7e3123 sends MsgTimeoutNow to f0e3021c7d1d789a immediately as f0e3021c7d1d789a already has up-to-date log"}
	{"level":"info","ts":"2024-09-16T10:55:27.760733Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"683e1d26ac7e3123 [term: 3] received a MsgVote message with higher term from f0e3021c7d1d789a [term: 4]"}
	{"level":"info","ts":"2024-09-16T10:55:27.760820Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"683e1d26ac7e3123 became follower at term 4"}
	{"level":"info","ts":"2024-09-16T10:55:27.760836Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"683e1d26ac7e3123 [logterm: 3, index: 3775, vote: 0] cast MsgVote for f0e3021c7d1d789a [logterm: 3, index: 3775] at term 4"}
	{"level":"info","ts":"2024-09-16T10:55:27.760847Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 683e1d26ac7e3123 lost leader 683e1d26ac7e3123 at term 4"}
	{"level":"info","ts":"2024-09-16T10:55:27.763096Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 683e1d26ac7e3123 elected leader f0e3021c7d1d789a at term 4"}
	{"level":"info","ts":"2024-09-16T10:55:27.858845Z","caller":"etcdserver/server.go:1498","msg":"leadership transfer finished","local-member-id":"683e1d26ac7e3123","old-leader-member-id":"683e1d26ac7e3123","new-leader-member-id":"f0e3021c7d1d789a","took":"100.904127ms"}
	{"level":"info","ts":"2024-09-16T10:55:27.859336Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"f0e3021c7d1d789a"}
	{"level":"warn","ts":"2024-09-16T10:55:27.862067Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"f0e3021c7d1d789a"}
	{"level":"info","ts":"2024-09-16T10:55:27.862149Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"f0e3021c7d1d789a"}
	{"level":"warn","ts":"2024-09-16T10:55:27.863195Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"f0e3021c7d1d789a"}
	{"level":"info","ts":"2024-09-16T10:55:27.863255Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"f0e3021c7d1d789a"}
	{"level":"info","ts":"2024-09-16T10:55:27.863387Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"683e1d26ac7e3123","remote-peer-id":"f0e3021c7d1d789a"}
	{"level":"warn","ts":"2024-09-16T10:55:27.864072Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"683e1d26ac7e3123","remote-peer-id":"f0e3021c7d1d789a","error":"context canceled"}
	{"level":"warn","ts":"2024-09-16T10:55:27.864154Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"f0e3021c7d1d789a","error":"failed to read f0e3021c7d1d789a on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-09-16T10:55:27.865615Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"683e1d26ac7e3123","remote-peer-id":"f0e3021c7d1d789a"}
	{"level":"warn","ts":"2024-09-16T10:55:27.865776Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"683e1d26ac7e3123","remote-peer-id":"f0e3021c7d1d789a","error":"context canceled"}
	{"level":"info","ts":"2024-09-16T10:55:27.865818Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"683e1d26ac7e3123","remote-peer-id":"f0e3021c7d1d789a"}
	{"level":"info","ts":"2024-09-16T10:55:27.865831Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"f0e3021c7d1d789a"}
	{"level":"warn","ts":"2024-09-16T10:55:27.868054Z","caller":"rafthttp/http.go:413","msg":"failed to find remote peer in cluster","local-member-id":"683e1d26ac7e3123","remote-peer-id-stream-handler":"683e1d26ac7e3123","remote-peer-id-from":"f0e3021c7d1d789a","cluster-id":"3f32d84448c0bab8"}
	{"level":"info","ts":"2024-09-16T10:55:27.870189Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.19:2380"}
	{"level":"warn","ts":"2024-09-16T10:55:27.870340Z","caller":"embed/config_logging.go:170","msg":"rejected connection on peer endpoint","remote-addr":"192.168.39.222:34548","server-name":"","error":"read tcp 192.168.39.19:2380->192.168.39.222:34548: use of closed network connection"}
	{"level":"info","ts":"2024-09-16T10:55:28.408062Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.19:2380"}
	{"level":"info","ts":"2024-09-16T10:55:28.408174Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-244475","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.19:2380"],"advertise-client-urls":["https://192.168.39.19:2379"]}
	
	
	==> etcd [65162abeb86a3a1739a5623a7817ac07aac37e6b7b477492993f1d50a0429276] <==
	{"level":"info","ts":"2024-09-16T10:58:47.483301Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"683e1d26ac7e3123 became pre-candidate at term 6"}
	{"level":"info","ts":"2024-09-16T10:58:47.483330Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"683e1d26ac7e3123 received MsgPreVoteResp from 683e1d26ac7e3123 at term 6"}
	{"level":"info","ts":"2024-09-16T10:58:47.483364Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"683e1d26ac7e3123 [logterm: 6, index: 3779] sent MsgPreVote request to f0e3021c7d1d789a at term 6"}
	{"level":"info","ts":"2024-09-16T10:58:48.168246Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"f0e3021c7d1d789a"}
	{"level":"info","ts":"2024-09-16T10:58:48.168399Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"683e1d26ac7e3123","remote-peer-id":"f0e3021c7d1d789a"}
	{"level":"info","ts":"2024-09-16T10:58:48.173655Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"683e1d26ac7e3123","remote-peer-id":"f0e3021c7d1d789a"}
	{"level":"info","ts":"2024-09-16T10:58:48.178715Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"683e1d26ac7e3123","to":"f0e3021c7d1d789a","stream-type":"stream Message"}
	{"level":"info","ts":"2024-09-16T10:58:48.178772Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"683e1d26ac7e3123","remote-peer-id":"f0e3021c7d1d789a"}
	{"level":"info","ts":"2024-09-16T10:58:48.182647Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"683e1d26ac7e3123","to":"f0e3021c7d1d789a","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-09-16T10:58:48.182767Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"683e1d26ac7e3123","remote-peer-id":"f0e3021c7d1d789a"}
	{"level":"info","ts":"2024-09-16T10:58:48.483656Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"683e1d26ac7e3123 is starting a new election at term 6"}
	{"level":"info","ts":"2024-09-16T10:58:48.483777Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"683e1d26ac7e3123 became pre-candidate at term 6"}
	{"level":"info","ts":"2024-09-16T10:58:48.483820Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"683e1d26ac7e3123 received MsgPreVoteResp from 683e1d26ac7e3123 at term 6"}
	{"level":"info","ts":"2024-09-16T10:58:48.483852Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"683e1d26ac7e3123 [logterm: 6, index: 3779] sent MsgPreVote request to f0e3021c7d1d789a at term 6"}
	{"level":"info","ts":"2024-09-16T10:58:48.485224Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"683e1d26ac7e3123 received MsgPreVoteResp from f0e3021c7d1d789a at term 6"}
	{"level":"info","ts":"2024-09-16T10:58:48.485328Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"683e1d26ac7e3123 has received 2 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2024-09-16T10:58:48.485377Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"683e1d26ac7e3123 became candidate at term 7"}
	{"level":"info","ts":"2024-09-16T10:58:48.485406Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"683e1d26ac7e3123 received MsgVoteResp from 683e1d26ac7e3123 at term 7"}
	{"level":"info","ts":"2024-09-16T10:58:48.485435Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"683e1d26ac7e3123 [logterm: 6, index: 3779] sent MsgVote request to f0e3021c7d1d789a at term 7"}
	{"level":"info","ts":"2024-09-16T10:58:48.493589Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"683e1d26ac7e3123 received MsgVoteResp from f0e3021c7d1d789a at term 7"}
	{"level":"info","ts":"2024-09-16T10:58:48.493697Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"683e1d26ac7e3123 has received 2 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2024-09-16T10:58:48.493771Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"683e1d26ac7e3123 became leader at term 7"}
	{"level":"info","ts":"2024-09-16T10:58:48.493803Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 683e1d26ac7e3123 elected leader 683e1d26ac7e3123 at term 7"}
	{"level":"warn","ts":"2024-09-16T10:58:48.810107Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"f0e3021c7d1d789a","rtt":"0s","error":"dial tcp 192.168.39.222:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-16T10:58:48.810180Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"f0e3021c7d1d789a","rtt":"0s","error":"dial tcp 192.168.39.222:2380: connect: connection refused"}
	
	
	==> kernel <==
	 10:59:57 up 21 min,  0 users,  load average: 0.39, 0.46, 0.35
	Linux ha-244475 5.10.207 #1 SMP Sun Sep 15 20:39:46 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [6dd41088c822947b73a621121e25131324d1b60468f8da2736a5e5abd945b74f] <==
	I0916 10:54:42.120299       1 main.go:322] Node ha-244475-m04 has CIDR [10.244.3.0/24] 
	I0916 10:54:52.112958       1 main.go:295] Handling node with IPs: map[192.168.39.19:{}]
	I0916 10:54:52.113065       1 main.go:299] handling current node
	I0916 10:54:52.113093       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0916 10:54:52.113099       1 main.go:322] Node ha-244475-m02 has CIDR [10.244.1.0/24] 
	I0916 10:54:52.113252       1 main.go:295] Handling node with IPs: map[192.168.39.110:{}]
	I0916 10:54:52.113277       1 main.go:322] Node ha-244475-m04 has CIDR [10.244.3.0/24] 
	I0916 10:55:02.113249       1 main.go:295] Handling node with IPs: map[192.168.39.110:{}]
	I0916 10:55:02.113295       1 main.go:322] Node ha-244475-m04 has CIDR [10.244.3.0/24] 
	I0916 10:55:02.113438       1 main.go:295] Handling node with IPs: map[192.168.39.19:{}]
	I0916 10:55:02.113445       1 main.go:299] handling current node
	I0916 10:55:02.113455       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0916 10:55:02.113459       1 main.go:322] Node ha-244475-m02 has CIDR [10.244.1.0/24] 
	I0916 10:55:12.114724       1 main.go:295] Handling node with IPs: map[192.168.39.19:{}]
	I0916 10:55:12.114843       1 main.go:299] handling current node
	I0916 10:55:12.114872       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0916 10:55:12.114889       1 main.go:322] Node ha-244475-m02 has CIDR [10.244.1.0/24] 
	I0916 10:55:12.115021       1 main.go:295] Handling node with IPs: map[192.168.39.110:{}]
	I0916 10:55:12.115042       1 main.go:322] Node ha-244475-m04 has CIDR [10.244.3.0/24] 
	I0916 10:55:22.117464       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0916 10:55:22.117694       1 main.go:322] Node ha-244475-m02 has CIDR [10.244.1.0/24] 
	I0916 10:55:22.117925       1 main.go:295] Handling node with IPs: map[192.168.39.110:{}]
	I0916 10:55:22.117966       1 main.go:322] Node ha-244475-m04 has CIDR [10.244.3.0/24] 
	I0916 10:55:22.118053       1 main.go:295] Handling node with IPs: map[192.168.39.19:{}]
	I0916 10:55:22.118079       1 main.go:299] handling current node
	
	
	==> kindnet [b985931be529027e0f20c52ce936628db26a39c32bb29fcffd406e052c83b105] <==
	I0916 10:57:13.383870       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0916 10:57:14.046650       1 main.go:237] Error creating network policy controller: could not run nftables command: /dev/stdin:1:1-37: Error: Could not process rule: Operation not supported
	add table inet kube-network-policies
	^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
	, skipping network policies
	W0916 10:57:15.514427       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 192.168.122.45:52796->10.96.0.1:443: read: connection reset by peer
	E0916 10:57:15.514981       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 192.168.122.45:52796->10.96.0.1:443: read: connection reset by peer
	W0916 10:57:17.061927       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	E0916 10:57:17.062068       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	W0916 10:57:28.664959       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Node: Unauthorized
	E0916 10:57:28.665036       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Unauthorized
	W0916 10:57:34.052235       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	E0916 10:57:34.052342       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	W0916 10:57:46.310829       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	E0916 10:57:46.311137       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	W0916 10:58:05.113895       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	E0916 10:58:05.114150       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	W0916 10:58:54.560434       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	E0916 10:58:54.560592       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	I0916 10:59:54.055681       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0916 10:59:54.055821       1 main.go:322] Node ha-244475-m02 has CIDR [10.244.1.0/24] 
	I0916 10:59:54.056121       1 main.go:295] Handling node with IPs: map[192.168.39.110:{}]
	I0916 10:59:54.056159       1 main.go:322] Node ha-244475-m04 has CIDR [10.244.3.0/24] 
	I0916 10:59:54.056243       1 main.go:295] Handling node with IPs: map[192.168.39.19:{}]
	I0916 10:59:54.056271       1 main.go:299] handling current node
	
	
	==> kube-apiserver [1ab1af10083183e28afb7804c1d99337408bcd3029f430da8ac46ca54db45222] <==
	I0916 10:58:58.696289       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0916 10:58:58.781096       1 shared_informer.go:320] Caches are synced for configmaps
	I0916 10:58:58.781194       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0916 10:58:58.781277       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0916 10:58:58.781301       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0916 10:58:58.782557       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0916 10:58:58.782826       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0916 10:58:58.785407       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0916 10:58:58.785621       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0916 10:58:58.802358       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0916 10:58:58.802607       1 aggregator.go:171] initial CRD sync complete...
	I0916 10:58:58.802685       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 10:58:58.802778       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 10:58:58.802813       1 cache.go:39] Caches are synced for autoregister controller
	I0916 10:58:58.804607       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0916 10:58:58.804795       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0916 10:58:58.804825       1 policy_source.go:224] refreshing policies
	W0916 10:58:58.834990       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.222]
	I0916 10:58:58.837285       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 10:58:58.846039       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0916 10:58:58.853749       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0916 10:58:58.880201       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 10:58:59.690454       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0916 10:59:00.068843       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.19 192.168.39.222]
	
	
	==> kube-apiserver [f2873e375d45f4c998033d55be1a7fdebdba577bff3bea729901a3e41eefa4be] <==
	W0916 10:58:10.879807       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PriorityClass: etcdserver: request timed out
	E0916 10:58:10.879833       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PriorityClass: failed to list *v1.PriorityClass: etcdserver: request timed out" logger="UnhandledError"
	W0916 10:58:10.879865       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: etcdserver: request timed out
	E0916 10:58:10.879890       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: etcdserver: request timed out" logger="UnhandledError"
	W0916 10:58:10.879915       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.FlowSchema: etcdserver: request timed out
	E0916 10:58:10.879937       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.FlowSchema: failed to list *v1.FlowSchema: etcdserver: request timed out" logger="UnhandledError"
	W0916 10:58:10.879962       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: etcdserver: request timed out
	E0916 10:58:10.879968       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: etcdserver: request timed out" logger="UnhandledError"
	W0916 10:58:10.879991       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ValidatingAdmissionPolicyBinding: etcdserver: request timed out
	E0916 10:58:10.880017       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ValidatingAdmissionPolicyBinding: failed to list *v1.ValidatingAdmissionPolicyBinding: etcdserver: request timed out" logger="UnhandledError"
	W0916 10:58:10.880058       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: etcdserver: request timed out. Retrying...
	F0916 10:58:10.880105       1 hooks.go:210] PostStartHook "scheduling/bootstrap-system-priority-classes" failed: unable to add default system priority classes: timed out waiting for the condition
	E0916 10:58:10.902298       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:\"etcdserver: request timed out\"}: etcdserver: request timed out" logger="UnhandledError"
	E0916 10:58:10.902390       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:\"etcdserver: request timed out\"}: etcdserver: request timed out" logger="UnhandledError"
	E0916 10:58:10.902456       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:\"etcdserver: request timed out\"}: etcdserver: request timed out" logger="UnhandledError"
	W0916 10:58:10.901689       1 reflector.go:561] storage/cacher.go:/endpointslices: failed to list *discovery.EndpointSlice: etcdserver: request timed out
	E0916 10:58:10.959287       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:\"etcdserver: request timed out\"}: etcdserver: request timed out" logger="UnhandledError"
	E0916 10:58:10.959264       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:\"etcdserver: request timed out\"}: etcdserver: request timed out" logger="UnhandledError"
	E0916 10:58:10.959294       1 cacher.go:478] cacher (endpointslices.discovery.k8s.io): unexpected ListAndWatch error: failed to list *discovery.EndpointSlice: etcdserver: request timed out; reinitializing...
	W0916 10:58:10.901778       1 reflector.go:561] storage/cacher.go:/csistoragecapacities: failed to list *storage.CSIStorageCapacity: etcdserver: request timed out
	W0916 10:58:10.902258       1 reflector.go:561] storage/cacher.go:/validatingadmissionpolicies: failed to list *admissionregistration.ValidatingAdmissionPolicy: etcdserver: request timed out
	W0916 10:58:10.902366       1 reflector.go:561] storage/cacher.go:/ingress: failed to list *networking.Ingress: etcdserver: request timed out
	W0916 10:58:10.902417       1 reflector.go:561] storage/cacher.go:/certificatesigningrequests: failed to list *certificates.CertificateSigningRequest: etcdserver: request timed out
	W0916 10:58:10.902436       1 reflector.go:561] storage/cacher.go:/replicasets: failed to list *apps.ReplicaSet: etcdserver: request timed out
	W0916 10:58:10.902569       1 reflector.go:561] storage/cacher.go:/validatingwebhookconfigurations: failed to list *admissionregistration.ValidatingWebhookConfiguration: etcdserver: request timed out
	
	
	==> kube-controller-manager [5661ec7e57a66f5aaaaf98d94c923bbbd592384c3ef24308461ca1e4380b8bbf] <==
	I0916 10:57:14.090920       1 serving.go:386] Generated self-signed cert in-memory
	I0916 10:57:14.783152       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0916 10:57:14.783262       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:57:14.790794       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0916 10:57:14.791392       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0916 10:57:14.791412       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0916 10:57:14.791431       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0916 10:57:25.891043       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
	
	
	==> kube-controller-manager [f654d72c48c6eb63e165020fa04c0f9b664cab6544315564793ef9925da7b4fd] <==
	I0916 10:59:02.237590       1 shared_informer.go:320] Caches are synced for cronjob
	I0916 10:59:02.264108       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0916 10:59:02.268343       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:59:02.276419       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:59:02.312148       1 shared_informer.go:320] Caches are synced for job
	I0916 10:59:02.692004       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:59:02.692046       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0916 10:59:02.708993       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:59:03.199388       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475"
	I0916 10:59:31.358902       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-2clmh EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-2clmh\": the object has been modified; please apply your changes to the latest version and try again"
	I0916 10:59:31.359690       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"898f26d7-6b1e-4c76-924a-7be5818143ba", APIVersion:"v1", ResourceVersion:"288", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-2clmh EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-2clmh": the object has been modified; please apply your changes to the latest version and try again
	I0916 10:59:31.375364       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="63.549144ms"
	I0916 10:59:31.375611       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="114.864µs"
	I0916 10:59:33.510748       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="59.614µs"
	I0916 10:59:35.429150       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-244475-m04"
	I0916 10:59:35.430080       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m04"
	I0916 10:59:35.447564       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m04"
	I0916 10:59:36.276210       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="101.461µs"
	I0916 10:59:37.162961       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-244475-m04"
	I0916 10:59:39.862160       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="11.417469ms"
	I0916 10:59:39.862368       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="96.262µs"
	I0916 10:59:51.342203       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-2clmh EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-2clmh\": the object has been modified; please apply your changes to the latest version and try again"
	I0916 10:59:51.342856       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"898f26d7-6b1e-4c76-924a-7be5818143ba", APIVersion:"v1", ResourceVersion:"288", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-2clmh EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-2clmh": the object has been modified; please apply your changes to the latest version and try again
	I0916 10:59:51.373235       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="49.497082ms"
	I0916 10:59:51.373453       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="64.113µs"
	
	
	==> kube-proxy [2ef7bc6ba17087872de21708e4087c86e92125885155c397d03bed2863bb52ed] <==
	E0916 10:50:33.213866       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-244475\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0916 10:50:33.214328       1 server.go:646] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	E0916 10:50:33.214618       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:50:33.254822       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0916 10:50:33.254898       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0916 10:50:33.254936       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:50:33.257890       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:50:33.258306       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:50:33.258342       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:50:33.259973       1 config.go:199] "Starting service config controller"
	I0916 10:50:33.260036       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:50:33.260076       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:50:33.260102       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:50:33.260908       1 config.go:328] "Starting node config controller"
	I0916 10:50:33.260937       1 shared_informer.go:313] Waiting for caches to sync for node config
	E0916 10:50:36.287039       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0916 10:50:36.287174       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 10:50:36.287297       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 10:50:36.286395       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-244475&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 10:50:36.287954       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-244475&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 10:50:36.287411       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 10:50:36.288233       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	I0916 10:50:37.161128       1 shared_informer.go:320] Caches are synced for node config
	I0916 10:50:37.161227       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:50:37.560852       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [32d3a5cdb5fe3b2f51ebd673b58afaf704a6afc7561267fc0d30b43aee746851] <==
	E0916 10:58:16.316920       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0916 10:58:16.317123       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-244475&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 10:58:16.317354       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-244475&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 10:58:16.317188       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 10:58:16.317415       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 10:58:16.317244       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 10:58:16.317461       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 10:58:25.533207       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-244475&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 10:58:25.533302       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-244475&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 10:58:28.605837       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 10:58:28.605965       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 10:58:28.606065       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 10:58:28.606104       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	E0916 10:58:28.606173       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0916 10:58:40.893912       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0916 10:58:43.965567       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 10:58:43.965721       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 10:58:47.037952       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-244475&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 10:58:47.038038       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-244475&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 10:58:50.108958       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 10:58:50.109048       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	E0916 10:58:53.181021       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0916 10:59:18.104113       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:59:31.006269       1 shared_informer.go:320] Caches are synced for node config
	I0916 10:59:35.904187       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [4a08ddefb7e0fe6b934866af273b3dfce1bbf395ab649d1e9ddf610180effeb3] <==
	W0916 10:58:35.459213       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.19:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.19:8443: connect: connection refused
	E0916 10:58:35.459299       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.19:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.19:8443: connect: connection refused" logger="UnhandledError"
	W0916 10:58:35.623262       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.19:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.19:8443: connect: connection refused
	E0916 10:58:35.623355       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.19:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.19:8443: connect: connection refused" logger="UnhandledError"
	W0916 10:58:35.703390       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.19:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.19:8443: connect: connection refused
	E0916 10:58:35.703484       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.19:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.19:8443: connect: connection refused" logger="UnhandledError"
	W0916 10:58:38.246430       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.19:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.19:8443: connect: connection refused
	E0916 10:58:38.246639       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.19:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.19:8443: connect: connection refused" logger="UnhandledError"
	W0916 10:58:39.868273       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.19:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.19:8443: connect: connection refused
	E0916 10:58:39.868421       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.19:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.19:8443: connect: connection refused" logger="UnhandledError"
	W0916 10:58:42.423842       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.19:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.19:8443: connect: connection refused
	E0916 10:58:42.423939       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.19:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.19:8443: connect: connection refused" logger="UnhandledError"
	W0916 10:58:42.630646       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.19:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.19:8443: connect: connection refused
	E0916 10:58:42.630743       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.39.19:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.19:8443: connect: connection refused" logger="UnhandledError"
	W0916 10:58:42.910297       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.19:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.19:8443: connect: connection refused
	E0916 10:58:42.910371       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.19:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.19:8443: connect: connection refused" logger="UnhandledError"
	W0916 10:58:43.136053       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.19:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.19:8443: connect: connection refused
	E0916 10:58:43.136224       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.19:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.19:8443: connect: connection refused" logger="UnhandledError"
	W0916 10:58:43.759325       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.19:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.19:8443: connect: connection refused
	E0916 10:58:43.759460       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.19:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.19:8443: connect: connection refused" logger="UnhandledError"
	W0916 10:58:44.248197       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.19:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.19:8443: connect: connection refused
	E0916 10:58:44.248293       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.39.19:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.19:8443: connect: connection refused" logger="UnhandledError"
	W0916 10:58:48.068069       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.19:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.19:8443: connect: connection refused
	E0916 10:58:48.068185       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.39.19:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.19:8443: connect: connection refused" logger="UnhandledError"
	I0916 10:59:09.163906       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [6c0110ceab6a6b07ad42ced3733baf8bab2d1cafffcbb49d7f182984734ea704] <==
	E0916 10:50:27.106286       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.19:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.19:8443: connect: connection refused" logger="UnhandledError"
	W0916 10:50:27.917399       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.19:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.19:8443: connect: connection refused
	E0916 10:50:27.917565       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.39.19:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.19:8443: connect: connection refused" logger="UnhandledError"
	W0916 10:50:29.353853       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.19:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.19:8443: connect: connection refused
	E0916 10:50:29.353900       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.19:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.19:8443: connect: connection refused" logger="UnhandledError"
	W0916 10:50:29.362689       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.19:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.19:8443: connect: connection refused
	E0916 10:50:29.362727       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.19:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.19:8443: connect: connection refused" logger="UnhandledError"
	W0916 10:50:29.539820       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.19:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.19:8443: connect: connection refused
	E0916 10:50:29.539945       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.39.19:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.19:8443: connect: connection refused" logger="UnhandledError"
	W0916 10:50:30.172233       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.19:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.19:8443: connect: connection refused
	E0916 10:50:30.172367       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.19:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.19:8443: connect: connection refused" logger="UnhandledError"
	W0916 10:50:30.247772       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.19:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.19:8443: connect: connection refused
	E0916 10:50:30.247816       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.19:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.19:8443: connect: connection refused" logger="UnhandledError"
	W0916 10:50:32.800369       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 10:50:32.801683       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:50:32.801573       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 10:50:32.801914       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:50:32.801624       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 10:50:32.802040       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0916 10:50:43.636980       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0916 10:52:48.001271       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-2v2jd\": pod busybox-7dff88458-2v2jd is already assigned to node \"ha-244475-m04\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-2v2jd" node="ha-244475-m04"
	E0916 10:52:48.002577       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod ca60db2e-7e01-4fc9-ac6c-724930269681(default/busybox-7dff88458-2v2jd) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-2v2jd"
	E0916 10:52:48.002701       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-2v2jd\": pod busybox-7dff88458-2v2jd is already assigned to node \"ha-244475-m04\"" pod="default/busybox-7dff88458-2v2jd"
	I0916 10:52:48.002757       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-2v2jd" node="ha-244475-m04"
	E0916 10:55:27.666354       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 16 10:59:07 ha-244475 kubelet[1309]: I0916 10:59:07.581655    1309 scope.go:117] "RemoveContainer" containerID="126a5b17b3e86e7df60cd69e11695d9d0f9f52d0712bffb662c8a47a1eda2850"
	Sep 16 10:59:07 ha-244475 kubelet[1309]: E0916 10:59:07.581869    1309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(2e1264f7-2197-4821-8238-82fac849b145)\"" pod="kube-system/storage-provisioner" podUID="2e1264f7-2197-4821-8238-82fac849b145"
	Sep 16 10:59:12 ha-244475 kubelet[1309]: E0916 10:59:12.906165    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484352905738721,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:59:12 ha-244475 kubelet[1309]: E0916 10:59:12.906486    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484352905738721,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:59:19 ha-244475 kubelet[1309]: I0916 10:59:19.581188    1309 scope.go:117] "RemoveContainer" containerID="126a5b17b3e86e7df60cd69e11695d9d0f9f52d0712bffb662c8a47a1eda2850"
	Sep 16 10:59:19 ha-244475 kubelet[1309]: E0916 10:59:19.581847    1309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(2e1264f7-2197-4821-8238-82fac849b145)\"" pod="kube-system/storage-provisioner" podUID="2e1264f7-2197-4821-8238-82fac849b145"
	Sep 16 10:59:22 ha-244475 kubelet[1309]: E0916 10:59:22.909161    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484362908628316,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:59:22 ha-244475 kubelet[1309]: E0916 10:59:22.909493    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484362908628316,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:59:30 ha-244475 kubelet[1309]: I0916 10:59:30.581580    1309 scope.go:117] "RemoveContainer" containerID="126a5b17b3e86e7df60cd69e11695d9d0f9f52d0712bffb662c8a47a1eda2850"
	Sep 16 10:59:30 ha-244475 kubelet[1309]: E0916 10:59:30.584267    1309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(2e1264f7-2197-4821-8238-82fac849b145)\"" pod="kube-system/storage-provisioner" podUID="2e1264f7-2197-4821-8238-82fac849b145"
	Sep 16 10:59:32 ha-244475 kubelet[1309]: E0916 10:59:32.913562    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484372912333981,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:59:32 ha-244475 kubelet[1309]: E0916 10:59:32.914007    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484372912333981,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:59:41 ha-244475 kubelet[1309]: I0916 10:59:41.581063    1309 scope.go:117] "RemoveContainer" containerID="126a5b17b3e86e7df60cd69e11695d9d0f9f52d0712bffb662c8a47a1eda2850"
	Sep 16 10:59:41 ha-244475 kubelet[1309]: E0916 10:59:41.581263    1309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(2e1264f7-2197-4821-8238-82fac849b145)\"" pod="kube-system/storage-provisioner" podUID="2e1264f7-2197-4821-8238-82fac849b145"
	Sep 16 10:59:42 ha-244475 kubelet[1309]: E0916 10:59:42.916692    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484382915850309,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:59:42 ha-244475 kubelet[1309]: E0916 10:59:42.917045    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484382915850309,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:59:52 ha-244475 kubelet[1309]: I0916 10:59:52.581793    1309 scope.go:117] "RemoveContainer" containerID="126a5b17b3e86e7df60cd69e11695d9d0f9f52d0712bffb662c8a47a1eda2850"
	Sep 16 10:59:52 ha-244475 kubelet[1309]: E0916 10:59:52.582488    1309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(2e1264f7-2197-4821-8238-82fac849b145)\"" pod="kube-system/storage-provisioner" podUID="2e1264f7-2197-4821-8238-82fac849b145"
	Sep 16 10:59:52 ha-244475 kubelet[1309]: E0916 10:59:52.622136    1309 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 16 10:59:52 ha-244475 kubelet[1309]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 16 10:59:52 ha-244475 kubelet[1309]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 16 10:59:52 ha-244475 kubelet[1309]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 16 10:59:52 ha-244475 kubelet[1309]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 16 10:59:52 ha-244475 kubelet[1309]: E0916 10:59:52.918910    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484392918565887,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 10:59:52 ha-244475 kubelet[1309]: E0916 10:59:52.918941    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484392918565887,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0916 10:59:56.096570   32305 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19651-3851/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
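The "bufio.Scanner: token too long" message in the stderr block above is Go's bufio.Scanner giving up because a single line in lastStart.txt exceeds its default 64 KiB token limit. As a hedged illustration only (this is the standard-library workaround in general, not minikube's actual code, and the file name is simply the one named in the error), the usual fix is to hand the scanner a larger buffer before scanning:

	package main

	import (
		"bufio"
		"fmt"
		"log"
		"os"
	)

	func main() {
		// Assumed input: any log file whose lines can exceed 64 KiB.
		f, err := os.Open("lastStart.txt")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Raise the per-token limit from the default 64 KiB to 10 MiB so very
		// long lines no longer trigger "bufio.Scanner: token too long".
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)

		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			log.Fatal(err)
		}
	}
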
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-244475 -n ha-244475
helpers_test.go:261: (dbg) Run:  kubectl --context ha-244475 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context ha-244475 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (534.117µs)
helpers_test.go:263: kubectl --context ha-244475 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestMultiControlPlane/serial/RestartCluster (271.47s)
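
The recurring "fork/exec /usr/local/bin/kubectl: exec format error" behind this failure (and the MultiNodeLabels failure below) is the Linux kernel refusing to execute the kubectl binary, which usually means the file at that path was built for a different CPU architecture than the test host, or is truncated and not a valid executable at all. A minimal diagnostic sketch, assuming a Linux host and an ELF binary (the path and the GOARCH-to-ELF mapping are illustrative assumptions, not part of the test harness):

	package main

	import (
		"debug/elf"
		"fmt"
		"os"
		"runtime"
	)

	func main() {
		// Assumed path, taken from the error message in the failure above.
		path := "/usr/local/bin/kubectl"

		f, err := elf.Open(path)
		if err != nil {
			// A truncated or non-ELF file also yields "exec format error" at exec time.
			fmt.Fprintf(os.Stderr, "not a readable ELF binary: %v\n", err)
			os.Exit(1)
		}
		defer f.Close()

		// Minimal GOARCH -> ELF machine mapping for the common cases.
		want := map[string]elf.Machine{
			"amd64": elf.EM_X86_64,
			"arm64": elf.EM_AARCH64,
			"386":   elf.EM_386,
		}[runtime.GOARCH]

		fmt.Printf("host GOARCH=%s, binary machine=%v\n", runtime.GOARCH, f.Machine)
		if want != 0 && f.Machine != want {
			fmt.Println("architecture mismatch: running this binary would fail with 'exec format error'")
		}
	}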

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (2.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-736061 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-736061 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": fork/exec /usr/local/bin/kubectl: exec format error (582.299µs)
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-736061 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": fork/exec /usr/local/bin/kubectl: exec format error
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-736061 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-736061 -n multinode-736061
helpers_test.go:244: <<< TestMultiNode/serial/MultiNodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/MultiNodeLabels]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736061 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-736061 logs -n 25: (1.311083187s)
helpers_test.go:252: TestMultiNode/serial/MultiNodeLabels logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | mount-start-2-789477 ssh --                       | mount-start-2-789477 | jenkins | v1.34.0 | 16 Sep 24 11:04 UTC | 16 Sep 24 11:04 UTC |
	|         | mount | grep 9p                                   |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-789477                           | mount-start-2-789477 | jenkins | v1.34.0 | 16 Sep 24 11:04 UTC | 16 Sep 24 11:04 UTC |
	| start   | -p mount-start-2-789477                           | mount-start-2-789477 | jenkins | v1.34.0 | 16 Sep 24 11:04 UTC | 16 Sep 24 11:05 UTC |
	| mount   | /home/jenkins:/minikube-host                      | mount-start-2-789477 | jenkins | v1.34.0 | 16 Sep 24 11:05 UTC |                     |
	|         | --profile mount-start-2-789477                    |                      |         |         |                     |                     |
	|         | --v 0 --9p-version 9p2000.L                       |                      |         |         |                     |                     |
	|         | --gid 0 --ip  --msize 6543                        |                      |         |         |                     |                     |
	|         | --port 46465 --type 9p --uid 0                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-789477 ssh -- ls                    | mount-start-2-789477 | jenkins | v1.34.0 | 16 Sep 24 11:05 UTC | 16 Sep 24 11:05 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-789477 ssh --                       | mount-start-2-789477 | jenkins | v1.34.0 | 16 Sep 24 11:05 UTC | 16 Sep 24 11:05 UTC |
	|         | mount | grep 9p                                   |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-789477                           | mount-start-2-789477 | jenkins | v1.34.0 | 16 Sep 24 11:05 UTC | 16 Sep 24 11:05 UTC |
	| delete  | -p mount-start-1-777774                           | mount-start-1-777774 | jenkins | v1.34.0 | 16 Sep 24 11:05 UTC | 16 Sep 24 11:05 UTC |
	| start   | -p multinode-736061                               | multinode-736061     | jenkins | v1.34.0 | 16 Sep 24 11:05 UTC | 16 Sep 24 11:07 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=kvm2                                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-736061 -- apply -f                   | multinode-736061     | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC | 16 Sep 24 11:07 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-736061 -- rollout                    | multinode-736061     | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC | 16 Sep 24 11:07 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-736061 -- get pods -o                | multinode-736061     | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC | 16 Sep 24 11:07 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-736061 -- get pods -o                | multinode-736061     | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC | 16 Sep 24 11:07 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-736061 -- exec                       | multinode-736061     | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC | 16 Sep 24 11:07 UTC |
	|         | busybox-7dff88458-754d4 --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-736061 -- exec                       | multinode-736061     | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC | 16 Sep 24 11:07 UTC |
	|         | busybox-7dff88458-g9fqk --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-736061 -- exec                       | multinode-736061     | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC | 16 Sep 24 11:07 UTC |
	|         | busybox-7dff88458-754d4 --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-736061 -- exec                       | multinode-736061     | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC | 16 Sep 24 11:07 UTC |
	|         | busybox-7dff88458-g9fqk --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-736061 -- exec                       | multinode-736061     | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC | 16 Sep 24 11:07 UTC |
	|         | busybox-7dff88458-754d4 -- nslookup               |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-736061 -- exec                       | multinode-736061     | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC | 16 Sep 24 11:07 UTC |
	|         | busybox-7dff88458-g9fqk -- nslookup               |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-736061 -- get pods -o                | multinode-736061     | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC | 16 Sep 24 11:07 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-736061 -- exec                       | multinode-736061     | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC | 16 Sep 24 11:07 UTC |
	|         | busybox-7dff88458-754d4                           |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-736061 -- exec                       | multinode-736061     | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC | 16 Sep 24 11:07 UTC |
	|         | busybox-7dff88458-754d4 -- sh                     |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-736061 -- exec                       | multinode-736061     | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC | 16 Sep 24 11:07 UTC |
	|         | busybox-7dff88458-g9fqk                           |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-736061 -- exec                       | multinode-736061     | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC | 16 Sep 24 11:07 UTC |
	|         | busybox-7dff88458-g9fqk -- sh                     |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1                         |                      |         |         |                     |                     |
	| node    | add -p multinode-736061 -v 3                      | multinode-736061     | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC | 16 Sep 24 11:07 UTC |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 11:05:14
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 11:05:14.223845   36333 out.go:345] Setting OutFile to fd 1 ...
	I0916 11:05:14.223984   36333 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:05:14.223994   36333 out.go:358] Setting ErrFile to fd 2...
	I0916 11:05:14.223999   36333 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:05:14.224200   36333 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3851/.minikube/bin
	I0916 11:05:14.224806   36333 out.go:352] Setting JSON to false
	I0916 11:05:14.225727   36333 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2864,"bootTime":1726481850,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 11:05:14.225819   36333 start.go:139] virtualization: kvm guest
	I0916 11:05:14.228071   36333 out.go:177] * [multinode-736061] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 11:05:14.229583   36333 notify.go:220] Checking for updates...
	I0916 11:05:14.229600   36333 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 11:05:14.231206   36333 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 11:05:14.232749   36333 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3851/kubeconfig
	I0916 11:05:14.234181   36333 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 11:05:14.235512   36333 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 11:05:14.236951   36333 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 11:05:14.238333   36333 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 11:05:14.273835   36333 out.go:177] * Using the kvm2 driver based on user configuration
	I0916 11:05:14.275189   36333 start.go:297] selected driver: kvm2
	I0916 11:05:14.275203   36333 start.go:901] validating driver "kvm2" against <nil>
	I0916 11:05:14.275215   36333 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 11:05:14.275970   36333 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:05:14.276060   36333 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19651-3851/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0916 11:05:14.291713   36333 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0916 11:05:14.291764   36333 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 11:05:14.292100   36333 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 11:05:14.292145   36333 cni.go:84] Creating CNI manager for ""
	I0916 11:05:14.292195   36333 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0916 11:05:14.292207   36333 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 11:05:14.292273   36333 start.go:340] cluster config:
	{Name:multinode-736061 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-736061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRun
time:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:05:14.292418   36333 iso.go:125] acquiring lock: {Name:mk8165b793e44378487d96b0de120a258f46e187 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:05:14.294221   36333 out.go:177] * Starting "multinode-736061" primary control-plane node in "multinode-736061" cluster
	I0916 11:05:14.295615   36333 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 11:05:14.295660   36333 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0916 11:05:14.295673   36333 cache.go:56] Caching tarball of preloaded images
	I0916 11:05:14.295754   36333 preload.go:172] Found /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 11:05:14.295767   36333 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 11:05:14.296098   36333 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/config.json ...
	I0916 11:05:14.296124   36333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/config.json: {Name:mk24a1d206035e062b796738ad5d4a2fff193a3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:05:14.296259   36333 start.go:360] acquireMachinesLock for multinode-736061: {Name:mk413037138532b69f77e412c3960796c09316f1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 11:05:14.296287   36333 start.go:364] duration metric: took 15.67µs to acquireMachinesLock for "multinode-736061"
	I0916 11:05:14.296303   36333 start.go:93] Provisioning new machine with config: &{Name:multinode-736061 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Ku
bernetesVersion:v1.31.1 ClusterName:multinode-736061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPor
t:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 11:05:14.296361   36333 start.go:125] createHost starting for "" (driver="kvm2")
	I0916 11:05:14.298147   36333 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0916 11:05:14.298294   36333 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 11:05:14.298341   36333 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 11:05:14.313364   36333 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46483
	I0916 11:05:14.313819   36333 main.go:141] libmachine: () Calling .GetVersion
	I0916 11:05:14.314342   36333 main.go:141] libmachine: Using API Version  1
	I0916 11:05:14.314361   36333 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 11:05:14.314693   36333 main.go:141] libmachine: () Calling .GetMachineName
	I0916 11:05:14.314921   36333 main.go:141] libmachine: (multinode-736061) Calling .GetMachineName
	I0916 11:05:14.315078   36333 main.go:141] libmachine: (multinode-736061) Calling .DriverName
	I0916 11:05:14.315233   36333 start.go:159] libmachine.API.Create for "multinode-736061" (driver="kvm2")
	I0916 11:05:14.315266   36333 client.go:168] LocalClient.Create starting
	I0916 11:05:14.315303   36333 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem
	I0916 11:05:14.315351   36333 main.go:141] libmachine: Decoding PEM data...
	I0916 11:05:14.315373   36333 main.go:141] libmachine: Parsing certificate...
	I0916 11:05:14.315435   36333 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem
	I0916 11:05:14.315461   36333 main.go:141] libmachine: Decoding PEM data...
	I0916 11:05:14.315487   36333 main.go:141] libmachine: Parsing certificate...
	I0916 11:05:14.315511   36333 main.go:141] libmachine: Running pre-create checks...
	I0916 11:05:14.315523   36333 main.go:141] libmachine: (multinode-736061) Calling .PreCreateCheck
	I0916 11:05:14.315987   36333 main.go:141] libmachine: (multinode-736061) Calling .GetConfigRaw
	I0916 11:05:14.316344   36333 main.go:141] libmachine: Creating machine...
	I0916 11:05:14.316359   36333 main.go:141] libmachine: (multinode-736061) Calling .Create
	I0916 11:05:14.316506   36333 main.go:141] libmachine: (multinode-736061) Creating KVM machine...
	I0916 11:05:14.317992   36333 main.go:141] libmachine: (multinode-736061) DBG | found existing default KVM network
	I0916 11:05:14.318708   36333 main.go:141] libmachine: (multinode-736061) DBG | I0916 11:05:14.318561   36356 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ba0}
	I0916 11:05:14.318729   36333 main.go:141] libmachine: (multinode-736061) DBG | created network xml: 
	I0916 11:05:14.318738   36333 main.go:141] libmachine: (multinode-736061) DBG | <network>
	I0916 11:05:14.318744   36333 main.go:141] libmachine: (multinode-736061) DBG |   <name>mk-multinode-736061</name>
	I0916 11:05:14.318749   36333 main.go:141] libmachine: (multinode-736061) DBG |   <dns enable='no'/>
	I0916 11:05:14.318753   36333 main.go:141] libmachine: (multinode-736061) DBG |   
	I0916 11:05:14.318759   36333 main.go:141] libmachine: (multinode-736061) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0916 11:05:14.318770   36333 main.go:141] libmachine: (multinode-736061) DBG |     <dhcp>
	I0916 11:05:14.318777   36333 main.go:141] libmachine: (multinode-736061) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0916 11:05:14.318784   36333 main.go:141] libmachine: (multinode-736061) DBG |     </dhcp>
	I0916 11:05:14.318789   36333 main.go:141] libmachine: (multinode-736061) DBG |   </ip>
	I0916 11:05:14.318793   36333 main.go:141] libmachine: (multinode-736061) DBG |   
	I0916 11:05:14.318798   36333 main.go:141] libmachine: (multinode-736061) DBG | </network>
	I0916 11:05:14.318806   36333 main.go:141] libmachine: (multinode-736061) DBG | 
	I0916 11:05:14.323865   36333 main.go:141] libmachine: (multinode-736061) DBG | trying to create private KVM network mk-multinode-736061 192.168.39.0/24...
	I0916 11:05:14.391633   36333 main.go:141] libmachine: (multinode-736061) DBG | private KVM network mk-multinode-736061 192.168.39.0/24 created
	I0916 11:05:14.391667   36333 main.go:141] libmachine: (multinode-736061) DBG | I0916 11:05:14.391608   36356 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 11:05:14.391700   36333 main.go:141] libmachine: (multinode-736061) Setting up store path in /home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061 ...
	I0916 11:05:14.391716   36333 main.go:141] libmachine: (multinode-736061) Building disk image from file:///home/jenkins/minikube-integration/19651-3851/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso
	I0916 11:05:14.391759   36333 main.go:141] libmachine: (multinode-736061) Downloading /home/jenkins/minikube-integration/19651-3851/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19651-3851/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso...
	I0916 11:05:14.635189   36333 main.go:141] libmachine: (multinode-736061) DBG | I0916 11:05:14.635088   36356 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061/id_rsa...
	I0916 11:05:14.708226   36333 main.go:141] libmachine: (multinode-736061) DBG | I0916 11:05:14.707982   36356 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061/multinode-736061.rawdisk...
	I0916 11:05:14.708249   36333 main.go:141] libmachine: (multinode-736061) DBG | Writing magic tar header
	I0916 11:05:14.708260   36333 main.go:141] libmachine: (multinode-736061) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061 (perms=drwx------)
	I0916 11:05:14.708270   36333 main.go:141] libmachine: (multinode-736061) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851/.minikube/machines (perms=drwxr-xr-x)
	I0916 11:05:14.708276   36333 main.go:141] libmachine: (multinode-736061) DBG | Writing SSH key tar header
	I0916 11:05:14.708283   36333 main.go:141] libmachine: (multinode-736061) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851/.minikube (perms=drwxr-xr-x)
	I0916 11:05:14.708290   36333 main.go:141] libmachine: (multinode-736061) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851 (perms=drwxrwxr-x)
	I0916 11:05:14.708296   36333 main.go:141] libmachine: (multinode-736061) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0916 11:05:14.708304   36333 main.go:141] libmachine: (multinode-736061) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0916 11:05:14.708308   36333 main.go:141] libmachine: (multinode-736061) Creating domain...
	I0916 11:05:14.708320   36333 main.go:141] libmachine: (multinode-736061) DBG | I0916 11:05:14.708097   36356 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061 ...
	I0916 11:05:14.708327   36333 main.go:141] libmachine: (multinode-736061) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061
	I0916 11:05:14.708336   36333 main.go:141] libmachine: (multinode-736061) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851/.minikube/machines
	I0916 11:05:14.708342   36333 main.go:141] libmachine: (multinode-736061) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 11:05:14.708429   36333 main.go:141] libmachine: (multinode-736061) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851
	I0916 11:05:14.708467   36333 main.go:141] libmachine: (multinode-736061) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0916 11:05:14.708478   36333 main.go:141] libmachine: (multinode-736061) DBG | Checking permissions on dir: /home/jenkins
	I0916 11:05:14.708483   36333 main.go:141] libmachine: (multinode-736061) DBG | Checking permissions on dir: /home
	I0916 11:05:14.708491   36333 main.go:141] libmachine: (multinode-736061) DBG | Skipping /home - not owner
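The run above walks every parent of the machine directory, setting the owner-execute bit on each directory the Jenkins user owns and skipping /home because it does not. A minimal Go sketch of that walk follows; the fixPermissions helper and its behaviour at the root are illustrative, not the minikube source.

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"syscall"
)

// fixPermissions walks from dir up toward the filesystem root, adding the
// owner-execute bit on every directory the current user owns and skipping
// the rest, mirroring the "Skipping /home - not owner" message above.
func fixPermissions(dir string) error {
	for {
		info, err := os.Stat(dir)
		if err != nil {
			return err
		}
		if st, ok := info.Sys().(*syscall.Stat_t); ok && int(st.Uid) != os.Getuid() {
			fmt.Printf("Skipping %s - not owner\n", dir)
		} else if err := os.Chmod(dir, info.Mode()|0o100); err != nil {
			return err
		}
		parent := filepath.Dir(dir)
		if parent == dir { // reached "/"
			return nil
		}
		dir = parent
	}
}

func main() {
	machineDir := "/home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061"
	if err := fixPermissions(machineDir); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}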
	I0916 11:05:14.709442   36333 main.go:141] libmachine: (multinode-736061) define libvirt domain using xml: 
	I0916 11:05:14.709458   36333 main.go:141] libmachine: (multinode-736061) <domain type='kvm'>
	I0916 11:05:14.709467   36333 main.go:141] libmachine: (multinode-736061)   <name>multinode-736061</name>
	I0916 11:05:14.709481   36333 main.go:141] libmachine: (multinode-736061)   <memory unit='MiB'>2200</memory>
	I0916 11:05:14.709490   36333 main.go:141] libmachine: (multinode-736061)   <vcpu>2</vcpu>
	I0916 11:05:14.709497   36333 main.go:141] libmachine: (multinode-736061)   <features>
	I0916 11:05:14.709504   36333 main.go:141] libmachine: (multinode-736061)     <acpi/>
	I0916 11:05:14.709518   36333 main.go:141] libmachine: (multinode-736061)     <apic/>
	I0916 11:05:14.709529   36333 main.go:141] libmachine: (multinode-736061)     <pae/>
	I0916 11:05:14.709536   36333 main.go:141] libmachine: (multinode-736061)     
	I0916 11:05:14.709543   36333 main.go:141] libmachine: (multinode-736061)   </features>
	I0916 11:05:14.709554   36333 main.go:141] libmachine: (multinode-736061)   <cpu mode='host-passthrough'>
	I0916 11:05:14.709564   36333 main.go:141] libmachine: (multinode-736061)   
	I0916 11:05:14.709571   36333 main.go:141] libmachine: (multinode-736061)   </cpu>
	I0916 11:05:14.709578   36333 main.go:141] libmachine: (multinode-736061)   <os>
	I0916 11:05:14.709588   36333 main.go:141] libmachine: (multinode-736061)     <type>hvm</type>
	I0916 11:05:14.709597   36333 main.go:141] libmachine: (multinode-736061)     <boot dev='cdrom'/>
	I0916 11:05:14.709610   36333 main.go:141] libmachine: (multinode-736061)     <boot dev='hd'/>
	I0916 11:05:14.709643   36333 main.go:141] libmachine: (multinode-736061)     <bootmenu enable='no'/>
	I0916 11:05:14.709666   36333 main.go:141] libmachine: (multinode-736061)   </os>
	I0916 11:05:14.709673   36333 main.go:141] libmachine: (multinode-736061)   <devices>
	I0916 11:05:14.709680   36333 main.go:141] libmachine: (multinode-736061)     <disk type='file' device='cdrom'>
	I0916 11:05:14.709698   36333 main.go:141] libmachine: (multinode-736061)       <source file='/home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061/boot2docker.iso'/>
	I0916 11:05:14.709713   36333 main.go:141] libmachine: (multinode-736061)       <target dev='hdc' bus='scsi'/>
	I0916 11:05:14.709725   36333 main.go:141] libmachine: (multinode-736061)       <readonly/>
	I0916 11:05:14.709734   36333 main.go:141] libmachine: (multinode-736061)     </disk>
	I0916 11:05:14.709746   36333 main.go:141] libmachine: (multinode-736061)     <disk type='file' device='disk'>
	I0916 11:05:14.709758   36333 main.go:141] libmachine: (multinode-736061)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0916 11:05:14.709774   36333 main.go:141] libmachine: (multinode-736061)       <source file='/home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061/multinode-736061.rawdisk'/>
	I0916 11:05:14.709785   36333 main.go:141] libmachine: (multinode-736061)       <target dev='hda' bus='virtio'/>
	I0916 11:05:14.709801   36333 main.go:141] libmachine: (multinode-736061)     </disk>
	I0916 11:05:14.709829   36333 main.go:141] libmachine: (multinode-736061)     <interface type='network'>
	I0916 11:05:14.709842   36333 main.go:141] libmachine: (multinode-736061)       <source network='mk-multinode-736061'/>
	I0916 11:05:14.709853   36333 main.go:141] libmachine: (multinode-736061)       <model type='virtio'/>
	I0916 11:05:14.709863   36333 main.go:141] libmachine: (multinode-736061)     </interface>
	I0916 11:05:14.709873   36333 main.go:141] libmachine: (multinode-736061)     <interface type='network'>
	I0916 11:05:14.709890   36333 main.go:141] libmachine: (multinode-736061)       <source network='default'/>
	I0916 11:05:14.709904   36333 main.go:141] libmachine: (multinode-736061)       <model type='virtio'/>
	I0916 11:05:14.709916   36333 main.go:141] libmachine: (multinode-736061)     </interface>
	I0916 11:05:14.709923   36333 main.go:141] libmachine: (multinode-736061)     <serial type='pty'>
	I0916 11:05:14.709932   36333 main.go:141] libmachine: (multinode-736061)       <target port='0'/>
	I0916 11:05:14.709941   36333 main.go:141] libmachine: (multinode-736061)     </serial>
	I0916 11:05:14.709952   36333 main.go:141] libmachine: (multinode-736061)     <console type='pty'>
	I0916 11:05:14.709962   36333 main.go:141] libmachine: (multinode-736061)       <target type='serial' port='0'/>
	I0916 11:05:14.709970   36333 main.go:141] libmachine: (multinode-736061)     </console>
	I0916 11:05:14.709992   36333 main.go:141] libmachine: (multinode-736061)     <rng model='virtio'>
	I0916 11:05:14.710011   36333 main.go:141] libmachine: (multinode-736061)       <backend model='random'>/dev/random</backend>
	I0916 11:05:14.710020   36333 main.go:141] libmachine: (multinode-736061)     </rng>
	I0916 11:05:14.710028   36333 main.go:141] libmachine: (multinode-736061)     
	I0916 11:05:14.710036   36333 main.go:141] libmachine: (multinode-736061)     
	I0916 11:05:14.710044   36333 main.go:141] libmachine: (multinode-736061)   </devices>
	I0916 11:05:14.710055   36333 main.go:141] libmachine: (multinode-736061) </domain>
	I0916 11:05:14.710158   36333 main.go:141] libmachine: (multinode-736061) 
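The driver assembles this XML in Go and hands it to libvirt when defining the domain. A stripped-down sketch of rendering a comparable definition with text/template is below; the template text and field names are illustrative, not the kvm2 driver's actual template, and the boot menu, serial console, and RNG device shown in the log are omitted for brevity.

package main

import (
	"os"
	"text/template"
)

// domainTmpl is a trimmed libvirt domain definition with the same shape as
// the XML logged above: ISO as cdrom, raw disk on virtio, one virtio NIC.
const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.CPUs}}</vcpu>
  <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
  <devices>
    <disk type='file' device='cdrom'>
      <source file='{{.ISOPath}}'/>
      <target dev='hdc' bus='scsi'/>
      <readonly/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='default' io='threads'/>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
`

func main() {
	t := template.Must(template.New("domain").Parse(domainTmpl))
	// Values taken from the log above; the rendered XML would then be passed
	// to virsh define or the libvirt API.
	t.Execute(os.Stdout, map[string]interface{}{
		"Name":      "multinode-736061",
		"MemoryMiB": 2200,
		"CPUs":      2,
		"ISOPath":   "/home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061/boot2docker.iso",
		"DiskPath":  "/home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061/multinode-736061.rawdisk",
		"Network":   "mk-multinode-736061",
	})
}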
	I0916 11:05:14.714475   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:e4:3e:ff in network default
	I0916 11:05:14.715227   36333 main.go:141] libmachine: (multinode-736061) Ensuring networks are active...
	I0916 11:05:14.715242   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:14.715961   36333 main.go:141] libmachine: (multinode-736061) Ensuring network default is active
	I0916 11:05:14.716252   36333 main.go:141] libmachine: (multinode-736061) Ensuring network mk-multinode-736061 is active
	I0916 11:05:14.716836   36333 main.go:141] libmachine: (multinode-736061) Getting domain xml...
	I0916 11:05:14.717658   36333 main.go:141] libmachine: (multinode-736061) Creating domain...
	I0916 11:05:15.920598   36333 main.go:141] libmachine: (multinode-736061) Waiting to get IP...
	I0916 11:05:15.921389   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:15.921798   36333 main.go:141] libmachine: (multinode-736061) DBG | unable to find current IP address of domain multinode-736061 in network mk-multinode-736061
	I0916 11:05:15.921861   36333 main.go:141] libmachine: (multinode-736061) DBG | I0916 11:05:15.921786   36356 retry.go:31] will retry after 223.192284ms: waiting for machine to come up
	I0916 11:05:16.146274   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:16.146739   36333 main.go:141] libmachine: (multinode-736061) DBG | unable to find current IP address of domain multinode-736061 in network mk-multinode-736061
	I0916 11:05:16.146767   36333 main.go:141] libmachine: (multinode-736061) DBG | I0916 11:05:16.146689   36356 retry.go:31] will retry after 252.499488ms: waiting for machine to come up
	I0916 11:05:16.401280   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:16.401740   36333 main.go:141] libmachine: (multinode-736061) DBG | unable to find current IP address of domain multinode-736061 in network mk-multinode-736061
	I0916 11:05:16.401759   36333 main.go:141] libmachine: (multinode-736061) DBG | I0916 11:05:16.401697   36356 retry.go:31] will retry after 482.760363ms: waiting for machine to come up
	I0916 11:05:16.886298   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:16.886830   36333 main.go:141] libmachine: (multinode-736061) DBG | unable to find current IP address of domain multinode-736061 in network mk-multinode-736061
	I0916 11:05:16.886865   36333 main.go:141] libmachine: (multinode-736061) DBG | I0916 11:05:16.886749   36356 retry.go:31] will retry after 439.063598ms: waiting for machine to come up
	I0916 11:05:17.326932   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:17.327400   36333 main.go:141] libmachine: (multinode-736061) DBG | unable to find current IP address of domain multinode-736061 in network mk-multinode-736061
	I0916 11:05:17.327423   36333 main.go:141] libmachine: (multinode-736061) DBG | I0916 11:05:17.327352   36356 retry.go:31] will retry after 505.8946ms: waiting for machine to come up
	I0916 11:05:17.835052   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:17.835477   36333 main.go:141] libmachine: (multinode-736061) DBG | unable to find current IP address of domain multinode-736061 in network mk-multinode-736061
	I0916 11:05:17.835502   36333 main.go:141] libmachine: (multinode-736061) DBG | I0916 11:05:17.835432   36356 retry.go:31] will retry after 717.593659ms: waiting for machine to come up
	I0916 11:05:18.554420   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:18.554893   36333 main.go:141] libmachine: (multinode-736061) DBG | unable to find current IP address of domain multinode-736061 in network mk-multinode-736061
	I0916 11:05:18.554930   36333 main.go:141] libmachine: (multinode-736061) DBG | I0916 11:05:18.554829   36356 retry.go:31] will retry after 1.016278613s: waiting for machine to come up
	I0916 11:05:19.572904   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:19.573341   36333 main.go:141] libmachine: (multinode-736061) DBG | unable to find current IP address of domain multinode-736061 in network mk-multinode-736061
	I0916 11:05:19.573364   36333 main.go:141] libmachine: (multinode-736061) DBG | I0916 11:05:19.573302   36356 retry.go:31] will retry after 1.277341936s: waiting for machine to come up
	I0916 11:05:20.852855   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:20.853321   36333 main.go:141] libmachine: (multinode-736061) DBG | unable to find current IP address of domain multinode-736061 in network mk-multinode-736061
	I0916 11:05:20.853351   36333 main.go:141] libmachine: (multinode-736061) DBG | I0916 11:05:20.853297   36356 retry.go:31] will retry after 1.793810706s: waiting for machine to come up
	I0916 11:05:22.649467   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:22.649908   36333 main.go:141] libmachine: (multinode-736061) DBG | unable to find current IP address of domain multinode-736061 in network mk-multinode-736061
	I0916 11:05:22.649931   36333 main.go:141] libmachine: (multinode-736061) DBG | I0916 11:05:22.649869   36356 retry.go:31] will retry after 2.307737171s: waiting for machine to come up
	I0916 11:05:24.959386   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:24.959782   36333 main.go:141] libmachine: (multinode-736061) DBG | unable to find current IP address of domain multinode-736061 in network mk-multinode-736061
	I0916 11:05:24.959810   36333 main.go:141] libmachine: (multinode-736061) DBG | I0916 11:05:24.959752   36356 retry.go:31] will retry after 1.783352311s: waiting for machine to come up
	I0916 11:05:26.745737   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:26.746182   36333 main.go:141] libmachine: (multinode-736061) DBG | unable to find current IP address of domain multinode-736061 in network mk-multinode-736061
	I0916 11:05:26.746196   36333 main.go:141] libmachine: (multinode-736061) DBG | I0916 11:05:26.746148   36356 retry.go:31] will retry after 3.631719991s: waiting for machine to come up
	I0916 11:05:30.379263   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:30.379706   36333 main.go:141] libmachine: (multinode-736061) DBG | unable to find current IP address of domain multinode-736061 in network mk-multinode-736061
	I0916 11:05:30.379735   36333 main.go:141] libmachine: (multinode-736061) DBG | I0916 11:05:30.379652   36356 retry.go:31] will retry after 2.815578177s: waiting for machine to come up
	I0916 11:05:33.198465   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:33.198966   36333 main.go:141] libmachine: (multinode-736061) DBG | unable to find current IP address of domain multinode-736061 in network mk-multinode-736061
	I0916 11:05:33.198991   36333 main.go:141] libmachine: (multinode-736061) DBG | I0916 11:05:33.198922   36356 retry.go:31] will retry after 3.799964021s: waiting for machine to come up
	I0916 11:05:37.002591   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:37.003027   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has current primary IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:37.003052   36333 main.go:141] libmachine: (multinode-736061) Found IP for machine: 192.168.39.32
	I0916 11:05:37.003065   36333 main.go:141] libmachine: (multinode-736061) Reserving static IP address...
	I0916 11:05:37.003449   36333 main.go:141] libmachine: (multinode-736061) DBG | unable to find host DHCP lease matching {name: "multinode-736061", mac: "52:54:00:c1:52:21", ip: "192.168.39.32"} in network mk-multinode-736061
	I0916 11:05:37.077828   36333 main.go:141] libmachine: (multinode-736061) DBG | Getting to WaitForSSH function...
	I0916 11:05:37.077850   36333 main.go:141] libmachine: (multinode-736061) Reserved static IP address: 192.168.39.32
	I0916 11:05:37.077862   36333 main.go:141] libmachine: (multinode-736061) Waiting for SSH to be available...
	I0916 11:05:37.080375   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:37.080796   36333 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c1:52:21}
	I0916 11:05:37.080828   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:37.080993   36333 main.go:141] libmachine: (multinode-736061) DBG | Using SSH client type: external
	I0916 11:05:37.081020   36333 main.go:141] libmachine: (multinode-736061) DBG | Using SSH private key: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061/id_rsa (-rw-------)
	I0916 11:05:37.081050   36333 main.go:141] libmachine: (multinode-736061) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.32 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0916 11:05:37.081062   36333 main.go:141] libmachine: (multinode-736061) DBG | About to run SSH command:
	I0916 11:05:37.081074   36333 main.go:141] libmachine: (multinode-736061) DBG | exit 0
	I0916 11:05:37.209566   36333 main.go:141] libmachine: (multinode-736061) DBG | SSH cmd err, output: <nil>: 
	I0916 11:05:37.209898   36333 main.go:141] libmachine: (multinode-736061) KVM machine creation complete!
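The whole wait above, first for a DHCP lease to appear and then for sshd to answer "exit 0", follows one pattern: poll, and retry after a growing, slightly randomized delay (the "will retry after ..." lines from retry.go). A minimal sketch of that loop, assuming a caller-supplied check function that reports whether the machine is reachable yet; the backoff constants here are illustrative, not minikube's.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitFor polls check() with a growing, jittered delay until it succeeds or
// maxWait elapses, much like the retry loop in the log above.
func waitFor(check func() error, maxWait time.Duration) error {
	deadline := time.Now().Add(maxWait)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if err := check(); err == nil {
			return nil
		}
		// Jitter and grow the delay, capped so a single sleep stays short.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if delay < 4*time.Second {
			delay *= 2
		}
	}
	return errors.New("timed out waiting for machine")
}

func main() {
	start := time.Now()
	// Stand-in check: pretend the DHCP lease shows up after about 3 seconds.
	err := waitFor(func() error {
		if time.Since(start) > 3*time.Second {
			return nil
		}
		return errors.New("unable to find current IP address")
	}, 30*time.Second)
	fmt.Println("result:", err)
}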
	I0916 11:05:37.210248   36333 main.go:141] libmachine: (multinode-736061) Calling .GetConfigRaw
	I0916 11:05:37.210834   36333 main.go:141] libmachine: (multinode-736061) Calling .DriverName
	I0916 11:05:37.211040   36333 main.go:141] libmachine: (multinode-736061) Calling .DriverName
	I0916 11:05:37.211180   36333 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0916 11:05:37.211197   36333 main.go:141] libmachine: (multinode-736061) Calling .GetState
	I0916 11:05:37.212405   36333 main.go:141] libmachine: Detecting operating system of created instance...
	I0916 11:05:37.212417   36333 main.go:141] libmachine: Waiting for SSH to be available...
	I0916 11:05:37.212422   36333 main.go:141] libmachine: Getting to WaitForSSH function...
	I0916 11:05:37.212451   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHHostname
	I0916 11:05:37.214767   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:37.215122   36333 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:05:37.215146   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:37.215270   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHPort
	I0916 11:05:37.215430   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:05:37.215573   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:05:37.215674   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHUsername
	I0916 11:05:37.215811   36333 main.go:141] libmachine: Using SSH client type: native
	I0916 11:05:37.215994   36333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0916 11:05:37.216004   36333 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0916 11:05:37.324665   36333 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 11:05:37.324687   36333 main.go:141] libmachine: Detecting the provisioner...
	I0916 11:05:37.324695   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHHostname
	I0916 11:05:37.327356   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:37.327742   36333 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:05:37.327765   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:37.327962   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHPort
	I0916 11:05:37.328147   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:05:37.328297   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:05:37.328424   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHUsername
	I0916 11:05:37.328544   36333 main.go:141] libmachine: Using SSH client type: native
	I0916 11:05:37.328712   36333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0916 11:05:37.328721   36333 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0916 11:05:37.438637   36333 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0916 11:05:37.438699   36333 main.go:141] libmachine: found compatible host: buildroot
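Provisioner detection is just "cat /etc/os-release" over SSH followed by parsing the KEY=value pairs and matching ID/NAME against known distributions. A small sketch of the parsing step; the parseOSRelease helper is illustrative, not libmachine's implementation.

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease turns the KEY=value output shown above into a map,
// trimming the optional surrounding quotes.
func parseOSRelease(out string) map[string]string {
	fields := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(out))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || !strings.Contains(line, "=") {
			continue
		}
		kv := strings.SplitN(line, "=", 2)
		fields[kv[0]] = strings.Trim(kv[1], `"`)
	}
	return fields
}

func main() {
	out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	osr := parseOSRelease(out)
	if osr["ID"] == "buildroot" {
		fmt.Println("found compatible host:", osr["ID"])
	}
}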
	I0916 11:05:37.438705   36333 main.go:141] libmachine: Provisioning with buildroot...
	I0916 11:05:37.438712   36333 main.go:141] libmachine: (multinode-736061) Calling .GetMachineName
	I0916 11:05:37.438954   36333 buildroot.go:166] provisioning hostname "multinode-736061"
	I0916 11:05:37.438983   36333 main.go:141] libmachine: (multinode-736061) Calling .GetMachineName
	I0916 11:05:37.439145   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHHostname
	I0916 11:05:37.441912   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:37.442287   36333 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:05:37.442323   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:37.442444   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHPort
	I0916 11:05:37.442627   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:05:37.442759   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:05:37.442876   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHUsername
	I0916 11:05:37.443043   36333 main.go:141] libmachine: Using SSH client type: native
	I0916 11:05:37.443230   36333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0916 11:05:37.443245   36333 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-736061 && echo "multinode-736061" | sudo tee /etc/hostname
	I0916 11:05:37.568321   36333 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-736061
	
	I0916 11:05:37.568348   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHHostname
	I0916 11:05:37.571043   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:37.571306   36333 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:05:37.571337   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:37.571508   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHPort
	I0916 11:05:37.571675   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:05:37.571803   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:05:37.571940   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHUsername
	I0916 11:05:37.572158   36333 main.go:141] libmachine: Using SSH client type: native
	I0916 11:05:37.572336   36333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0916 11:05:37.572359   36333 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-736061' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-736061/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-736061' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 11:05:37.690231   36333 main.go:141] libmachine: SSH cmd err, output: <nil>: 
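Hostname provisioning is two SSH commands: "hostname" plus a tee into /etc/hostname, followed by the /etc/hosts patch shown above, which rewrites an existing 127.0.1.1 line or appends one. The same edit expressed as a local Go sketch; this is an illustration of the rewrite logic only, since on the guest it runs as the shell script above under sudo.

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// patchHosts returns the hosts file content with 127.0.1.1 pointing at name,
// mirroring the grep/sed/tee logic in the SSH command above.
func patchHosts(hosts, name string) string {
	if strings.Contains(hosts, name) {
		return hosts // hostname already present, nothing to do
	}
	re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if re.MatchString(hosts) {
		return re.ReplaceAllString(hosts, "127.0.1.1 "+name)
	}
	return hosts + "127.0.1.1 " + name + "\n"
}

func main() {
	hosts := "127.0.0.1 localhost\n127.0.1.1 minikube\n"
	fmt.Print(patchHosts(hosts, "multinode-736061"))
}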
	I0916 11:05:37.690283   36333 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3851/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3851/.minikube}
	I0916 11:05:37.690315   36333 buildroot.go:174] setting up certificates
	I0916 11:05:37.690325   36333 provision.go:84] configureAuth start
	I0916 11:05:37.690334   36333 main.go:141] libmachine: (multinode-736061) Calling .GetMachineName
	I0916 11:05:37.690613   36333 main.go:141] libmachine: (multinode-736061) Calling .GetIP
	I0916 11:05:37.693814   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:37.694221   36333 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:05:37.694249   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:37.694359   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHHostname
	I0916 11:05:37.696453   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:37.696896   36333 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:05:37.696929   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:37.697113   36333 provision.go:143] copyHostCerts
	I0916 11:05:37.697156   36333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem
	I0916 11:05:37.697191   36333 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem, removing ...
	I0916 11:05:37.697214   36333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem
	I0916 11:05:37.697279   36333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem (1123 bytes)
	I0916 11:05:37.697394   36333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem
	I0916 11:05:37.697428   36333 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem, removing ...
	I0916 11:05:37.697437   36333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem
	I0916 11:05:37.697468   36333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem (1679 bytes)
	I0916 11:05:37.697543   36333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem
	I0916 11:05:37.697565   36333 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem, removing ...
	I0916 11:05:37.697574   36333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem
	I0916 11:05:37.697603   36333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem (1082 bytes)
	I0916 11:05:37.697684   36333 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem org=jenkins.multinode-736061 san=[127.0.0.1 192.168.39.32 localhost minikube multinode-736061]
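The server certificate is generated on the host with the SAN list shown above (loopback, the VM's IP, and the machine names) and signed by the profile's CA. A compressed sketch of building a certificate with those SANs using crypto/x509 follows; it self-signs for brevity, whereas the real flow signs with ca.pem/ca-key.pem, and the validity period and key size here are illustrative.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-736061"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the provision.go line above.
		DNSNames:    []string{"localhost", "minikube", "multinode-736061"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.32")},
	}
	// Self-signed here for brevity; minikube signs with its CA key instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}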
	I0916 11:05:37.755498   36333 provision.go:177] copyRemoteCerts
	I0916 11:05:37.755561   36333 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 11:05:37.755585   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHHostname
	I0916 11:05:37.758016   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:37.758372   36333 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:05:37.758398   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:37.758541   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHPort
	I0916 11:05:37.758722   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:05:37.758852   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHUsername
	I0916 11:05:37.758993   36333 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061/id_rsa Username:docker}
	I0916 11:05:37.844283   36333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 11:05:37.844364   36333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 11:05:37.868824   36333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 11:05:37.868898   36333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0916 11:05:37.893315   36333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 11:05:37.893390   36333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 11:05:37.918259   36333 provision.go:87] duration metric: took 227.922707ms to configureAuth
	I0916 11:05:37.918284   36333 buildroot.go:189] setting minikube options for container-runtime
	I0916 11:05:37.918465   36333 config.go:182] Loaded profile config "multinode-736061": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:05:37.918535   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHHostname
	I0916 11:05:37.921204   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:37.921532   36333 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:05:37.921571   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:37.921782   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHPort
	I0916 11:05:37.921968   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:05:37.922114   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:05:37.922246   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHUsername
	I0916 11:05:37.922383   36333 main.go:141] libmachine: Using SSH client type: native
	I0916 11:05:37.922547   36333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0916 11:05:37.922561   36333 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 11:05:38.158725   36333 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 11:05:38.158757   36333 main.go:141] libmachine: Checking connection to Docker...
	I0916 11:05:38.158768   36333 main.go:141] libmachine: (multinode-736061) Calling .GetURL
	I0916 11:05:38.159927   36333 main.go:141] libmachine: (multinode-736061) DBG | Using libvirt version 6000000
	I0916 11:05:38.162000   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:38.162328   36333 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:05:38.162348   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:38.162524   36333 main.go:141] libmachine: Docker is up and running!
	I0916 11:05:38.162535   36333 main.go:141] libmachine: Reticulating splines...
	I0916 11:05:38.162541   36333 client.go:171] duration metric: took 23.847265768s to LocalClient.Create
	I0916 11:05:38.162563   36333 start.go:167] duration metric: took 23.847331794s to libmachine.API.Create "multinode-736061"
	I0916 11:05:38.162572   36333 start.go:293] postStartSetup for "multinode-736061" (driver="kvm2")
	I0916 11:05:38.162587   36333 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 11:05:38.162609   36333 main.go:141] libmachine: (multinode-736061) Calling .DriverName
	I0916 11:05:38.162811   36333 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 11:05:38.162832   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHHostname
	I0916 11:05:38.165012   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:38.165330   36333 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:05:38.165353   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:38.165518   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHPort
	I0916 11:05:38.165715   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:05:38.165857   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHUsername
	I0916 11:05:38.166003   36333 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061/id_rsa Username:docker}
	I0916 11:05:38.253609   36333 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 11:05:38.257916   36333 command_runner.go:130] > NAME=Buildroot
	I0916 11:05:38.257936   36333 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0916 11:05:38.257941   36333 command_runner.go:130] > ID=buildroot
	I0916 11:05:38.257946   36333 command_runner.go:130] > VERSION_ID=2023.02.9
	I0916 11:05:38.257951   36333 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0916 11:05:38.258214   36333 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 11:05:38.258231   36333 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3851/.minikube/addons for local assets ...
	I0916 11:05:38.258293   36333 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3851/.minikube/files for local assets ...
	I0916 11:05:38.258382   36333 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem -> 112032.pem in /etc/ssl/certs
	I0916 11:05:38.258394   36333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem -> /etc/ssl/certs/112032.pem
	I0916 11:05:38.258480   36333 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 11:05:38.270166   36333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem --> /etc/ssl/certs/112032.pem (1708 bytes)
	I0916 11:05:38.296379   36333 start.go:296] duration metric: took 133.789681ms for postStartSetup
	I0916 11:05:38.296431   36333 main.go:141] libmachine: (multinode-736061) Calling .GetConfigRaw
	I0916 11:05:38.297043   36333 main.go:141] libmachine: (multinode-736061) Calling .GetIP
	I0916 11:05:38.299668   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:38.300016   36333 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:05:38.300042   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:38.300311   36333 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/config.json ...
	I0916 11:05:38.300500   36333 start.go:128] duration metric: took 24.004129957s to createHost
	I0916 11:05:38.300522   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHHostname
	I0916 11:05:38.302695   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:38.302982   36333 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:05:38.303009   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:38.303135   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHPort
	I0916 11:05:38.303315   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:05:38.303448   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:05:38.303555   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHUsername
	I0916 11:05:38.303766   36333 main.go:141] libmachine: Using SSH client type: native
	I0916 11:05:38.303988   36333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0916 11:05:38.304015   36333 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 11:05:38.414103   36333 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726484738.390254572
	
	I0916 11:05:38.414126   36333 fix.go:216] guest clock: 1726484738.390254572
	I0916 11:05:38.414133   36333 fix.go:229] Guest: 2024-09-16 11:05:38.390254572 +0000 UTC Remote: 2024-09-16 11:05:38.300511058 +0000 UTC m=+24.111459581 (delta=89.743514ms)
	I0916 11:05:38.414152   36333 fix.go:200] guest clock delta is within tolerance: 89.743514ms
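The clock check runs "date +%s.%N" on the guest, parses the seconds.nanoseconds value, and compares it to the host's wall clock; only if the delta exceeded the tolerance would the guest clock be stepped. A small sketch of that comparison, with an illustrative tolerance value rather than the driver's actual limit.

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

func main() {
	// Output of `date +%s.%N` on the guest, as captured in the log above.
	guestOut := "1726484738.390254572"
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest)

	const tolerance = 1 * time.Second // illustrative threshold
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would reset guest clock\n", delta)
	}
}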
	I0916 11:05:38.414156   36333 start.go:83] releasing machines lock for "multinode-736061", held for 24.117861591s
	I0916 11:05:38.414172   36333 main.go:141] libmachine: (multinode-736061) Calling .DriverName
	I0916 11:05:38.414417   36333 main.go:141] libmachine: (multinode-736061) Calling .GetIP
	I0916 11:05:38.416822   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:38.417114   36333 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:05:38.417158   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:38.417310   36333 main.go:141] libmachine: (multinode-736061) Calling .DriverName
	I0916 11:05:38.417820   36333 main.go:141] libmachine: (multinode-736061) Calling .DriverName
	I0916 11:05:38.417984   36333 main.go:141] libmachine: (multinode-736061) Calling .DriverName
	I0916 11:05:38.418077   36333 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 11:05:38.418117   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHHostname
	I0916 11:05:38.418222   36333 ssh_runner.go:195] Run: cat /version.json
	I0916 11:05:38.418262   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHHostname
	I0916 11:05:38.420987   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:38.421076   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:38.421362   36333 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:05:38.421406   36333 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:05:38.421430   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:38.421445   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:38.421558   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHPort
	I0916 11:05:38.421704   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHPort
	I0916 11:05:38.421766   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:05:38.421888   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:05:38.421905   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHUsername
	I0916 11:05:38.422061   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHUsername
	I0916 11:05:38.422072   36333 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061/id_rsa Username:docker}
	I0916 11:05:38.422199   36333 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061/id_rsa Username:docker}
	I0916 11:05:38.498269   36333 command_runner.go:130] > {"iso_version": "v1.34.0-1726415472-19646", "kicbase_version": "v0.0.45-1726358845-19644", "minikube_version": "v1.34.0", "commit": "7dc55c0008a982396eb57879cd4eab23ab96531e"}
	I0916 11:05:38.498534   36333 ssh_runner.go:195] Run: systemctl --version
	I0916 11:05:38.525682   36333 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0916 11:05:38.525778   36333 command_runner.go:130] > systemd 252 (252)
	I0916 11:05:38.525815   36333 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0916 11:05:38.525931   36333 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 11:05:38.683317   36333 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 11:05:38.689797   36333 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0916 11:05:38.690100   36333 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 11:05:38.690164   36333 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 11:05:38.706222   36333 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0916 11:05:38.706278   36333 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
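Because minikube manages its own CNI, any pre-installed bridge or podman CNI config is moved aside by appending .mk_disabled, exactly what the find/mv command above does to 87-podman-bridge.conflist. The same rename expressed as a Go sketch, run locally for illustration rather than over SSH.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Same effect as the `find ... -exec mv {} {}.mk_disabled` command above.
	for _, pattern := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
		matches, _ := filepath.Glob(pattern)
		for _, m := range matches {
			if filepath.Ext(m) == ".mk_disabled" {
				continue // already disabled
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				fmt.Fprintln(os.Stderr, err)
				continue
			}
			fmt.Printf("disabled %s\n", m)
		}
	}
}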
	I0916 11:05:38.706288   36333 start.go:495] detecting cgroup driver to use...
	I0916 11:05:38.706372   36333 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 11:05:38.723218   36333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 11:05:38.737310   36333 docker.go:217] disabling cri-docker service (if available) ...
	I0916 11:05:38.737379   36333 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 11:05:38.751153   36333 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 11:05:38.765082   36333 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 11:05:38.890954   36333 command_runner.go:130] ! Removed "/etc/systemd/system/sockets.target.wants/cri-docker.socket".
	I0916 11:05:38.891373   36333 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 11:05:38.910122   36333 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0916 11:05:39.046826   36333 docker.go:233] disabling docker service ...
	I0916 11:05:39.046929   36333 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 11:05:39.061765   36333 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 11:05:39.074270   36333 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I0916 11:05:39.074777   36333 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 11:05:39.090011   36333 command_runner.go:130] ! Removed "/etc/systemd/system/sockets.target.wants/docker.socket".
	I0916 11:05:39.201766   36333 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 11:05:39.329477   36333 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I0916 11:05:39.329506   36333 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0916 11:05:39.329726   36333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 11:05:39.343852   36333 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 11:05:39.362256   36333 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0916 11:05:39.362530   36333 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 11:05:39.362586   36333 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:05:39.373046   36333 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 11:05:39.373113   36333 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:05:39.383615   36333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:05:39.394098   36333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:05:39.404446   36333 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 11:05:39.415178   36333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:05:39.425488   36333 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:05:39.442620   36333 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:05:39.453139   36333 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 11:05:39.462440   36333 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0916 11:05:39.462485   36333 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0916 11:05:39.462555   36333 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0916 11:05:39.475750   36333 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 11:05:39.485289   36333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:05:39.608605   36333 ssh_runner.go:195] Run: sudo systemctl restart crio
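The CRI-O setup above is a series of sed rewrites of /etc/crio/crio.conf.d/02-crio.conf: pin pause_image to registry.k8s.io/pause:3.10, switch cgroup_manager to cgroupfs, pin conmon_cgroup to pod, and add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls, followed by daemon-reload and a crio restart. A local sketch of the first two rewrites with regexp; this is illustrative only, since on the guest the edits run as the sed commands shown.

package main

import (
	"fmt"
	"regexp"
)

// rewrite replaces the whole `key = ...` line with the given value, the same
// substitution the sed commands above perform in 02-crio.conf.
func rewrite(conf, key, value string) string {
	re := regexp.MustCompile(`(?m)^.*` + key + ` = .*$`)
	return re.ReplaceAllString(conf, key+` = "`+value+`"`)
}

func main() {
	conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
	conf = rewrite(conf, "pause_image", "registry.k8s.io/pause:3.10")
	conf = rewrite(conf, "cgroup_manager", "cgroupfs")
	fmt.Print(conf)
	// After writing the file back, the runtime is restarted:
	//   sudo systemctl daemon-reload && sudo systemctl restart crio
}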
	I0916 11:05:39.700595   36333 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 11:05:39.700670   36333 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 11:05:39.705387   36333 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0916 11:05:39.705420   36333 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0916 11:05:39.705429   36333 command_runner.go:130] > Device: 0,22	Inode: 693         Links: 1
	I0916 11:05:39.705439   36333 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 11:05:39.705447   36333 command_runner.go:130] > Access: 2024-09-16 11:05:39.669540055 +0000
	I0916 11:05:39.705455   36333 command_runner.go:130] > Modify: 2024-09-16 11:05:39.669540055 +0000
	I0916 11:05:39.705462   36333 command_runner.go:130] > Change: 2024-09-16 11:05:39.669540055 +0000
	I0916 11:05:39.705468   36333 command_runner.go:130] >  Birth: -
	I0916 11:05:39.705523   36333 start.go:563] Will wait 60s for crictl version
	I0916 11:05:39.705595   36333 ssh_runner.go:195] Run: which crictl
	I0916 11:05:39.709396   36333 command_runner.go:130] > /usr/bin/crictl
	I0916 11:05:39.709459   36333 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 11:05:39.755216   36333 command_runner.go:130] > Version:  0.1.0
	I0916 11:05:39.755237   36333 command_runner.go:130] > RuntimeName:  cri-o
	I0916 11:05:39.755241   36333 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0916 11:05:39.755246   36333 command_runner.go:130] > RuntimeApiVersion:  v1
	I0916 11:05:39.755263   36333 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0916 11:05:39.755341   36333 ssh_runner.go:195] Run: crio --version
	I0916 11:05:39.782225   36333 command_runner.go:130] > crio version 1.29.1
	I0916 11:05:39.782248   36333 command_runner.go:130] > Version:        1.29.1
	I0916 11:05:39.782254   36333 command_runner.go:130] > GitCommit:      unknown
	I0916 11:05:39.782258   36333 command_runner.go:130] > GitCommitDate:  unknown
	I0916 11:05:39.782261   36333 command_runner.go:130] > GitTreeState:   clean
	I0916 11:05:39.782267   36333 command_runner.go:130] > BuildDate:      2024-09-15T21:21:56Z
	I0916 11:05:39.782271   36333 command_runner.go:130] > GoVersion:      go1.21.6
	I0916 11:05:39.782275   36333 command_runner.go:130] > Compiler:       gc
	I0916 11:05:39.782281   36333 command_runner.go:130] > Platform:       linux/amd64
	I0916 11:05:39.782287   36333 command_runner.go:130] > Linkmode:       dynamic
	I0916 11:05:39.782301   36333 command_runner.go:130] > BuildTags:      
	I0916 11:05:39.782308   36333 command_runner.go:130] >   containers_image_ostree_stub
	I0916 11:05:39.782315   36333 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0916 11:05:39.782323   36333 command_runner.go:130] >   btrfs_noversion
	I0916 11:05:39.782328   36333 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0916 11:05:39.782336   36333 command_runner.go:130] >   libdm_no_deferred_remove
	I0916 11:05:39.782349   36333 command_runner.go:130] >   seccomp
	I0916 11:05:39.782356   36333 command_runner.go:130] > LDFlags:          unknown
	I0916 11:05:39.782360   36333 command_runner.go:130] > SeccompEnabled:   true
	I0916 11:05:39.782364   36333 command_runner.go:130] > AppArmorEnabled:  false
	I0916 11:05:39.783475   36333 ssh_runner.go:195] Run: crio --version
	I0916 11:05:39.810183   36333 command_runner.go:130] > crio version 1.29.1
	I0916 11:05:39.810214   36333 command_runner.go:130] > Version:        1.29.1
	I0916 11:05:39.810244   36333 command_runner.go:130] > GitCommit:      unknown
	I0916 11:05:39.810252   36333 command_runner.go:130] > GitCommitDate:  unknown
	I0916 11:05:39.810259   36333 command_runner.go:130] > GitTreeState:   clean
	I0916 11:05:39.810274   36333 command_runner.go:130] > BuildDate:      2024-09-15T21:21:56Z
	I0916 11:05:39.810284   36333 command_runner.go:130] > GoVersion:      go1.21.6
	I0916 11:05:39.810291   36333 command_runner.go:130] > Compiler:       gc
	I0916 11:05:39.810300   36333 command_runner.go:130] > Platform:       linux/amd64
	I0916 11:05:39.810310   36333 command_runner.go:130] > Linkmode:       dynamic
	I0916 11:05:39.810320   36333 command_runner.go:130] > BuildTags:      
	I0916 11:05:39.810330   36333 command_runner.go:130] >   containers_image_ostree_stub
	I0916 11:05:39.810338   36333 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0916 11:05:39.810348   36333 command_runner.go:130] >   btrfs_noversion
	I0916 11:05:39.810355   36333 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0916 11:05:39.810366   36333 command_runner.go:130] >   libdm_no_deferred_remove
	I0916 11:05:39.810374   36333 command_runner.go:130] >   seccomp
	I0916 11:05:39.810384   36333 command_runner.go:130] > LDFlags:          unknown
	I0916 11:05:39.810394   36333 command_runner.go:130] > SeccompEnabled:   true
	I0916 11:05:39.810403   36333 command_runner.go:130] > AppArmorEnabled:  false
	I0916 11:05:39.813350   36333 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0916 11:05:39.814716   36333 main.go:141] libmachine: (multinode-736061) Calling .GetIP
	I0916 11:05:39.817197   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:39.817500   36333 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:05:39.817523   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:39.817727   36333 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0916 11:05:39.822032   36333 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
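The grep/bash pair above idempotently pins host.minikube.internal in the guest's /etc/hosts: any existing entry is dropped and the file is rewritten with "192.168.39.1<tab>host.minikube.internal" appended, via a temp file and sudo cp. A rough local equivalent in Go, for illustration only (path and entry value are taken from the log, and the program needs root to write /etc/hosts):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const hostsPath = "/etc/hosts"
        const entry = "192.168.39.1\thost.minikube.internal"
        data, err := os.ReadFile(hostsPath)
        if err != nil {
            panic(err)
        }
        // Drop any line already ending in "<tab>host.minikube.internal",
        // then append the fresh entry, like the grep -v / echo pipeline above.
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\thost.minikube.internal") {
                kept = append(kept, line)
            }
        }
        out := strings.Join(kept, "\n") + "\n" + entry + "\n"
        if err := os.WriteFile(hostsPath, []byte(out), 0644); err != nil {
            panic(err)
        }
        fmt.Println("updated", hostsPath)
    }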
	I0916 11:05:39.834441   36333 kubeadm.go:883] updating cluster {Name:multinode-736061 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-736061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.32 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 11:05:39.834570   36333 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 11:05:39.834625   36333 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:05:39.864349   36333 command_runner.go:130] > {
	I0916 11:05:39.864374   36333 command_runner.go:130] >   "images": [
	I0916 11:05:39.864379   36333 command_runner.go:130] >   ]
	I0916 11:05:39.864396   36333 command_runner.go:130] > }
	I0916 11:05:39.864661   36333 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0916 11:05:39.864731   36333 ssh_runner.go:195] Run: which lz4
	I0916 11:05:39.868660   36333 command_runner.go:130] > /usr/bin/lz4
	I0916 11:05:39.868697   36333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0916 11:05:39.868790   36333 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0916 11:05:39.872970   36333 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0916 11:05:39.873016   36333 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0916 11:05:39.873041   36333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0916 11:05:41.190632   36333 crio.go:462] duration metric: took 1.321858637s to copy over tarball
	I0916 11:05:41.190715   36333 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0916 11:05:43.168588   36333 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.977833737s)
	I0916 11:05:43.168613   36333 crio.go:469] duration metric: took 1.977949269s to extract the tarball
	I0916 11:05:43.168621   36333 ssh_runner.go:146] rm: /preloaded.tar.lz4
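Here the preload tarball is copied to the node as /preloaded.tar.lz4 (the earlier stat showed it missing) and unpacked under /var, where CRI-O's image store lives, before being deleted. A minimal Go sketch of just the extraction step, for illustration only and run as root; the tar flags are the ones shown in the log:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "time"
    )

    func main() {
        start := time.Now()
        // Unpack the lz4-compressed preload into /var, preserving xattrs
        // such as file capabilities, exactly as the logged command does.
        cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            fmt.Fprintln(os.Stderr, "extract failed:", err)
            os.Exit(1)
        }
        fmt.Printf("extracted in %s\n", time.Since(start))
        // minikube then removes the tarball (ssh_runner.go:146 above).
    }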
	I0916 11:05:43.204999   36333 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:05:43.248459   36333 command_runner.go:130] > {
	I0916 11:05:43.248479   36333 command_runner.go:130] >   "images": [
	I0916 11:05:43.248483   36333 command_runner.go:130] >     {
	I0916 11:05:43.248496   36333 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0916 11:05:43.248502   36333 command_runner.go:130] >       "repoTags": [
	I0916 11:05:43.248508   36333 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0916 11:05:43.248511   36333 command_runner.go:130] >       ],
	I0916 11:05:43.248515   36333 command_runner.go:130] >       "repoDigests": [
	I0916 11:05:43.248525   36333 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0916 11:05:43.248534   36333 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0916 11:05:43.248544   36333 command_runner.go:130] >       ],
	I0916 11:05:43.248552   36333 command_runner.go:130] >       "size": "87190579",
	I0916 11:05:43.248556   36333 command_runner.go:130] >       "uid": null,
	I0916 11:05:43.248562   36333 command_runner.go:130] >       "username": "",
	I0916 11:05:43.248570   36333 command_runner.go:130] >       "spec": null,
	I0916 11:05:43.248576   36333 command_runner.go:130] >       "pinned": false
	I0916 11:05:43.248579   36333 command_runner.go:130] >     },
	I0916 11:05:43.248583   36333 command_runner.go:130] >     {
	I0916 11:05:43.248589   36333 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0916 11:05:43.248595   36333 command_runner.go:130] >       "repoTags": [
	I0916 11:05:43.248600   36333 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0916 11:05:43.248603   36333 command_runner.go:130] >       ],
	I0916 11:05:43.248608   36333 command_runner.go:130] >       "repoDigests": [
	I0916 11:05:43.248615   36333 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0916 11:05:43.248624   36333 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0916 11:05:43.248628   36333 command_runner.go:130] >       ],
	I0916 11:05:43.248639   36333 command_runner.go:130] >       "size": "31470524",
	I0916 11:05:43.248645   36333 command_runner.go:130] >       "uid": null,
	I0916 11:05:43.248649   36333 command_runner.go:130] >       "username": "",
	I0916 11:05:43.248655   36333 command_runner.go:130] >       "spec": null,
	I0916 11:05:43.248659   36333 command_runner.go:130] >       "pinned": false
	I0916 11:05:43.248664   36333 command_runner.go:130] >     },
	I0916 11:05:43.248667   36333 command_runner.go:130] >     {
	I0916 11:05:43.248678   36333 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0916 11:05:43.248683   36333 command_runner.go:130] >       "repoTags": [
	I0916 11:05:43.248690   36333 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0916 11:05:43.248694   36333 command_runner.go:130] >       ],
	I0916 11:05:43.248698   36333 command_runner.go:130] >       "repoDigests": [
	I0916 11:05:43.248708   36333 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0916 11:05:43.248715   36333 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0916 11:05:43.248721   36333 command_runner.go:130] >       ],
	I0916 11:05:43.248725   36333 command_runner.go:130] >       "size": "63273227",
	I0916 11:05:43.248729   36333 command_runner.go:130] >       "uid": null,
	I0916 11:05:43.248733   36333 command_runner.go:130] >       "username": "nonroot",
	I0916 11:05:43.248739   36333 command_runner.go:130] >       "spec": null,
	I0916 11:05:43.248743   36333 command_runner.go:130] >       "pinned": false
	I0916 11:05:43.248748   36333 command_runner.go:130] >     },
	I0916 11:05:43.248751   36333 command_runner.go:130] >     {
	I0916 11:05:43.248759   36333 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0916 11:05:43.248764   36333 command_runner.go:130] >       "repoTags": [
	I0916 11:05:43.248770   36333 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0916 11:05:43.248776   36333 command_runner.go:130] >       ],
	I0916 11:05:43.248782   36333 command_runner.go:130] >       "repoDigests": [
	I0916 11:05:43.248795   36333 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0916 11:05:43.248811   36333 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0916 11:05:43.248819   36333 command_runner.go:130] >       ],
	I0916 11:05:43.248826   36333 command_runner.go:130] >       "size": "149009664",
	I0916 11:05:43.248834   36333 command_runner.go:130] >       "uid": {
	I0916 11:05:43.248841   36333 command_runner.go:130] >         "value": "0"
	I0916 11:05:43.248849   36333 command_runner.go:130] >       },
	I0916 11:05:43.248855   36333 command_runner.go:130] >       "username": "",
	I0916 11:05:43.248864   36333 command_runner.go:130] >       "spec": null,
	I0916 11:05:43.248870   36333 command_runner.go:130] >       "pinned": false
	I0916 11:05:43.248877   36333 command_runner.go:130] >     },
	I0916 11:05:43.248883   36333 command_runner.go:130] >     {
	I0916 11:05:43.248894   36333 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0916 11:05:43.248902   36333 command_runner.go:130] >       "repoTags": [
	I0916 11:05:43.248912   36333 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0916 11:05:43.248917   36333 command_runner.go:130] >       ],
	I0916 11:05:43.248921   36333 command_runner.go:130] >       "repoDigests": [
	I0916 11:05:43.248928   36333 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0916 11:05:43.248937   36333 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0916 11:05:43.248941   36333 command_runner.go:130] >       ],
	I0916 11:05:43.248945   36333 command_runner.go:130] >       "size": "95237600",
	I0916 11:05:43.248949   36333 command_runner.go:130] >       "uid": {
	I0916 11:05:43.248953   36333 command_runner.go:130] >         "value": "0"
	I0916 11:05:43.248956   36333 command_runner.go:130] >       },
	I0916 11:05:43.248961   36333 command_runner.go:130] >       "username": "",
	I0916 11:05:43.248965   36333 command_runner.go:130] >       "spec": null,
	I0916 11:05:43.248969   36333 command_runner.go:130] >       "pinned": false
	I0916 11:05:43.248974   36333 command_runner.go:130] >     },
	I0916 11:05:43.248977   36333 command_runner.go:130] >     {
	I0916 11:05:43.248983   36333 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0916 11:05:43.248990   36333 command_runner.go:130] >       "repoTags": [
	I0916 11:05:43.248995   36333 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0916 11:05:43.248998   36333 command_runner.go:130] >       ],
	I0916 11:05:43.249002   36333 command_runner.go:130] >       "repoDigests": [
	I0916 11:05:43.249010   36333 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0916 11:05:43.249019   36333 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0916 11:05:43.249023   36333 command_runner.go:130] >       ],
	I0916 11:05:43.249028   36333 command_runner.go:130] >       "size": "89437508",
	I0916 11:05:43.249032   36333 command_runner.go:130] >       "uid": {
	I0916 11:05:43.249036   36333 command_runner.go:130] >         "value": "0"
	I0916 11:05:43.249041   36333 command_runner.go:130] >       },
	I0916 11:05:43.249048   36333 command_runner.go:130] >       "username": "",
	I0916 11:05:43.249054   36333 command_runner.go:130] >       "spec": null,
	I0916 11:05:43.249064   36333 command_runner.go:130] >       "pinned": false
	I0916 11:05:43.249069   36333 command_runner.go:130] >     },
	I0916 11:05:43.249076   36333 command_runner.go:130] >     {
	I0916 11:05:43.249087   36333 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0916 11:05:43.249096   36333 command_runner.go:130] >       "repoTags": [
	I0916 11:05:43.249104   36333 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0916 11:05:43.249113   36333 command_runner.go:130] >       ],
	I0916 11:05:43.249119   36333 command_runner.go:130] >       "repoDigests": [
	I0916 11:05:43.249151   36333 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0916 11:05:43.249166   36333 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0916 11:05:43.249171   36333 command_runner.go:130] >       ],
	I0916 11:05:43.249176   36333 command_runner.go:130] >       "size": "92733849",
	I0916 11:05:43.249181   36333 command_runner.go:130] >       "uid": null,
	I0916 11:05:43.249188   36333 command_runner.go:130] >       "username": "",
	I0916 11:05:43.249194   36333 command_runner.go:130] >       "spec": null,
	I0916 11:05:43.249202   36333 command_runner.go:130] >       "pinned": false
	I0916 11:05:43.249209   36333 command_runner.go:130] >     },
	I0916 11:05:43.249214   36333 command_runner.go:130] >     {
	I0916 11:05:43.249227   36333 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0916 11:05:43.249233   36333 command_runner.go:130] >       "repoTags": [
	I0916 11:05:43.249243   36333 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0916 11:05:43.249249   36333 command_runner.go:130] >       ],
	I0916 11:05:43.249257   36333 command_runner.go:130] >       "repoDigests": [
	I0916 11:05:43.249277   36333 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0916 11:05:43.249291   36333 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0916 11:05:43.249300   36333 command_runner.go:130] >       ],
	I0916 11:05:43.249310   36333 command_runner.go:130] >       "size": "68420934",
	I0916 11:05:43.249315   36333 command_runner.go:130] >       "uid": {
	I0916 11:05:43.249325   36333 command_runner.go:130] >         "value": "0"
	I0916 11:05:43.249330   36333 command_runner.go:130] >       },
	I0916 11:05:43.249337   36333 command_runner.go:130] >       "username": "",
	I0916 11:05:43.249346   36333 command_runner.go:130] >       "spec": null,
	I0916 11:05:43.249352   36333 command_runner.go:130] >       "pinned": false
	I0916 11:05:43.249359   36333 command_runner.go:130] >     },
	I0916 11:05:43.249364   36333 command_runner.go:130] >     {
	I0916 11:05:43.249377   36333 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0916 11:05:43.249388   36333 command_runner.go:130] >       "repoTags": [
	I0916 11:05:43.249397   36333 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0916 11:05:43.249405   36333 command_runner.go:130] >       ],
	I0916 11:05:43.249411   36333 command_runner.go:130] >       "repoDigests": [
	I0916 11:05:43.249420   36333 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0916 11:05:43.249427   36333 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0916 11:05:43.249433   36333 command_runner.go:130] >       ],
	I0916 11:05:43.249436   36333 command_runner.go:130] >       "size": "742080",
	I0916 11:05:43.249440   36333 command_runner.go:130] >       "uid": {
	I0916 11:05:43.249445   36333 command_runner.go:130] >         "value": "65535"
	I0916 11:05:43.249448   36333 command_runner.go:130] >       },
	I0916 11:05:43.249452   36333 command_runner.go:130] >       "username": "",
	I0916 11:05:43.249456   36333 command_runner.go:130] >       "spec": null,
	I0916 11:05:43.249460   36333 command_runner.go:130] >       "pinned": true
	I0916 11:05:43.249463   36333 command_runner.go:130] >     }
	I0916 11:05:43.249466   36333 command_runner.go:130] >   ]
	I0916 11:05:43.249469   36333 command_runner.go:130] > }
	I0916 11:05:43.249620   36333 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 11:05:43.249638   36333 cache_images.go:84] Images are preloaded, skipping loading
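After extraction, crictl images --output json is queried again; this time the expected control-plane images are present, so image loading is skipped. A hedged Go sketch of the same presence check (the struct models only the fields visible in the JSON above, and the image name is the one crio.go:510 reported as missing earlier):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
        "strings"
    )

    // imageList models just the fields of `crictl images --output json`
    // that appear in the log: a list of images with their repoTags.
    type imageList struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    func main() {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            panic(err)
        }
        var list imageList
        if err := json.Unmarshal(out, &list); err != nil {
            panic(err)
        }
        want := "registry.k8s.io/kube-apiserver:v1.31.1"
        for _, img := range list.Images {
            for _, tag := range img.RepoTags {
                if strings.EqualFold(tag, want) {
                    fmt.Println("images are preloaded, skipping loading")
                    return
                }
            }
        }
        fmt.Println("couldn't find", want, "- assuming images are not preloaded")
    }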
	I0916 11:05:43.249647   36333 kubeadm.go:934] updating node { 192.168.39.32 8443 v1.31.1 crio true true} ...
	I0916 11:05:43.249752   36333 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-736061 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.32
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-736061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
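The kubelet [Unit]/[Service] drop-in above is what pins this node's --hostname-override and --node-ip for v1.31.1. To make the shape of that rendering concrete, here is a hypothetical Go text/template sketch that produces an equivalent drop-in; the unit text and values are copied from the log, but the code itself is illustrative, not minikube's:

    package main

    import (
        "os"
        "text/template"
    )

    // kubeletOpts carries the per-node values substituted into the unit.
    type kubeletOpts struct {
        BinDir   string
        Hostname string
        NodeIP   string
    }

    const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.BinDir}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

    func main() {
        t := template.Must(template.New("kubelet").Parse(unit))
        _ = t.Execute(os.Stdout, kubeletOpts{
            BinDir:   "/var/lib/minikube/binaries/v1.31.1",
            Hostname: "multinode-736061",
            NodeIP:   "192.168.39.32",
        })
    }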
	I0916 11:05:43.249832   36333 ssh_runner.go:195] Run: crio config
	I0916 11:05:43.282902   36333 command_runner.go:130] ! time="2024-09-16 11:05:43.265188750Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0916 11:05:43.288223   36333 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0916 11:05:43.294413   36333 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0916 11:05:43.294444   36333 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0916 11:05:43.294454   36333 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0916 11:05:43.294460   36333 command_runner.go:130] > #
	I0916 11:05:43.294470   36333 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0916 11:05:43.294481   36333 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0916 11:05:43.294494   36333 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0916 11:05:43.294505   36333 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0916 11:05:43.294509   36333 command_runner.go:130] > # reload'.
	I0916 11:05:43.294515   36333 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0916 11:05:43.294523   36333 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0916 11:05:43.294557   36333 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0916 11:05:43.294566   36333 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0916 11:05:43.294570   36333 command_runner.go:130] > [crio]
	I0916 11:05:43.294576   36333 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0916 11:05:43.294584   36333 command_runner.go:130] > # containers images, in this directory.
	I0916 11:05:43.294589   36333 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0916 11:05:43.294599   36333 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0916 11:05:43.294604   36333 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0916 11:05:43.294615   36333 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0916 11:05:43.294620   36333 command_runner.go:130] > # imagestore = ""
	I0916 11:05:43.294631   36333 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0916 11:05:43.294637   36333 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0916 11:05:43.294643   36333 command_runner.go:130] > storage_driver = "overlay"
	I0916 11:05:43.294649   36333 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0916 11:05:43.294657   36333 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0916 11:05:43.294661   36333 command_runner.go:130] > storage_option = [
	I0916 11:05:43.294667   36333 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0916 11:05:43.294671   36333 command_runner.go:130] > ]
	I0916 11:05:43.294677   36333 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0916 11:05:43.294685   36333 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0916 11:05:43.294690   36333 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0916 11:05:43.294697   36333 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0916 11:05:43.294703   36333 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0916 11:05:43.294709   36333 command_runner.go:130] > # always happen on a node reboot
	I0916 11:05:43.294713   36333 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0916 11:05:43.294724   36333 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0916 11:05:43.294734   36333 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0916 11:05:43.294757   36333 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0916 11:05:43.294771   36333 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0916 11:05:43.294782   36333 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0916 11:05:43.294798   36333 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0916 11:05:43.294807   36333 command_runner.go:130] > # internal_wipe = true
	I0916 11:05:43.294817   36333 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0916 11:05:43.294831   36333 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0916 11:05:43.294838   36333 command_runner.go:130] > # internal_repair = false
	I0916 11:05:43.294844   36333 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0916 11:05:43.294852   36333 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0916 11:05:43.294857   36333 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0916 11:05:43.294868   36333 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0916 11:05:43.294876   36333 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0916 11:05:43.294880   36333 command_runner.go:130] > [crio.api]
	I0916 11:05:43.294885   36333 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0916 11:05:43.294890   36333 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0916 11:05:43.294895   36333 command_runner.go:130] > # IP address on which the stream server will listen.
	I0916 11:05:43.294902   36333 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0916 11:05:43.294908   36333 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0916 11:05:43.294915   36333 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0916 11:05:43.294919   36333 command_runner.go:130] > # stream_port = "0"
	I0916 11:05:43.294927   36333 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0916 11:05:43.294931   36333 command_runner.go:130] > # stream_enable_tls = false
	I0916 11:05:43.294939   36333 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0916 11:05:43.294943   36333 command_runner.go:130] > # stream_idle_timeout = ""
	I0916 11:05:43.294951   36333 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0916 11:05:43.294957   36333 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0916 11:05:43.294963   36333 command_runner.go:130] > # minutes.
	I0916 11:05:43.294966   36333 command_runner.go:130] > # stream_tls_cert = ""
	I0916 11:05:43.294974   36333 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0916 11:05:43.294979   36333 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0916 11:05:43.294988   36333 command_runner.go:130] > # stream_tls_key = ""
	I0916 11:05:43.294994   36333 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0916 11:05:43.294999   36333 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0916 11:05:43.295023   36333 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0916 11:05:43.295029   36333 command_runner.go:130] > # stream_tls_ca = ""
	I0916 11:05:43.295036   36333 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0916 11:05:43.295042   36333 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0916 11:05:43.295049   36333 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0916 11:05:43.295060   36333 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0916 11:05:43.295066   36333 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0916 11:05:43.295074   36333 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0916 11:05:43.295078   36333 command_runner.go:130] > [crio.runtime]
	I0916 11:05:43.295083   36333 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0916 11:05:43.295089   36333 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0916 11:05:43.295093   36333 command_runner.go:130] > # "nofile=1024:2048"
	I0916 11:05:43.295099   36333 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0916 11:05:43.295105   36333 command_runner.go:130] > # default_ulimits = [
	I0916 11:05:43.295108   36333 command_runner.go:130] > # ]
	I0916 11:05:43.295113   36333 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0916 11:05:43.295119   36333 command_runner.go:130] > # no_pivot = false
	I0916 11:05:43.295125   36333 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0916 11:05:43.295131   36333 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0916 11:05:43.295136   36333 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0916 11:05:43.295143   36333 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0916 11:05:43.295148   36333 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0916 11:05:43.295156   36333 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0916 11:05:43.295161   36333 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0916 11:05:43.295165   36333 command_runner.go:130] > # Cgroup setting for conmon
	I0916 11:05:43.295173   36333 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0916 11:05:43.295179   36333 command_runner.go:130] > conmon_cgroup = "pod"
	I0916 11:05:43.295184   36333 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0916 11:05:43.295191   36333 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0916 11:05:43.295197   36333 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0916 11:05:43.295202   36333 command_runner.go:130] > conmon_env = [
	I0916 11:05:43.295207   36333 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0916 11:05:43.295213   36333 command_runner.go:130] > ]
	I0916 11:05:43.295218   36333 command_runner.go:130] > # Additional environment variables to set for all the
	I0916 11:05:43.295224   36333 command_runner.go:130] > # containers. These are overridden if set in the
	I0916 11:05:43.295230   36333 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0916 11:05:43.295235   36333 command_runner.go:130] > # default_env = [
	I0916 11:05:43.295239   36333 command_runner.go:130] > # ]
	I0916 11:05:43.295248   36333 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0916 11:05:43.295257   36333 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0916 11:05:43.295260   36333 command_runner.go:130] > # selinux = false
	I0916 11:05:43.295267   36333 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0916 11:05:43.295274   36333 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0916 11:05:43.295279   36333 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0916 11:05:43.295284   36333 command_runner.go:130] > # seccomp_profile = ""
	I0916 11:05:43.295289   36333 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0916 11:05:43.295297   36333 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0916 11:05:43.295305   36333 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0916 11:05:43.295311   36333 command_runner.go:130] > # which might increase security.
	I0916 11:05:43.295315   36333 command_runner.go:130] > # This option is currently deprecated,
	I0916 11:05:43.295322   36333 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0916 11:05:43.295327   36333 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0916 11:05:43.295333   36333 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0916 11:05:43.295341   36333 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0916 11:05:43.295347   36333 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0916 11:05:43.295354   36333 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0916 11:05:43.295359   36333 command_runner.go:130] > # This option supports live configuration reload.
	I0916 11:05:43.295364   36333 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0916 11:05:43.295369   36333 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0916 11:05:43.295375   36333 command_runner.go:130] > # the cgroup blockio controller.
	I0916 11:05:43.295379   36333 command_runner.go:130] > # blockio_config_file = ""
	I0916 11:05:43.295385   36333 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0916 11:05:43.295390   36333 command_runner.go:130] > # blockio parameters.
	I0916 11:05:43.295394   36333 command_runner.go:130] > # blockio_reload = false
	I0916 11:05:43.295400   36333 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0916 11:05:43.295406   36333 command_runner.go:130] > # irqbalance daemon.
	I0916 11:05:43.295410   36333 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0916 11:05:43.295416   36333 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0916 11:05:43.295423   36333 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0916 11:05:43.295429   36333 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0916 11:05:43.295436   36333 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0916 11:05:43.295445   36333 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0916 11:05:43.295452   36333 command_runner.go:130] > # This option supports live configuration reload.
	I0916 11:05:43.295456   36333 command_runner.go:130] > # rdt_config_file = ""
	I0916 11:05:43.295463   36333 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0916 11:05:43.295467   36333 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0916 11:05:43.295499   36333 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0916 11:05:43.295507   36333 command_runner.go:130] > # separate_pull_cgroup = ""
	I0916 11:05:43.295513   36333 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0916 11:05:43.295518   36333 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0916 11:05:43.295522   36333 command_runner.go:130] > # will be added.
	I0916 11:05:43.295526   36333 command_runner.go:130] > # default_capabilities = [
	I0916 11:05:43.295530   36333 command_runner.go:130] > # 	"CHOWN",
	I0916 11:05:43.295534   36333 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0916 11:05:43.295538   36333 command_runner.go:130] > # 	"FSETID",
	I0916 11:05:43.295544   36333 command_runner.go:130] > # 	"FOWNER",
	I0916 11:05:43.295547   36333 command_runner.go:130] > # 	"SETGID",
	I0916 11:05:43.295550   36333 command_runner.go:130] > # 	"SETUID",
	I0916 11:05:43.295554   36333 command_runner.go:130] > # 	"SETPCAP",
	I0916 11:05:43.295558   36333 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0916 11:05:43.295561   36333 command_runner.go:130] > # 	"KILL",
	I0916 11:05:43.295564   36333 command_runner.go:130] > # ]
	I0916 11:05:43.295573   36333 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0916 11:05:43.295582   36333 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0916 11:05:43.295586   36333 command_runner.go:130] > # add_inheritable_capabilities = false
	I0916 11:05:43.295594   36333 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0916 11:05:43.295600   36333 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0916 11:05:43.295606   36333 command_runner.go:130] > default_sysctls = [
	I0916 11:05:43.295610   36333 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0916 11:05:43.295615   36333 command_runner.go:130] > ]
	I0916 11:05:43.295621   36333 command_runner.go:130] > # List of devices on the host that a
	I0916 11:05:43.295627   36333 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0916 11:05:43.295633   36333 command_runner.go:130] > # allowed_devices = [
	I0916 11:05:43.295637   36333 command_runner.go:130] > # 	"/dev/fuse",
	I0916 11:05:43.295644   36333 command_runner.go:130] > # ]
	I0916 11:05:43.295652   36333 command_runner.go:130] > # List of additional devices. specified as
	I0916 11:05:43.295658   36333 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0916 11:05:43.295666   36333 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0916 11:05:43.295671   36333 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0916 11:05:43.295677   36333 command_runner.go:130] > # additional_devices = [
	I0916 11:05:43.295681   36333 command_runner.go:130] > # ]
	I0916 11:05:43.295685   36333 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0916 11:05:43.295691   36333 command_runner.go:130] > # cdi_spec_dirs = [
	I0916 11:05:43.295694   36333 command_runner.go:130] > # 	"/etc/cdi",
	I0916 11:05:43.295698   36333 command_runner.go:130] > # 	"/var/run/cdi",
	I0916 11:05:43.295701   36333 command_runner.go:130] > # ]
	I0916 11:05:43.295709   36333 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0916 11:05:43.295714   36333 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0916 11:05:43.295720   36333 command_runner.go:130] > # Defaults to false.
	I0916 11:05:43.295724   36333 command_runner.go:130] > # device_ownership_from_security_context = false
	I0916 11:05:43.295732   36333 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0916 11:05:43.295745   36333 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0916 11:05:43.295754   36333 command_runner.go:130] > # hooks_dir = [
	I0916 11:05:43.295762   36333 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0916 11:05:43.295771   36333 command_runner.go:130] > # ]
	I0916 11:05:43.295781   36333 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0916 11:05:43.295793   36333 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0916 11:05:43.295804   36333 command_runner.go:130] > # its default mounts from the following two files:
	I0916 11:05:43.295809   36333 command_runner.go:130] > #
	I0916 11:05:43.295816   36333 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0916 11:05:43.295825   36333 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0916 11:05:43.295830   36333 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0916 11:05:43.295836   36333 command_runner.go:130] > #
	I0916 11:05:43.295841   36333 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0916 11:05:43.295847   36333 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0916 11:05:43.295855   36333 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0916 11:05:43.295864   36333 command_runner.go:130] > #      only add mounts it finds in this file.
	I0916 11:05:43.295874   36333 command_runner.go:130] > #
	I0916 11:05:43.295881   36333 command_runner.go:130] > # default_mounts_file = ""
	I0916 11:05:43.295886   36333 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0916 11:05:43.295893   36333 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0916 11:05:43.295898   36333 command_runner.go:130] > pids_limit = 1024
	I0916 11:05:43.295904   36333 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0916 11:05:43.295912   36333 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0916 11:05:43.295918   36333 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0916 11:05:43.295928   36333 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0916 11:05:43.295932   36333 command_runner.go:130] > # log_size_max = -1
	I0916 11:05:43.295941   36333 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0916 11:05:43.295945   36333 command_runner.go:130] > # log_to_journald = false
	I0916 11:05:43.295953   36333 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0916 11:05:43.295957   36333 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0916 11:05:43.295962   36333 command_runner.go:130] > # Path to directory for container attach sockets.
	I0916 11:05:43.295969   36333 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0916 11:05:43.295975   36333 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0916 11:05:43.295980   36333 command_runner.go:130] > # bind_mount_prefix = ""
	I0916 11:05:43.295985   36333 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0916 11:05:43.295989   36333 command_runner.go:130] > # read_only = false
	I0916 11:05:43.295995   36333 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0916 11:05:43.296001   36333 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0916 11:05:43.296007   36333 command_runner.go:130] > # live configuration reload.
	I0916 11:05:43.296013   36333 command_runner.go:130] > # log_level = "info"
	I0916 11:05:43.296018   36333 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0916 11:05:43.296023   36333 command_runner.go:130] > # This option supports live configuration reload.
	I0916 11:05:43.296026   36333 command_runner.go:130] > # log_filter = ""
	I0916 11:05:43.296032   36333 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0916 11:05:43.296042   36333 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0916 11:05:43.296045   36333 command_runner.go:130] > # separated by comma.
	I0916 11:05:43.296052   36333 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0916 11:05:43.296058   36333 command_runner.go:130] > # uid_mappings = ""
	I0916 11:05:43.296064   36333 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0916 11:05:43.296074   36333 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0916 11:05:43.296080   36333 command_runner.go:130] > # separated by comma.
	I0916 11:05:43.296088   36333 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0916 11:05:43.296094   36333 command_runner.go:130] > # gid_mappings = ""
	I0916 11:05:43.296100   36333 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0916 11:05:43.296108   36333 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0916 11:05:43.296116   36333 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0916 11:05:43.296125   36333 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0916 11:05:43.296129   36333 command_runner.go:130] > # minimum_mappable_uid = -1
	I0916 11:05:43.296134   36333 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0916 11:05:43.296140   36333 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0916 11:05:43.296148   36333 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0916 11:05:43.296156   36333 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0916 11:05:43.296162   36333 command_runner.go:130] > # minimum_mappable_gid = -1
	I0916 11:05:43.296168   36333 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0916 11:05:43.296176   36333 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0916 11:05:43.296181   36333 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0916 11:05:43.296186   36333 command_runner.go:130] > # ctr_stop_timeout = 30
	I0916 11:05:43.296191   36333 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0916 11:05:43.296199   36333 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0916 11:05:43.296203   36333 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0916 11:05:43.296210   36333 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0916 11:05:43.296214   36333 command_runner.go:130] > drop_infra_ctr = false
	I0916 11:05:43.296222   36333 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0916 11:05:43.296227   36333 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0916 11:05:43.296241   36333 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0916 11:05:43.296247   36333 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0916 11:05:43.296254   36333 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0916 11:05:43.296260   36333 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0916 11:05:43.296265   36333 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0916 11:05:43.296274   36333 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0916 11:05:43.296278   36333 command_runner.go:130] > # shared_cpuset = ""
	I0916 11:05:43.296285   36333 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0916 11:05:43.296294   36333 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0916 11:05:43.296300   36333 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0916 11:05:43.296307   36333 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0916 11:05:43.296313   36333 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0916 11:05:43.296318   36333 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0916 11:05:43.296326   36333 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0916 11:05:43.296330   36333 command_runner.go:130] > # enable_criu_support = false
	I0916 11:05:43.296336   36333 command_runner.go:130] > # Enable/disable the generation of the container,
	I0916 11:05:43.296356   36333 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0916 11:05:43.296366   36333 command_runner.go:130] > # enable_pod_events = false
	I0916 11:05:43.296372   36333 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0916 11:05:43.296386   36333 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0916 11:05:43.296390   36333 command_runner.go:130] > # default_runtime = "runc"
	I0916 11:05:43.296396   36333 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0916 11:05:43.296405   36333 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0916 11:05:43.296416   36333 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0916 11:05:43.296421   36333 command_runner.go:130] > # creation as a file is not desired either.
	I0916 11:05:43.296431   36333 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0916 11:05:43.296436   36333 command_runner.go:130] > # the hostname is being managed dynamically.
	I0916 11:05:43.296441   36333 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0916 11:05:43.296444   36333 command_runner.go:130] > # ]
	I0916 11:05:43.296450   36333 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0916 11:05:43.296458   36333 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0916 11:05:43.296464   36333 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0916 11:05:43.296471   36333 command_runner.go:130] > # Each entry in the table should follow the format:
	I0916 11:05:43.296474   36333 command_runner.go:130] > #
	I0916 11:05:43.296481   36333 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0916 11:05:43.296486   36333 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0916 11:05:43.296508   36333 command_runner.go:130] > # runtime_type = "oci"
	I0916 11:05:43.296513   36333 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0916 11:05:43.296517   36333 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0916 11:05:43.296522   36333 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0916 11:05:43.296526   36333 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0916 11:05:43.296530   36333 command_runner.go:130] > # monitor_env = []
	I0916 11:05:43.296534   36333 command_runner.go:130] > # privileged_without_host_devices = false
	I0916 11:05:43.296538   36333 command_runner.go:130] > # allowed_annotations = []
	I0916 11:05:43.296543   36333 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0916 11:05:43.296546   36333 command_runner.go:130] > # Where:
	I0916 11:05:43.296550   36333 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0916 11:05:43.296556   36333 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0916 11:05:43.296561   36333 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0916 11:05:43.296567   36333 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0916 11:05:43.296570   36333 command_runner.go:130] > #   in $PATH.
	I0916 11:05:43.296576   36333 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0916 11:05:43.296580   36333 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0916 11:05:43.296586   36333 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0916 11:05:43.296589   36333 command_runner.go:130] > #   state.
	I0916 11:05:43.296595   36333 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0916 11:05:43.296600   36333 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0916 11:05:43.296606   36333 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0916 11:05:43.296611   36333 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0916 11:05:43.296618   36333 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0916 11:05:43.296624   36333 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0916 11:05:43.296628   36333 command_runner.go:130] > #   The currently recognized values are:
	I0916 11:05:43.296633   36333 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0916 11:05:43.296640   36333 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0916 11:05:43.296645   36333 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0916 11:05:43.296650   36333 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0916 11:05:43.296657   36333 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0916 11:05:43.296662   36333 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0916 11:05:43.296668   36333 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0916 11:05:43.296673   36333 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0916 11:05:43.296683   36333 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0916 11:05:43.296689   36333 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0916 11:05:43.296696   36333 command_runner.go:130] > #   deprecated option "conmon".
	I0916 11:05:43.296703   36333 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0916 11:05:43.296711   36333 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0916 11:05:43.296717   36333 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0916 11:05:43.296724   36333 command_runner.go:130] > #   should be moved to the container's cgroup
	I0916 11:05:43.296731   36333 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0916 11:05:43.296741   36333 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0916 11:05:43.296751   36333 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0916 11:05:43.296762   36333 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0916 11:05:43.296770   36333 command_runner.go:130] > #
	I0916 11:05:43.296776   36333 command_runner.go:130] > # Using the seccomp notifier feature:
	I0916 11:05:43.296784   36333 command_runner.go:130] > #
	I0916 11:05:43.296793   36333 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0916 11:05:43.296805   36333 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0916 11:05:43.296812   36333 command_runner.go:130] > #
	I0916 11:05:43.296819   36333 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0916 11:05:43.296827   36333 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0916 11:05:43.296830   36333 command_runner.go:130] > #
	I0916 11:05:43.296836   36333 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0916 11:05:43.296842   36333 command_runner.go:130] > # feature.
	I0916 11:05:43.296846   36333 command_runner.go:130] > #
	I0916 11:05:43.296851   36333 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0916 11:05:43.296860   36333 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0916 11:05:43.296869   36333 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0916 11:05:43.296877   36333 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0916 11:05:43.296883   36333 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0916 11:05:43.296889   36333 command_runner.go:130] > #
	I0916 11:05:43.296894   36333 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0916 11:05:43.296902   36333 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0916 11:05:43.296905   36333 command_runner.go:130] > #
	I0916 11:05:43.296911   36333 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0916 11:05:43.296917   36333 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0916 11:05:43.296920   36333 command_runner.go:130] > #
	I0916 11:05:43.296926   36333 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0916 11:05:43.296934   36333 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0916 11:05:43.296941   36333 command_runner.go:130] > # limitation.
	I0916 11:05:43.296949   36333 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0916 11:05:43.296954   36333 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0916 11:05:43.296959   36333 command_runner.go:130] > runtime_type = "oci"
	I0916 11:05:43.296964   36333 command_runner.go:130] > runtime_root = "/run/runc"
	I0916 11:05:43.296968   36333 command_runner.go:130] > runtime_config_path = ""
	I0916 11:05:43.296972   36333 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0916 11:05:43.296976   36333 command_runner.go:130] > monitor_cgroup = "pod"
	I0916 11:05:43.296980   36333 command_runner.go:130] > monitor_exec_cgroup = ""
	I0916 11:05:43.296983   36333 command_runner.go:130] > monitor_env = [
	I0916 11:05:43.296989   36333 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0916 11:05:43.296994   36333 command_runner.go:130] > ]
	I0916 11:05:43.296999   36333 command_runner.go:130] > privileged_without_host_devices = false
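	For comparison with the generated runc handler above, a second handler entry following the documented [crio.runtime.runtimes.runtime-handler] format could look like the sketch below. The crun name, paths, and values are illustrative assumptions rather than part of this run's config; listing "io.kubernetes.cri-o.seccompNotifierAction" under allowed_annotations is what would permit the seccomp notifier feature described above for pods scheduled onto that handler:
	# illustrative sketch of an additional runtime handler; crun name and paths are assumed, not taken from this run
	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"
	runtime_type = "oci"
	runtime_root = "/run/crun"
	monitor_path = "/usr/libexec/crio/conmon"
	monitor_cgroup = "pod"
	allowed_annotations = [
		"io.kubernetes.cri-o.seccompNotifierAction",
	]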
	I0916 11:05:43.297008   36333 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0916 11:05:43.297013   36333 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0916 11:05:43.297020   36333 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0916 11:05:43.297028   36333 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0916 11:05:43.297037   36333 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0916 11:05:43.297043   36333 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0916 11:05:43.297054   36333 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0916 11:05:43.297064   36333 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0916 11:05:43.297069   36333 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0916 11:05:43.297078   36333 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0916 11:05:43.297082   36333 command_runner.go:130] > # Example:
	I0916 11:05:43.297086   36333 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0916 11:05:43.297091   36333 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0916 11:05:43.297099   36333 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0916 11:05:43.297104   36333 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0916 11:05:43.297109   36333 command_runner.go:130] > # cpuset = "0-1"
	I0916 11:05:43.297113   36333 command_runner.go:130] > # cpushares = 0
	I0916 11:05:43.297117   36333 command_runner.go:130] > # Where:
	I0916 11:05:43.297122   36333 command_runner.go:130] > # The workload name is workload-type.
	I0916 11:05:43.297148   36333 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0916 11:05:43.297158   36333 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0916 11:05:43.297163   36333 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0916 11:05:43.297173   36333 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0916 11:05:43.297179   36333 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
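	Putting the commented example above together, an enabled workload stanza could look like the sketch below; the workload name and cpuset value are illustrative assumptions, not part of this run's config. A pod opting in would carry the "io.crio/workload" activation annotation, with per-container overrides built from the annotation_prefix as described above:
	# illustrative sketch of an enabled workload; name and values are assumed
	[crio.runtime.workloads.throttled]
	activation_annotation = "io.crio/workload"
	annotation_prefix = "io.crio.workload-type"
	[crio.runtime.workloads.throttled.resources]
	cpuset = "0-1"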
	I0916 11:05:43.297186   36333 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0916 11:05:43.297195   36333 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0916 11:05:43.297201   36333 command_runner.go:130] > # Default value is set to true
	I0916 11:05:43.297206   36333 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0916 11:05:43.297213   36333 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0916 11:05:43.297218   36333 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0916 11:05:43.297225   36333 command_runner.go:130] > # Default value is set to 'false'
	I0916 11:05:43.297229   36333 command_runner.go:130] > # disable_hostport_mapping = false
	I0916 11:05:43.297235   36333 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0916 11:05:43.297240   36333 command_runner.go:130] > #
	I0916 11:05:43.297246   36333 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0916 11:05:43.297252   36333 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0916 11:05:43.297260   36333 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0916 11:05:43.297266   36333 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0916 11:05:43.297273   36333 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0916 11:05:43.297277   36333 command_runner.go:130] > [crio.image]
	I0916 11:05:43.297285   36333 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0916 11:05:43.297290   36333 command_runner.go:130] > # default_transport = "docker://"
	I0916 11:05:43.297297   36333 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0916 11:05:43.297303   36333 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0916 11:05:43.297309   36333 command_runner.go:130] > # global_auth_file = ""
	I0916 11:05:43.297315   36333 command_runner.go:130] > # The image used to instantiate infra containers.
	I0916 11:05:43.297323   36333 command_runner.go:130] > # This option supports live configuration reload.
	I0916 11:05:43.297328   36333 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0916 11:05:43.297336   36333 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0916 11:05:43.297342   36333 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0916 11:05:43.297349   36333 command_runner.go:130] > # This option supports live configuration reload.
	I0916 11:05:43.297353   36333 command_runner.go:130] > # pause_image_auth_file = ""
	I0916 11:05:43.297361   36333 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0916 11:05:43.297367   36333 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0916 11:05:43.297375   36333 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0916 11:05:43.297381   36333 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0916 11:05:43.297386   36333 command_runner.go:130] > # pause_command = "/pause"
	I0916 11:05:43.297392   36333 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0916 11:05:43.297400   36333 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0916 11:05:43.297405   36333 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0916 11:05:43.297413   36333 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0916 11:05:43.297423   36333 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0916 11:05:43.297451   36333 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0916 11:05:43.297461   36333 command_runner.go:130] > # pinned_images = [
	I0916 11:05:43.297465   36333 command_runner.go:130] > # ]
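	As a concrete illustration of the option above, pinning the pause image configured in this run (pause_image = "registry.k8s.io/pause:3.10") would look like this; the stanza is a sketch and is not part of the generated config:
	# illustrative sketch: protect the configured pause image from kubelet image GC
	pinned_images = [
		"registry.k8s.io/pause:3.10",
	]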
	I0916 11:05:43.297474   36333 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0916 11:05:43.297480   36333 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0916 11:05:43.297488   36333 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0916 11:05:43.297493   36333 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0916 11:05:43.297499   36333 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0916 11:05:43.297504   36333 command_runner.go:130] > # signature_policy = ""
	I0916 11:05:43.297509   36333 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0916 11:05:43.297518   36333 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0916 11:05:43.297524   36333 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0916 11:05:43.297532   36333 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0916 11:05:43.297537   36333 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0916 11:05:43.297544   36333 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0916 11:05:43.297551   36333 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0916 11:05:43.297559   36333 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0916 11:05:43.297563   36333 command_runner.go:130] > # changing them here.
	I0916 11:05:43.297571   36333 command_runner.go:130] > # insecure_registries = [
	I0916 11:05:43.297574   36333 command_runner.go:130] > # ]
	I0916 11:05:43.297580   36333 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0916 11:05:43.297587   36333 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0916 11:05:43.297591   36333 command_runner.go:130] > # image_volumes = "mkdir"
	I0916 11:05:43.297599   36333 command_runner.go:130] > # Temporary directory to use for storing big files
	I0916 11:05:43.297604   36333 command_runner.go:130] > # big_files_temporary_dir = ""
	I0916 11:05:43.297611   36333 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0916 11:05:43.297615   36333 command_runner.go:130] > # CNI plugins.
	I0916 11:05:43.297621   36333 command_runner.go:130] > [crio.network]
	I0916 11:05:43.297627   36333 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0916 11:05:43.297634   36333 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0916 11:05:43.297638   36333 command_runner.go:130] > # cni_default_network = ""
	I0916 11:05:43.297646   36333 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0916 11:05:43.297650   36333 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0916 11:05:43.297656   36333 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0916 11:05:43.297660   36333 command_runner.go:130] > # plugin_dirs = [
	I0916 11:05:43.297664   36333 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0916 11:05:43.297669   36333 command_runner.go:130] > # ]
	I0916 11:05:43.297674   36333 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0916 11:05:43.297681   36333 command_runner.go:130] > [crio.metrics]
	I0916 11:05:43.297688   36333 command_runner.go:130] > # Globally enable or disable metrics support.
	I0916 11:05:43.297692   36333 command_runner.go:130] > enable_metrics = true
	I0916 11:05:43.297698   36333 command_runner.go:130] > # Specify enabled metrics collectors.
	I0916 11:05:43.297703   36333 command_runner.go:130] > # Per default all metrics are enabled.
	I0916 11:05:43.297712   36333 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0916 11:05:43.297718   36333 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0916 11:05:43.297725   36333 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0916 11:05:43.297733   36333 command_runner.go:130] > # metrics_collectors = [
	I0916 11:05:43.297742   36333 command_runner.go:130] > # 	"operations",
	I0916 11:05:43.297750   36333 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0916 11:05:43.297759   36333 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0916 11:05:43.297765   36333 command_runner.go:130] > # 	"operations_errors",
	I0916 11:05:43.297774   36333 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0916 11:05:43.297781   36333 command_runner.go:130] > # 	"image_pulls_by_name",
	I0916 11:05:43.297790   36333 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0916 11:05:43.297797   36333 command_runner.go:130] > # 	"image_pulls_failures",
	I0916 11:05:43.297805   36333 command_runner.go:130] > # 	"image_pulls_successes",
	I0916 11:05:43.297813   36333 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0916 11:05:43.297822   36333 command_runner.go:130] > # 	"image_layer_reuse",
	I0916 11:05:43.297829   36333 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0916 11:05:43.297838   36333 command_runner.go:130] > # 	"containers_oom_total",
	I0916 11:05:43.297842   36333 command_runner.go:130] > # 	"containers_oom",
	I0916 11:05:43.297847   36333 command_runner.go:130] > # 	"processes_defunct",
	I0916 11:05:43.297850   36333 command_runner.go:130] > # 	"operations_total",
	I0916 11:05:43.297855   36333 command_runner.go:130] > # 	"operations_latency_seconds",
	I0916 11:05:43.297859   36333 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0916 11:05:43.297869   36333 command_runner.go:130] > # 	"operations_errors_total",
	I0916 11:05:43.297873   36333 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0916 11:05:43.297881   36333 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0916 11:05:43.297885   36333 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0916 11:05:43.297893   36333 command_runner.go:130] > # 	"image_pulls_success_total",
	I0916 11:05:43.297897   36333 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0916 11:05:43.297904   36333 command_runner.go:130] > # 	"containers_oom_count_total",
	I0916 11:05:43.297909   36333 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0916 11:05:43.297913   36333 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0916 11:05:43.297918   36333 command_runner.go:130] > # ]
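	For reference, restricting collection to a subset of the collectors listed above would look like the sketch below; the particular selection is an illustrative assumption, since this run leaves the default (all collectors enabled) in place:
	# illustrative sketch: enable only a subset of metrics collectors
	metrics_collectors = [
		"operations",
		"image_pulls_failures",
		"containers_oom_total",
	]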
	I0916 11:05:43.297923   36333 command_runner.go:130] > # The port on which the metrics server will listen.
	I0916 11:05:43.297929   36333 command_runner.go:130] > # metrics_port = 9090
	I0916 11:05:43.297934   36333 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0916 11:05:43.297939   36333 command_runner.go:130] > # metrics_socket = ""
	I0916 11:05:43.297944   36333 command_runner.go:130] > # The certificate for the secure metrics server.
	I0916 11:05:43.297952   36333 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0916 11:05:43.297959   36333 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0916 11:05:43.297966   36333 command_runner.go:130] > # certificate on any modification event.
	I0916 11:05:43.297973   36333 command_runner.go:130] > # metrics_cert = ""
	I0916 11:05:43.297981   36333 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0916 11:05:43.297986   36333 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0916 11:05:43.297990   36333 command_runner.go:130] > # metrics_key = ""
	I0916 11:05:43.297996   36333 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0916 11:05:43.298002   36333 command_runner.go:130] > [crio.tracing]
	I0916 11:05:43.298008   36333 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0916 11:05:43.298014   36333 command_runner.go:130] > # enable_tracing = false
	I0916 11:05:43.298019   36333 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0916 11:05:43.298025   36333 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0916 11:05:43.298033   36333 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0916 11:05:43.298039   36333 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0916 11:05:43.298044   36333 command_runner.go:130] > # CRI-O NRI configuration.
	I0916 11:05:43.298048   36333 command_runner.go:130] > [crio.nri]
	I0916 11:05:43.298053   36333 command_runner.go:130] > # Globally enable or disable NRI.
	I0916 11:05:43.298056   36333 command_runner.go:130] > # enable_nri = false
	I0916 11:05:43.298062   36333 command_runner.go:130] > # NRI socket to listen on.
	I0916 11:05:43.298066   36333 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0916 11:05:43.298071   36333 command_runner.go:130] > # NRI plugin directory to use.
	I0916 11:05:43.298078   36333 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0916 11:05:43.298083   36333 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0916 11:05:43.298087   36333 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0916 11:05:43.298095   36333 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0916 11:05:43.298099   36333 command_runner.go:130] > # nri_disable_connections = false
	I0916 11:05:43.298104   36333 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0916 11:05:43.298111   36333 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0916 11:05:43.298115   36333 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0916 11:05:43.298122   36333 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0916 11:05:43.298128   36333 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0916 11:05:43.298133   36333 command_runner.go:130] > [crio.stats]
	I0916 11:05:43.298139   36333 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0916 11:05:43.298144   36333 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0916 11:05:43.298150   36333 command_runner.go:130] > # stats_collection_period = 0
	I0916 11:05:43.298215   36333 cni.go:84] Creating CNI manager for ""
	I0916 11:05:43.298228   36333 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0916 11:05:43.298236   36333 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 11:05:43.298254   36333 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.32 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-736061 NodeName:multinode-736061 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.32"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.32 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 11:05:43.298407   36333 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.32
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-736061"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.32
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.32"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 11:05:43.298467   36333 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 11:05:43.308375   36333 command_runner.go:130] > kubeadm
	I0916 11:05:43.308392   36333 command_runner.go:130] > kubectl
	I0916 11:05:43.308396   36333 command_runner.go:130] > kubelet
	I0916 11:05:43.308508   36333 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 11:05:43.308570   36333 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 11:05:43.318000   36333 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0916 11:05:43.334695   36333 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 11:05:43.350760   36333 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0916 11:05:43.366756   36333 ssh_runner.go:195] Run: grep 192.168.39.32	control-plane.minikube.internal$ /etc/hosts
	I0916 11:05:43.370611   36333 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.32	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 11:05:43.382490   36333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:05:43.510558   36333 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:05:43.528417   36333 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061 for IP: 192.168.39.32
	I0916 11:05:43.528444   36333 certs.go:194] generating shared ca certs ...
	I0916 11:05:43.528466   36333 certs.go:226] acquiring lock for ca certs: {Name:mkc5eb18b90c7d501da61b378999fd65b785fd5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:05:43.528645   36333 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key
	I0916 11:05:43.528700   36333 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key
	I0916 11:05:43.528713   36333 certs.go:256] generating profile certs ...
	I0916 11:05:43.528800   36333 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/client.key
	I0916 11:05:43.528826   36333 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/client.crt with IP's: []
	I0916 11:05:43.729416   36333 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/client.crt ...
	I0916 11:05:43.729446   36333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/client.crt: {Name:mk8f058dbeacc08c17d1e4d4c54c153a31a8caee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:05:43.729636   36333 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/client.key ...
	I0916 11:05:43.729650   36333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/client.key: {Name:mkc3de41a13f2c6c9c924ff3cb124609a6d349f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:05:43.729767   36333 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/apiserver.key.7afb17c7
	I0916 11:05:43.729783   36333 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/apiserver.crt.7afb17c7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.32]
	I0916 11:05:43.861692   36333 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/apiserver.crt.7afb17c7 ...
	I0916 11:05:43.861719   36333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/apiserver.crt.7afb17c7: {Name:mk3e4089705238a6c72c6f29c7550cbd35936edc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:05:43.861904   36333 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/apiserver.key.7afb17c7 ...
	I0916 11:05:43.861919   36333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/apiserver.key.7afb17c7: {Name:mkad0f3937bad034c0343c60b3da1c1794454e30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:05:43.862010   36333 certs.go:381] copying /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/apiserver.crt.7afb17c7 -> /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/apiserver.crt
	I0916 11:05:43.862103   36333 certs.go:385] copying /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/apiserver.key.7afb17c7 -> /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/apiserver.key
	I0916 11:05:43.862162   36333 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/proxy-client.key
	I0916 11:05:43.862183   36333 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/proxy-client.crt with IP's: []
	I0916 11:05:44.050019   36333 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/proxy-client.crt ...
	I0916 11:05:44.050048   36333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/proxy-client.crt: {Name:mk3b6c74bc98a230d388dd16ad4b67cc884de8d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:05:44.050238   36333 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/proxy-client.key ...
	I0916 11:05:44.050254   36333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/proxy-client.key: {Name:mkee3aab4cdf8bbb9a371865ef6e113e6462af42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:05:44.050350   36333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 11:05:44.050371   36333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 11:05:44.050382   36333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 11:05:44.050397   36333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 11:05:44.050417   36333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 11:05:44.050430   36333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 11:05:44.050444   36333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 11:05:44.050456   36333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 11:05:44.050511   36333 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem (1338 bytes)
	W0916 11:05:44.050545   36333 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203_empty.pem, impossibly tiny 0 bytes
	I0916 11:05:44.050554   36333 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 11:05:44.050586   36333 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem (1082 bytes)
	I0916 11:05:44.050609   36333 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem (1123 bytes)
	I0916 11:05:44.050633   36333 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem (1679 bytes)
	I0916 11:05:44.050668   36333 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem (1708 bytes)
	I0916 11:05:44.050697   36333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem -> /usr/share/ca-certificates/11203.pem
	I0916 11:05:44.050710   36333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem -> /usr/share/ca-certificates/112032.pem
	I0916 11:05:44.050722   36333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:05:44.051333   36333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 11:05:44.077633   36333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 11:05:44.101785   36333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 11:05:44.128823   36333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 11:05:44.156392   36333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0916 11:05:44.179929   36333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 11:05:44.203535   36333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 11:05:44.227189   36333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 11:05:44.250918   36333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem --> /usr/share/ca-certificates/11203.pem (1338 bytes)
	I0916 11:05:44.277716   36333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem --> /usr/share/ca-certificates/112032.pem (1708 bytes)
	I0916 11:05:44.329554   36333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 11:05:44.358993   36333 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 11:05:44.377345   36333 ssh_runner.go:195] Run: openssl version
	I0916 11:05:44.383416   36333 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0916 11:05:44.383498   36333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11203.pem && ln -fs /usr/share/ca-certificates/11203.pem /etc/ssl/certs/11203.pem"
	I0916 11:05:44.396088   36333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11203.pem
	I0916 11:05:44.400714   36333 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11203.pem
	I0916 11:05:44.400847   36333 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11203.pem
	I0916 11:05:44.400904   36333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11203.pem
	I0916 11:05:44.407039   36333 command_runner.go:130] > 51391683
	I0916 11:05:44.407109   36333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11203.pem /etc/ssl/certs/51391683.0"
	I0916 11:05:44.419996   36333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112032.pem && ln -fs /usr/share/ca-certificates/112032.pem /etc/ssl/certs/112032.pem"
	I0916 11:05:44.432349   36333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112032.pem
	I0916 11:05:44.436946   36333 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112032.pem
	I0916 11:05:44.437139   36333 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112032.pem
	I0916 11:05:44.437184   36333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112032.pem
	I0916 11:05:44.442874   36333 command_runner.go:130] > 3ec20f2e
	I0916 11:05:44.442970   36333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112032.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 11:05:44.453913   36333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 11:05:44.464659   36333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:05:44.468904   36333 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 16 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:05:44.468988   36333 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:05:44.469033   36333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:05:44.474667   36333 command_runner.go:130] > b5213941
	I0916 11:05:44.474740   36333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 11:05:44.485950   36333 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 11:05:44.490371   36333 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 11:05:44.490515   36333 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 11:05:44.490566   36333 kubeadm.go:392] StartCluster: {Name:multinode-736061 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-736061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.32 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:05:44.490640   36333 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 11:05:44.490708   36333 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 11:05:44.535122   36333 cri.go:89] found id: ""
	I0916 11:05:44.535203   36333 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 11:05:44.546090   36333 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0916 11:05:44.546125   36333 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0916 11:05:44.546135   36333 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0916 11:05:44.546204   36333 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 11:05:44.556199   36333 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 11:05:44.565563   36333 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0916 11:05:44.565585   36333 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0916 11:05:44.565595   36333 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0916 11:05:44.565604   36333 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 11:05:44.565755   36333 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 11:05:44.565772   36333 kubeadm.go:157] found existing configuration files:
	
	I0916 11:05:44.565812   36333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 11:05:44.574749   36333 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 11:05:44.575055   36333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 11:05:44.575123   36333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 11:05:44.585225   36333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 11:05:44.594397   36333 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 11:05:44.594433   36333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 11:05:44.594478   36333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 11:05:44.603915   36333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 11:05:44.613089   36333 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 11:05:44.613146   36333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 11:05:44.613191   36333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 11:05:44.622763   36333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 11:05:44.631746   36333 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 11:05:44.631781   36333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 11:05:44.631819   36333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 11:05:44.641202   36333 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0916 11:05:44.747391   36333 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 11:05:44.747417   36333 command_runner.go:130] > [init] Using Kubernetes version: v1.31.1
	I0916 11:05:44.747516   36333 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 11:05:44.747541   36333 command_runner.go:130] > [preflight] Running pre-flight checks
	I0916 11:05:44.862710   36333 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 11:05:44.862744   36333 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 11:05:44.862861   36333 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 11:05:44.862876   36333 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 11:05:44.862983   36333 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 11:05:44.863005   36333 command_runner.go:130] > [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 11:05:44.877710   36333 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 11:05:44.877748   36333 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 11:05:44.907323   36333 out.go:235]   - Generating certificates and keys ...
	I0916 11:05:44.907438   36333 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 11:05:44.907468   36333 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0916 11:05:44.907541   36333 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 11:05:44.907552   36333 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0916 11:05:45.035664   36333 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 11:05:45.035693   36333 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 11:05:45.218565   36333 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 11:05:45.218596   36333 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0916 11:05:45.351291   36333 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 11:05:45.351337   36333 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0916 11:05:45.553568   36333 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 11:05:45.553613   36333 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0916 11:05:45.685418   36333 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 11:05:45.685442   36333 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0916 11:05:45.685614   36333 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-736061] and IPs [192.168.39.32 127.0.0.1 ::1]
	I0916 11:05:45.685627   36333 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-736061] and IPs [192.168.39.32 127.0.0.1 ::1]
	I0916 11:05:45.801840   36333 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 11:05:45.801877   36333 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0916 11:05:45.801985   36333 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-736061] and IPs [192.168.39.32 127.0.0.1 ::1]
	I0916 11:05:45.802012   36333 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-736061] and IPs [192.168.39.32 127.0.0.1 ::1]
	I0916 11:05:46.076784   36333 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 11:05:46.076815   36333 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 11:05:46.134172   36333 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 11:05:46.134194   36333 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 11:05:46.325794   36333 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 11:05:46.325818   36333 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0916 11:05:46.325935   36333 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 11:05:46.325946   36333 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 11:05:46.462234   36333 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 11:05:46.462264   36333 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 11:05:46.727042   36333 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 11:05:46.727083   36333 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 11:05:46.906186   36333 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 11:05:46.906213   36333 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 11:05:47.000241   36333 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 11:05:47.000265   36333 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 11:05:47.248611   36333 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 11:05:47.248639   36333 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 11:05:47.249247   36333 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 11:05:47.249258   36333 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 11:05:47.252675   36333 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 11:05:47.252747   36333 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 11:05:47.254412   36333 out.go:235]   - Booting up control plane ...
	I0916 11:05:47.254521   36333 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 11:05:47.254538   36333 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 11:05:47.254643   36333 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 11:05:47.254643   36333 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 11:05:47.255099   36333 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 11:05:47.255121   36333 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 11:05:47.273458   36333 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 11:05:47.273489   36333 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 11:05:47.279831   36333 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 11:05:47.279864   36333 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 11:05:47.279914   36333 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 11:05:47.279927   36333 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0916 11:05:47.422884   36333 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 11:05:47.422909   36333 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 11:05:47.423022   36333 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 11:05:47.423047   36333 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 11:05:47.923943   36333 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.262524ms
	I0916 11:05:47.923988   36333 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 501.262524ms
	I0916 11:05:47.924094   36333 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 11:05:47.924109   36333 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 11:05:52.922100   36333 kubeadm.go:310] [api-check] The API server is healthy after 5.001231198s
	I0916 11:05:52.922128   36333 command_runner.go:130] > [api-check] The API server is healthy after 5.001231198s
	I0916 11:05:52.933714   36333 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 11:05:52.933741   36333 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 11:05:52.953998   36333 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 11:05:52.954031   36333 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 11:05:52.985743   36333 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 11:05:52.985770   36333 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0916 11:05:52.985983   36333 kubeadm.go:310] [mark-control-plane] Marking the node multinode-736061 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 11:05:52.985998   36333 command_runner.go:130] > [mark-control-plane] Marking the node multinode-736061 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 11:05:52.999952   36333 kubeadm.go:310] [bootstrap-token] Using token: tyssfx.qcouw8my23ympzkv
	I0916 11:05:53.000087   36333 command_runner.go:130] > [bootstrap-token] Using token: tyssfx.qcouw8my23ympzkv
	I0916 11:05:53.001595   36333 out.go:235]   - Configuring RBAC rules ...
	I0916 11:05:53.001758   36333 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 11:05:53.001785   36333 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 11:05:53.012385   36333 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 11:05:53.012416   36333 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 11:05:53.022559   36333 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 11:05:53.022587   36333 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 11:05:53.028226   36333 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 11:05:53.028231   36333 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 11:05:53.035375   36333 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 11:05:53.035407   36333 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 11:05:53.040052   36333 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 11:05:53.040069   36333 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 11:05:53.328732   36333 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 11:05:53.328767   36333 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 11:05:53.754122   36333 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 11:05:53.754158   36333 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0916 11:05:54.327404   36333 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 11:05:54.327432   36333 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0916 11:05:54.328424   36333 kubeadm.go:310] 
	I0916 11:05:54.328532   36333 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 11:05:54.328552   36333 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0916 11:05:54.328557   36333 kubeadm.go:310] 
	I0916 11:05:54.328657   36333 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 11:05:54.328664   36333 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0916 11:05:54.328681   36333 kubeadm.go:310] 
	I0916 11:05:54.328719   36333 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 11:05:54.328729   36333 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0916 11:05:54.328780   36333 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 11:05:54.328787   36333 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 11:05:54.328835   36333 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 11:05:54.328860   36333 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 11:05:54.328866   36333 kubeadm.go:310] 
	I0916 11:05:54.328929   36333 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 11:05:54.328937   36333 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0916 11:05:54.328941   36333 kubeadm.go:310] 
	I0916 11:05:54.329001   36333 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 11:05:54.329009   36333 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 11:05:54.329012   36333 kubeadm.go:310] 
	I0916 11:05:54.329055   36333 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 11:05:54.329061   36333 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0916 11:05:54.329136   36333 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 11:05:54.329154   36333 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 11:05:54.329260   36333 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 11:05:54.329274   36333 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 11:05:54.329277   36333 kubeadm.go:310] 
	I0916 11:05:54.329353   36333 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 11:05:54.329361   36333 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0916 11:05:54.329449   36333 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 11:05:54.329457   36333 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0916 11:05:54.329462   36333 kubeadm.go:310] 
	I0916 11:05:54.329568   36333 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token tyssfx.qcouw8my23ympzkv \
	I0916 11:05:54.329580   36333 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token tyssfx.qcouw8my23ympzkv \
	I0916 11:05:54.329726   36333 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:18e2e9ba1c06caae6264fad234e64aee68efc10986a3ab0ed9e448768962daf7 \
	I0916 11:05:54.329736   36333 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:18e2e9ba1c06caae6264fad234e64aee68efc10986a3ab0ed9e448768962daf7 \
	I0916 11:05:54.329758   36333 kubeadm.go:310] 	--control-plane 
	I0916 11:05:54.329764   36333 command_runner.go:130] > 	--control-plane 
	I0916 11:05:54.329767   36333 kubeadm.go:310] 
	I0916 11:05:54.329843   36333 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 11:05:54.329850   36333 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0916 11:05:54.329853   36333 kubeadm.go:310] 
	I0916 11:05:54.329968   36333 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token tyssfx.qcouw8my23ympzkv \
	I0916 11:05:54.329971   36333 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token tyssfx.qcouw8my23ympzkv \
	I0916 11:05:54.330130   36333 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:18e2e9ba1c06caae6264fad234e64aee68efc10986a3ab0ed9e448768962daf7 
	I0916 11:05:54.330143   36333 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:18e2e9ba1c06caae6264fad234e64aee68efc10986a3ab0ed9e448768962daf7 
	I0916 11:05:54.330941   36333 kubeadm.go:310] W0916 11:05:44.723101     832 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 11:05:54.330957   36333 command_runner.go:130] ! W0916 11:05:44.723101     832 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 11:05:54.331226   36333 kubeadm.go:310] W0916 11:05:44.725335     832 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 11:05:54.331228   36333 command_runner.go:130] ! W0916 11:05:44.725335     832 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 11:05:54.331382   36333 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 11:05:54.331396   36333 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 11:05:54.331417   36333 cni.go:84] Creating CNI manager for ""
	I0916 11:05:54.331427   36333 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0916 11:05:54.333262   36333 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0916 11:05:54.334526   36333 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 11:05:54.340284   36333 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0916 11:05:54.340308   36333 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0916 11:05:54.340317   36333 command_runner.go:130] > Device: 0,17	Inode: 3500        Links: 1
	I0916 11:05:54.340327   36333 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 11:05:54.340337   36333 command_runner.go:130] > Access: 2024-09-16 11:05:26.103603942 +0000
	I0916 11:05:54.340345   36333 command_runner.go:130] > Modify: 2024-09-15 21:28:20.000000000 +0000
	I0916 11:05:54.340355   36333 command_runner.go:130] > Change: 2024-09-16 11:05:25.044603942 +0000
	I0916 11:05:54.340361   36333 command_runner.go:130] >  Birth: -
	I0916 11:05:54.340599   36333 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0916 11:05:54.340617   36333 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 11:05:54.360934   36333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0916 11:05:54.710752   36333 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0916 11:05:54.716515   36333 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0916 11:05:54.726048   36333 command_runner.go:130] > serviceaccount/kindnet created
	I0916 11:05:54.751499   36333 command_runner.go:130] > daemonset.apps/kindnet created
	I0916 11:05:54.753798   36333 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 11:05:54.753870   36333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:05:54.753926   36333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-736061 minikube.k8s.io/updated_at=2024_09_16T11_05_54_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=multinode-736061 minikube.k8s.io/primary=true
	I0916 11:05:54.939817   36333 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0916 11:05:54.941588   36333 command_runner.go:130] > -16
	I0916 11:05:54.941628   36333 ops.go:34] apiserver oom_adj: -16
	I0916 11:05:54.941660   36333 command_runner.go:130] > node/multinode-736061 labeled
	I0916 11:05:54.941717   36333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:05:55.027489   36333 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0916 11:05:55.442711   36333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:05:55.524147   36333 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0916 11:05:55.942606   36333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:05:56.021989   36333 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0916 11:05:56.442339   36333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:05:56.534060   36333 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0916 11:05:56.942115   36333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:05:57.041821   36333 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0916 11:05:57.442012   36333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:05:57.523969   36333 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0916 11:05:57.942694   36333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:05:58.040196   36333 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0916 11:05:58.442038   36333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:05:58.522384   36333 command_runner.go:130] > NAME      SECRETS   AGE
	I0916 11:05:58.522412   36333 command_runner.go:130] > default   0         0s
	I0916 11:05:58.522441   36333 kubeadm.go:1113] duration metric: took 3.768643152s to wait for elevateKubeSystemPrivileges
	I0916 11:05:58.522464   36333 kubeadm.go:394] duration metric: took 14.031900459s to StartCluster
	I0916 11:05:58.522485   36333 settings.go:142] acquiring lock: {Name:mka3093e7066e81538e0a2685a28fdafde8d951b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:05:58.522567   36333 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3851/kubeconfig
	I0916 11:05:58.523262   36333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/kubeconfig: {Name:mkd6460249badead51f07eabf604da45528f6a24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:05:58.523525   36333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 11:05:58.523520   36333 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.32 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 11:05:58.523543   36333 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 11:05:58.523619   36333 addons.go:69] Setting storage-provisioner=true in profile "multinode-736061"
	I0916 11:05:58.523647   36333 addons.go:234] Setting addon storage-provisioner=true in "multinode-736061"
	I0916 11:05:58.523673   36333 host.go:66] Checking if "multinode-736061" exists ...
	I0916 11:05:58.523693   36333 addons.go:69] Setting default-storageclass=true in profile "multinode-736061"
	I0916 11:05:58.523717   36333 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-736061"
	I0916 11:05:58.523734   36333 config.go:182] Loaded profile config "multinode-736061": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:05:58.524218   36333 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 11:05:58.524240   36333 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 11:05:58.524262   36333 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 11:05:58.524281   36333 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 11:05:58.525936   36333 out.go:177] * Verifying Kubernetes components...
	I0916 11:05:58.527179   36333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:05:58.539793   36333 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35271
	I0916 11:05:58.540028   36333 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42941
	I0916 11:05:58.540272   36333 main.go:141] libmachine: () Calling .GetVersion
	I0916 11:05:58.540458   36333 main.go:141] libmachine: () Calling .GetVersion
	I0916 11:05:58.540804   36333 main.go:141] libmachine: Using API Version  1
	I0916 11:05:58.540823   36333 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 11:05:58.540958   36333 main.go:141] libmachine: Using API Version  1
	I0916 11:05:58.540987   36333 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 11:05:58.541195   36333 main.go:141] libmachine: () Calling .GetMachineName
	I0916 11:05:58.541325   36333 main.go:141] libmachine: () Calling .GetMachineName
	I0916 11:05:58.541377   36333 main.go:141] libmachine: (multinode-736061) Calling .GetState
	I0916 11:05:58.541901   36333 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 11:05:58.541951   36333 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 11:05:58.543606   36333 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3851/kubeconfig
	I0916 11:05:58.544015   36333 kapi.go:59] client config for multinode-736061: &rest.Config{Host:"https://192.168.39.32:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 11:05:58.544628   36333 cert_rotation.go:140] Starting client certificate rotation controller
	I0916 11:05:58.544973   36333 addons.go:234] Setting addon default-storageclass=true in "multinode-736061"
	I0916 11:05:58.545031   36333 host.go:66] Checking if "multinode-736061" exists ...
	I0916 11:05:58.545490   36333 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 11:05:58.545542   36333 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 11:05:58.557074   36333 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35939
	I0916 11:05:58.557614   36333 main.go:141] libmachine: () Calling .GetVersion
	I0916 11:05:58.558120   36333 main.go:141] libmachine: Using API Version  1
	I0916 11:05:58.558149   36333 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 11:05:58.558453   36333 main.go:141] libmachine: () Calling .GetMachineName
	I0916 11:05:58.558657   36333 main.go:141] libmachine: (multinode-736061) Calling .GetState
	I0916 11:05:58.560363   36333 main.go:141] libmachine: (multinode-736061) Calling .DriverName
	I0916 11:05:58.560786   36333 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36001
	I0916 11:05:58.561244   36333 main.go:141] libmachine: () Calling .GetVersion
	I0916 11:05:58.561732   36333 main.go:141] libmachine: Using API Version  1
	I0916 11:05:58.561752   36333 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 11:05:58.562052   36333 main.go:141] libmachine: () Calling .GetMachineName
	I0916 11:05:58.562299   36333 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:05:58.562544   36333 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 11:05:58.562584   36333 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 11:05:58.563649   36333 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:05:58.563664   36333 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 11:05:58.563678   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHHostname
	I0916 11:05:58.566428   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:58.566878   36333 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:05:58.566899   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:58.567094   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHPort
	I0916 11:05:58.567244   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:05:58.567416   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHUsername
	I0916 11:05:58.567558   36333 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061/id_rsa Username:docker}
	I0916 11:05:58.578318   36333 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38831
	I0916 11:05:58.578831   36333 main.go:141] libmachine: () Calling .GetVersion
	I0916 11:05:58.579328   36333 main.go:141] libmachine: Using API Version  1
	I0916 11:05:58.579355   36333 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 11:05:58.579724   36333 main.go:141] libmachine: () Calling .GetMachineName
	I0916 11:05:58.579916   36333 main.go:141] libmachine: (multinode-736061) Calling .GetState
	I0916 11:05:58.581511   36333 main.go:141] libmachine: (multinode-736061) Calling .DriverName
	I0916 11:05:58.581688   36333 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 11:05:58.581705   36333 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 11:05:58.581721   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHHostname
	I0916 11:05:58.584387   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:58.584795   36333 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:05:58.584822   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:58.585076   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHPort
	I0916 11:05:58.585261   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:05:58.585414   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHUsername
	I0916 11:05:58.585539   36333 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061/id_rsa Username:docker}
	I0916 11:05:58.800148   36333 command_runner.go:130] > apiVersion: v1
	I0916 11:05:58.800166   36333 command_runner.go:130] > data:
	I0916 11:05:58.800171   36333 command_runner.go:130] >   Corefile: |
	I0916 11:05:58.800175   36333 command_runner.go:130] >     .:53 {
	I0916 11:05:58.800179   36333 command_runner.go:130] >         errors
	I0916 11:05:58.800189   36333 command_runner.go:130] >         health {
	I0916 11:05:58.800193   36333 command_runner.go:130] >            lameduck 5s
	I0916 11:05:58.800197   36333 command_runner.go:130] >         }
	I0916 11:05:58.800200   36333 command_runner.go:130] >         ready
	I0916 11:05:58.800206   36333 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0916 11:05:58.800210   36333 command_runner.go:130] >            pods insecure
	I0916 11:05:58.800219   36333 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0916 11:05:58.800225   36333 command_runner.go:130] >            ttl 30
	I0916 11:05:58.800229   36333 command_runner.go:130] >         }
	I0916 11:05:58.800235   36333 command_runner.go:130] >         prometheus :9153
	I0916 11:05:58.800241   36333 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0916 11:05:58.800248   36333 command_runner.go:130] >            max_concurrent 1000
	I0916 11:05:58.800252   36333 command_runner.go:130] >         }
	I0916 11:05:58.800256   36333 command_runner.go:130] >         cache 30
	I0916 11:05:58.800261   36333 command_runner.go:130] >         loop
	I0916 11:05:58.800265   36333 command_runner.go:130] >         reload
	I0916 11:05:58.800277   36333 command_runner.go:130] >         loadbalance
	I0916 11:05:58.800283   36333 command_runner.go:130] >     }
	I0916 11:05:58.800287   36333 command_runner.go:130] > kind: ConfigMap
	I0916 11:05:58.800293   36333 command_runner.go:130] > metadata:
	I0916 11:05:58.800299   36333 command_runner.go:130] >   creationTimestamp: "2024-09-16T11:05:53Z"
	I0916 11:05:58.800305   36333 command_runner.go:130] >   name: coredns
	I0916 11:05:58.800309   36333 command_runner.go:130] >   namespace: kube-system
	I0916 11:05:58.800315   36333 command_runner.go:130] >   resourceVersion: "263"
	I0916 11:05:58.800320   36333 command_runner.go:130] >   uid: 4270379f-2cdb-424c-8d1c-8cef3fbc1be2
	I0916 11:05:58.801884   36333 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:05:58.802043   36333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 11:05:58.815336   36333 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:05:58.891364   36333 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 11:05:59.485098   36333 command_runner.go:130] > configmap/coredns replaced
	I0916 11:05:59.485159   36333 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
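	The sed pipeline run a few lines above rewrites the Corefile that was just read back from the coredns ConfigMap: it inserts a log directive ahead of the errors plugin and a hosts block ahead of the forward plugin, which is how the {"host.minikube.internal": 192.168.39.1} record reported in the previous line ends up in CoreDNS. Reconstructed from those two sed expressions (a sketch of the replaced ConfigMap, not a dump of it; the health, ready, kubernetes and prometheus stanzas in between are unchanged), the affected part of the Corefile should come out roughly as:
	
	    .:53 {
	        log
	        errors
	        hosts {
	           192.168.39.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf {
	           max_concurrent 1000
	        }
	    }
	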
	I0916 11:05:59.485436   36333 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3851/kubeconfig
	I0916 11:05:59.485601   36333 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3851/kubeconfig
	I0916 11:05:59.485674   36333 kapi.go:59] client config for multinode-736061: &rest.Config{Host:"https://192.168.39.32:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 11:05:59.486489   36333 node_ready.go:35] waiting up to 6m0s for node "multinode-736061" to be "Ready" ...
	I0916 11:05:59.486632   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:05:59.486649   36333 round_trippers.go:469] Request Headers:
	I0916 11:05:59.486661   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:05:59.486666   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:05:59.486287   36333 kapi.go:59] client config for multinode-736061: &rest.Config{Host:"https://192.168.39.32:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 11:05:59.487369   36333 round_trippers.go:463] GET https://192.168.39.32:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0916 11:05:59.487380   36333 round_trippers.go:469] Request Headers:
	I0916 11:05:59.487389   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:05:59.487394   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:05:59.497532   36333 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0916 11:05:59.497552   36333 round_trippers.go:577] Response Headers:
	I0916 11:05:59.497559   36333 round_trippers.go:580]     Audit-Id: 57da509c-6519-4ee3-847d-028f592687fb
	I0916 11:05:59.497564   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:05:59.497567   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:05:59.497572   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:05:59.497576   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:05:59.497581   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:05:59 GMT
	I0916 11:05:59.497591   36333 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0916 11:05:59.497612   36333 round_trippers.go:577] Response Headers:
	I0916 11:05:59.497622   36333 round_trippers.go:580]     Audit-Id: 8e7d26e1-602e-4054-9b7f-2d6446de0b3f
	I0916 11:05:59.497631   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:05:59.497637   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:05:59.497641   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:05:59.497645   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:05:59.497649   36333 round_trippers.go:580]     Content-Length: 291
	I0916 11:05:59.497653   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:05:59 GMT
	I0916 11:05:59.497678   36333 request.go:1351] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"e448e131-79e9-4a70-9834-6f03d90ad906","resourceVersion":"372","creationTimestamp":"2024-09-16T11:05:53Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0916 11:05:59.497678   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"344","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0916 11:05:59.498175   36333 request.go:1351] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"e448e131-79e9-4a70-9834-6f03d90ad906","resourceVersion":"372","creationTimestamp":"2024-09-16T11:05:53Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0916 11:05:59.498237   36333 round_trippers.go:463] PUT https://192.168.39.32:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0916 11:05:59.498250   36333 round_trippers.go:469] Request Headers:
	I0916 11:05:59.498260   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:05:59.498267   36333 round_trippers.go:473]     Content-Type: application/json
	I0916 11:05:59.498274   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:05:59.514260   36333 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0916 11:05:59.514283   36333 round_trippers.go:577] Response Headers:
	I0916 11:05:59.514291   36333 round_trippers.go:580]     Content-Length: 291
	I0916 11:05:59.514296   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:05:59 GMT
	I0916 11:05:59.514301   36333 round_trippers.go:580]     Audit-Id: f0c4a321-e721-4c80-b252-c799fd24f8a6
	I0916 11:05:59.514305   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:05:59.514312   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:05:59.514316   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:05:59.514324   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:05:59.514348   36333 request.go:1351] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"e448e131-79e9-4a70-9834-6f03d90ad906","resourceVersion":"375","creationTimestamp":"2024-09-16T11:05:53Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
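	The two exchanges above — a GET of the coredns deployment's scale subresource followed by a PUT with spec.replicas lowered from 2 to 1 — are how the client trims the freshly initialized cluster down to a single CoreDNS replica; the "rescaled to 1 replicas" line further down confirms it took effect. A rough hand-run equivalent (an illustration only, not the code path the log shows, which goes through the REST client directly) would be:
	
	    kubectl -n kube-system scale deployment coredns --replicas=1
	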
	I0916 11:05:59.719171   36333 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0916 11:05:59.719211   36333 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0916 11:05:59.719223   36333 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0916 11:05:59.719234   36333 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0916 11:05:59.719242   36333 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0916 11:05:59.719250   36333 command_runner.go:130] > pod/storage-provisioner created
	I0916 11:05:59.719332   36333 main.go:141] libmachine: Making call to close driver server
	I0916 11:05:59.719335   36333 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0916 11:05:59.719350   36333 main.go:141] libmachine: (multinode-736061) Calling .Close
	I0916 11:05:59.719408   36333 main.go:141] libmachine: Making call to close driver server
	I0916 11:05:59.719424   36333 main.go:141] libmachine: (multinode-736061) Calling .Close
	I0916 11:05:59.719662   36333 main.go:141] libmachine: Successfully made call to close driver server
	I0916 11:05:59.719680   36333 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 11:05:59.719690   36333 main.go:141] libmachine: Making call to close driver server
	I0916 11:05:59.719696   36333 main.go:141] libmachine: (multinode-736061) Calling .Close
	I0916 11:05:59.719803   36333 main.go:141] libmachine: (multinode-736061) DBG | Closing plugin on server side
	I0916 11:05:59.719931   36333 main.go:141] libmachine: Successfully made call to close driver server
	I0916 11:05:59.719941   36333 main.go:141] libmachine: (multinode-736061) DBG | Closing plugin on server side
	I0916 11:05:59.719931   36333 main.go:141] libmachine: Successfully made call to close driver server
	I0916 11:05:59.719949   36333 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 11:05:59.719959   36333 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 11:05:59.719968   36333 main.go:141] libmachine: Making call to close driver server
	I0916 11:05:59.719980   36333 main.go:141] libmachine: (multinode-736061) Calling .Close
	I0916 11:05:59.720022   36333 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0916 11:05:59.720039   36333 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0916 11:05:59.720121   36333 round_trippers.go:463] GET https://192.168.39.32:8443/apis/storage.k8s.io/v1/storageclasses
	I0916 11:05:59.720133   36333 round_trippers.go:469] Request Headers:
	I0916 11:05:59.720142   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:05:59.720147   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:05:59.720272   36333 main.go:141] libmachine: (multinode-736061) DBG | Closing plugin on server side
	I0916 11:05:59.720303   36333 main.go:141] libmachine: Successfully made call to close driver server
	I0916 11:05:59.720322   36333 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 11:05:59.747501   36333 round_trippers.go:574] Response Status: 200 OK in 27 milliseconds
	I0916 11:05:59.747530   36333 round_trippers.go:577] Response Headers:
	I0916 11:05:59.747541   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:05:59.747550   36333 round_trippers.go:580]     Content-Length: 1273
	I0916 11:05:59.747556   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:05:59 GMT
	I0916 11:05:59.747562   36333 round_trippers.go:580]     Audit-Id: d7c07b43-28c5-4953-a526-e208840d0bf1
	I0916 11:05:59.747570   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:05:59.747575   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:05:59.747580   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:05:59.747646   36333 request.go:1351] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"397"},"items":[{"metadata":{"name":"standard","uid":"2e216119-f9bf-406a-8caf-ccd62e391ad9","resourceVersion":"373","creationTimestamp":"2024-09-16T11:05:59Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-09-16T11:05:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0916 11:05:59.748172   36333 request.go:1351] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"2e216119-f9bf-406a-8caf-ccd62e391ad9","resourceVersion":"373","creationTimestamp":"2024-09-16T11:05:59Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-09-16T11:05:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0916 11:05:59.748237   36333 round_trippers.go:463] PUT https://192.168.39.32:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0916 11:05:59.748251   36333 round_trippers.go:469] Request Headers:
	I0916 11:05:59.748263   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:05:59.748274   36333 round_trippers.go:473]     Content-Type: application/json
	I0916 11:05:59.748277   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:05:59.755909   36333 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0916 11:05:59.755926   36333 round_trippers.go:577] Response Headers:
	I0916 11:05:59.755933   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:05:59 GMT
	I0916 11:05:59.755940   36333 round_trippers.go:580]     Audit-Id: 6fca5072-c17b-4828-b4b1-61318ae38bdd
	I0916 11:05:59.755944   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:05:59.755947   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:05:59.755950   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:05:59.755952   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:05:59.755955   36333 round_trippers.go:580]     Content-Length: 1220
	I0916 11:05:59.756344   36333 request.go:1351] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"2e216119-f9bf-406a-8caf-ccd62e391ad9","resourceVersion":"373","creationTimestamp":"2024-09-16T11:05:59Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-09-16T11:05:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0916 11:05:59.756500   36333 main.go:141] libmachine: Making call to close driver server
	I0916 11:05:59.756513   36333 main.go:141] libmachine: (multinode-736061) Calling .Close
	I0916 11:05:59.756804   36333 main.go:141] libmachine: Successfully made call to close driver server
	I0916 11:05:59.756824   36333 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 11:05:59.759532   36333 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0916 11:05:59.760887   36333 addons.go:510] duration metric: took 1.237340935s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0916 11:05:59.987419   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:05:59.987440   36333 round_trippers.go:469] Request Headers:
	I0916 11:05:59.987448   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:05:59.987451   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:05:59.987451   36333 round_trippers.go:463] GET https://192.168.39.32:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0916 11:05:59.987466   36333 round_trippers.go:469] Request Headers:
	I0916 11:05:59.987473   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:05:59.987478   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:05:59.991451   36333 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 11:05:59.991468   36333 round_trippers.go:577] Response Headers:
	I0916 11:05:59.991474   36333 round_trippers.go:580]     Audit-Id: 72acc2e2-b4e5-4697-bf5d-615bfb8f6957
	I0916 11:05:59.991478   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:05:59.991482   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:05:59.991484   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:05:59.991487   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:05:59.991491   36333 round_trippers.go:580]     Content-Length: 291
	I0916 11:05:59.991494   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:05:59 GMT
	I0916 11:05:59.991526   36333 request.go:1351] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"e448e131-79e9-4a70-9834-6f03d90ad906","resourceVersion":"387","creationTimestamp":"2024-09-16T11:05:53Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0916 11:05:59.991621   36333 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 11:05:59.991635   36333 round_trippers.go:577] Response Headers:
	I0916 11:05:59.991632   36333 kapi.go:214] "coredns" deployment in "kube-system" namespace and "multinode-736061" context rescaled to 1 replicas
	I0916 11:05:59.991641   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:05:59.991649   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:05:59 GMT
	I0916 11:05:59.991655   36333 round_trippers.go:580]     Audit-Id: 8446c329-c81e-482c-bae0-8d3c38d2017c
	I0916 11:05:59.991661   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:05:59.991665   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:05:59.991671   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:05:59.992432   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"344","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0916 11:06:00.487109   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:06:00.487132   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:00.487140   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:00.487144   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:00.489258   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:00.489277   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:00.489284   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:00.489288   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:00.489292   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:00.489295   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:00 GMT
	I0916 11:06:00.489299   36333 round_trippers.go:580]     Audit-Id: dfddf752-0d04-4305-ad98-f8a57cb9a8d8
	I0916 11:06:00.489301   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:00.489488   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"344","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0916 11:06:00.987071   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:06:00.987103   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:00.987115   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:00.987122   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:00.989579   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:00.989606   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:00.989615   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:00.989621   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:00.989624   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:00 GMT
	I0916 11:06:00.989628   36333 round_trippers.go:580]     Audit-Id: ef493024-3937-4c6a-bdca-60ec81f985da
	I0916 11:06:00.989632   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:00.989635   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:00.989904   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"344","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0916 11:06:01.487632   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:06:01.487659   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:01.487667   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:01.487672   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:01.490092   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:01.490114   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:01.490124   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:01.490130   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:01.490133   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:01 GMT
	I0916 11:06:01.490138   36333 round_trippers.go:580]     Audit-Id: 5505198a-c028-4145-b51a-cd97c7cec6c4
	I0916 11:06:01.490141   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:01.490146   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:01.490628   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"344","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0916 11:06:01.490960   36333 node_ready.go:53] node "multinode-736061" has status "Ready":"False"
	I0916 11:06:01.987316   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:06:01.987338   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:01.987345   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:01.987351   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:01.989439   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:01.989459   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:01.989466   36333 round_trippers.go:580]     Audit-Id: 817ae4c2-fcb8-4774-9a9c-a78e4be55e5f
	I0916 11:06:01.989470   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:01.989473   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:01.989476   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:01.989478   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:01.989481   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:01 GMT
	I0916 11:06:01.989765   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"344","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0916 11:06:02.487551   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:06:02.487579   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:02.487589   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:02.487594   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:02.489907   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:02.489927   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:02.489933   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:02.489938   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:02 GMT
	I0916 11:06:02.489942   36333 round_trippers.go:580]     Audit-Id: 9285bd1c-7f94-479f-a712-64acc704f792
	I0916 11:06:02.489945   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:02.489947   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:02.489950   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:02.490113   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"344","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0916 11:06:02.986767   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:06:02.986794   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:02.986803   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:02.986810   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:02.989573   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:02.989595   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:02.989604   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:02 GMT
	I0916 11:06:02.989609   36333 round_trippers.go:580]     Audit-Id: 71e7d7d7-c757-4da4-8f35-28d9a1af9890
	I0916 11:06:02.989616   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:02.989619   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:02.989624   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:02.989633   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:02.989888   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"344","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0916 11:06:03.487642   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:06:03.487673   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:03.487682   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:03.487687   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:03.490180   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:03.490204   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:03.490212   36333 round_trippers.go:580]     Audit-Id: 5a86f0df-e279-4a9c-9f33-781c240a2bac
	I0916 11:06:03.490218   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:03.490224   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:03.490230   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:03.490233   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:03.490239   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:03 GMT
	I0916 11:06:03.490474   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"344","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0916 11:06:03.986796   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:06:03.986825   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:03.986835   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:03.986840   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:03.989029   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:03.989053   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:03.989063   36333 round_trippers.go:580]     Audit-Id: f79cc23e-ebd6-44a1-b6b4-cf5372ec80d3
	I0916 11:06:03.989068   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:03.989071   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:03.989076   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:03.989081   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:03.989088   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:03 GMT
	I0916 11:06:03.989489   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"344","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0916 11:06:03.989811   36333 node_ready.go:53] node "multinode-736061" has status "Ready":"False"
	I0916 11:06:04.486947   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:06:04.486971   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:04.486978   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:04.486982   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:04.489465   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:04.489489   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:04.489497   36333 round_trippers.go:580]     Audit-Id: 8519e7fd-5549-46cb-94a0-32291abba761
	I0916 11:06:04.489505   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:04.489510   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:04.489514   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:04.489518   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:04.489522   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:04 GMT
	I0916 11:06:04.489656   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"344","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0916 11:06:04.987364   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:06:04.987388   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:04.987396   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:04.987406   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:04.991734   36333 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 11:06:04.991753   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:04.991762   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:04.991766   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:04 GMT
	I0916 11:06:04.991771   36333 round_trippers.go:580]     Audit-Id: db5c630f-b6f0-4fba-940f-7996c5ab68cb
	I0916 11:06:04.991774   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:04.991780   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:04.991786   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:04.992082   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"344","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0916 11:06:05.486761   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:06:05.486793   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:05.486805   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:05.486811   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:05.489240   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:05.489265   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:05.489274   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:05.489278   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:05.489282   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:05.489290   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:05.489294   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:05 GMT
	I0916 11:06:05.489302   36333 round_trippers.go:580]     Audit-Id: be5a71c6-457d-461d-9016-5e17f8f04417
	I0916 11:06:05.489636   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"344","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0916 11:06:05.987365   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:06:05.987396   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:05.987406   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:05.987411   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:05.991584   36333 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 11:06:05.991626   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:05.991636   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:05.991642   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:05.991646   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:05.991651   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:05.991661   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:05 GMT
	I0916 11:06:05.991670   36333 round_trippers.go:580]     Audit-Id: 2787614e-7bd3-4207-a64c-26fcb2f30e01
	I0916 11:06:05.992014   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"344","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0916 11:06:05.992326   36333 node_ready.go:53] node "multinode-736061" has status "Ready":"False"
	I0916 11:06:06.486673   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:06:06.486696   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:06.486704   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:06.486708   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:06.489003   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:06.489022   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:06.489028   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:06 GMT
	I0916 11:06:06.489034   36333 round_trippers.go:580]     Audit-Id: 1bab1392-2e50-4b08-9fcd-126827677cf1
	I0916 11:06:06.489038   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:06.489041   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:06.489044   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:06.489048   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:06.489211   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"344","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0916 11:06:06.986897   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:06:06.986925   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:06.986933   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:06.986938   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:06.989348   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:06.989366   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:06.989373   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:06.989377   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:06 GMT
	I0916 11:06:06.989381   36333 round_trippers.go:580]     Audit-Id: ddc217d2-c0a0-4a60-9d85-6682e01f5be1
	I0916 11:06:06.989383   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:06.989386   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:06.989388   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:06.989805   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"344","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0916 11:06:07.487559   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:06:07.487588   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:07.487599   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:07.487614   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:07.489978   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:07.490000   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:07.490005   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:07.490010   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:07.490013   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:07 GMT
	I0916 11:06:07.490015   36333 round_trippers.go:580]     Audit-Id: 2ff4adf4-0ccb-4cbf-a188-e20d8dcecc95
	I0916 11:06:07.490018   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:07.490021   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:07.490210   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"344","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0916 11:06:07.986811   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:06:07.986837   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:07.986845   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:07.986850   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:07.989482   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:07.989506   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:07.989516   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:07.989522   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:07.989532   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:07.989542   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:07.989547   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:07 GMT
	I0916 11:06:07.989553   36333 round_trippers.go:580]     Audit-Id: be2bd1c2-688f-40fb-9e29-7d4baf1d4654
	I0916 11:06:07.990113   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"344","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0916 11:06:08.486779   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:06:08.486806   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:08.486815   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:08.486819   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:08.489051   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:08.489074   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:08.489083   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:08.489089   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:08.489094   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:08.489102   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:08 GMT
	I0916 11:06:08.489106   36333 round_trippers.go:580]     Audit-Id: 8519d36f-a62e-45e3-b8ae-d90629f2435e
	I0916 11:06:08.489112   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:08.489268   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"344","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0916 11:06:08.489650   36333 node_ready.go:53] node "multinode-736061" has status "Ready":"False"
	I0916 11:06:08.987730   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:06:08.987762   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:08.987771   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:08.987777   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:08.989978   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:08.989996   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:08.990002   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:08.990006   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:08.990009   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:08.990012   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:08.990015   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:08 GMT
	I0916 11:06:08.990018   36333 round_trippers.go:580]     Audit-Id: 37ec693c-03a0-4b67-82e2-a82071c8839b
	I0916 11:06:08.990204   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"344","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0916 11:06:09.487593   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:06:09.487619   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:09.487642   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:09.487648   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:09.490067   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:09.490092   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:09.490100   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:09 GMT
	I0916 11:06:09.490104   36333 round_trippers.go:580]     Audit-Id: 4c864097-2ec0-4fbc-9956-643a33be7206
	I0916 11:06:09.490108   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:09.490111   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:09.490114   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:09.490118   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:09.490462   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"344","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0916 11:06:09.987129   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:06:09.987160   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:09.987171   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:09.987178   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:09.989467   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:09.989489   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:09.989498   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:09.989503   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:09.989507   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:09 GMT
	I0916 11:06:09.989514   36333 round_trippers.go:580]     Audit-Id: f65689f7-43c5-4f3f-b7a6-a00b0ad3eb56
	I0916 11:06:09.989521   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:09.989525   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:09.989689   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"344","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0916 11:06:10.487422   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:06:10.487449   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:10.487457   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:10.487461   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:10.489902   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:10.489920   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:10.489928   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:10.489936   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:10 GMT
	I0916 11:06:10.489940   36333 round_trippers.go:580]     Audit-Id: e8ae3f12-9374-47bc-af7a-bb0bac3b25f9
	I0916 11:06:10.489944   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:10.489949   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:10.489953   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:10.490170   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"344","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0916 11:06:10.490454   36333 node_ready.go:53] node "multinode-736061" has status "Ready":"False"
	I0916 11:06:10.986819   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:06:10.986852   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:10.986861   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:10.986865   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:10.989440   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:10.989457   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:10.989464   36333 round_trippers.go:580]     Audit-Id: 45e2366e-a502-415d-b492-6bb591954121
	I0916 11:06:10.989468   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:10.989472   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:10.989476   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:10.989480   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:10.989488   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:10 GMT
	I0916 11:06:10.990172   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"344","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0916 11:06:11.486907   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:06:11.486941   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:11.486955   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:11.486964   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:11.490017   36333 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 11:06:11.490035   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:11.490041   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:11.490046   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:11.490049   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:11 GMT
	I0916 11:06:11.490051   36333 round_trippers.go:580]     Audit-Id: 62ab62ba-1739-48b3-bcf8-48caad8af385
	I0916 11:06:11.490055   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:11.490058   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:11.490755   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"416","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0916 11:06:11.491049   36333 node_ready.go:49] node "multinode-736061" has status "Ready":"True"
	I0916 11:06:11.491063   36333 node_ready.go:38] duration metric: took 12.004548904s for node "multinode-736061" to be "Ready" ...
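
The requests above are the node-readiness poll: the Node object is re-fetched roughly every 500ms until its Ready condition turns True, which here took about 12s. A minimal client-go sketch of the same loop (the helper name, kubeconfig path, and the 500ms interval are illustrative assumptions, not minikube's own node_ready.go); later sketches in this log reuse these imports and the cs clientset:

// nodeready_sketch.go - illustrative only, not minikube source.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady re-fetches the Node until its Ready condition is True.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitNodeReady(context.Background(), cs, "multinode-736061", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}
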
	I0916 11:06:11.491072   36333 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 11:06:11.491138   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/namespaces/kube-system/pods
	I0916 11:06:11.491147   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:11.491154   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:11.491158   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:11.493315   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:11.493335   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:11.493342   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:11.493346   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:11.493349   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:11.493353   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:11 GMT
	I0916 11:06:11.493357   36333 round_trippers.go:580]     Audit-Id: bd30fb1e-42bd-4dda-accc-b9da7a7ad04b
	I0916 11:06:11.493360   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:11.493979   36333 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"422"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-nlhl2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"6ea84b9d-f364-4e26-8dc8-44c3b4d92417","resourceVersion":"420","creationTimestamp":"2024-09-16T11:05:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"7a8b2d93-d4d7-4d8f-82e4-f4e98c989dd5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:05:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7a8b2d93-d4d7-4d8f-82e4-f4e98c989dd5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 57501 chars]
	I0916 11:06:11.497065   36333 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nlhl2" in "kube-system" namespace to be "Ready" ...
	I0916 11:06:11.497169   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-nlhl2
	I0916 11:06:11.497181   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:11.497191   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:11.497198   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:11.499057   36333 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:06:11.499070   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:11.499078   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:11.499085   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:11.499091   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:11 GMT
	I0916 11:06:11.499098   36333 round_trippers.go:580]     Audit-Id: fbfd6109-2244-44f1-9709-4db413215efa
	I0916 11:06:11.499104   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:11.499111   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:11.499203   36333 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-nlhl2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"6ea84b9d-f364-4e26-8dc8-44c3b4d92417","resourceVersion":"420","creationTimestamp":"2024-09-16T11:05:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"7a8b2d93-d4d7-4d8f-82e4-f4e98c989dd5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:05:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7a8b2d93-d4d7-4d8f-82e4-f4e98c989dd5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6703 chars]
	I0916 11:06:11.499591   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:06:11.499615   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:11.499625   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:11.499629   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:11.501696   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:11.501710   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:11.501715   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:11.501718   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:11.501722   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:11.501724   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:11.501727   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:11 GMT
	I0916 11:06:11.501729   36333 round_trippers.go:580]     Audit-Id: b7f7678f-fb03-4490-a23f-41da8d8fe3fd
	I0916 11:06:11.501962   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"416","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0916 11:06:11.997306   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-nlhl2
	I0916 11:06:11.997333   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:11.997345   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:11.997354   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:12.001267   36333 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 11:06:12.001290   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:12.001299   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:11 GMT
	I0916 11:06:12.001304   36333 round_trippers.go:580]     Audit-Id: 049ef24a-21b9-42dd-9fee-1ec9b1c03c77
	I0916 11:06:12.001309   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:12.001313   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:12.001317   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:12.001327   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:12.001498   36333 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-nlhl2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"6ea84b9d-f364-4e26-8dc8-44c3b4d92417","resourceVersion":"420","creationTimestamp":"2024-09-16T11:05:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"7a8b2d93-d4d7-4d8f-82e4-f4e98c989dd5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:05:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7a8b2d93-d4d7-4d8f-82e4-f4e98c989dd5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6703 chars]
	I0916 11:06:12.002094   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:06:12.002112   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:12.002120   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:12.002124   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:12.007438   36333 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0916 11:06:12.007461   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:12.007468   36333 round_trippers.go:580]     Audit-Id: fb6acf8e-2372-4779-a174-75af486fc8ae
	I0916 11:06:12.007472   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:12.007480   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:12.007485   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:12.007488   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:12.007492   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:12 GMT
	I0916 11:06:12.007582   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"416","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0916 11:06:12.498214   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-nlhl2
	I0916 11:06:12.498244   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:12.498257   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:12.498261   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:12.500943   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:12.500961   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:12.500968   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:12.500971   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:12.500975   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:12.500977   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:12.500980   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:12 GMT
	I0916 11:06:12.500983   36333 round_trippers.go:580]     Audit-Id: 3716abf9-4d97-44ee-b835-716d148db32d
	I0916 11:06:12.501217   36333 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-nlhl2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"6ea84b9d-f364-4e26-8dc8-44c3b4d92417","resourceVersion":"420","creationTimestamp":"2024-09-16T11:05:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"7a8b2d93-d4d7-4d8f-82e4-f4e98c989dd5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:05:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7a8b2d93-d4d7-4d8f-82e4-f4e98c989dd5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6703 chars]
	I0916 11:06:12.501785   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:06:12.501802   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:12.501813   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:12.501818   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:12.503724   36333 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:06:12.503738   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:12.503744   36333 round_trippers.go:580]     Audit-Id: 05cb1d94-303d-4e78-b840-d91db92bdbdb
	I0916 11:06:12.503748   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:12.503750   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:12.503753   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:12.503757   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:12.503760   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:12 GMT
	I0916 11:06:12.504032   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"416","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0916 11:06:12.997667   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-nlhl2
	I0916 11:06:12.997689   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:12.997699   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:12.997702   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:12.999433   36333 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:06:12.999449   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:12.999455   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:12.999459   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:12.999462   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:12.999465   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:12 GMT
	I0916 11:06:12.999469   36333 round_trippers.go:580]     Audit-Id: bae48126-c167-4933-b736-5a674299dc82
	I0916 11:06:12.999471   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:12.999842   36333 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-nlhl2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"6ea84b9d-f364-4e26-8dc8-44c3b4d92417","resourceVersion":"433","creationTimestamp":"2024-09-16T11:05:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"7a8b2d93-d4d7-4d8f-82e4-f4e98c989dd5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:05:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7a8b2d93-d4d7-4d8f-82e4-f4e98c989dd5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6776 chars]
	I0916 11:06:13.000296   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:06:13.000310   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:13.000317   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:13.000321   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:13.002289   36333 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:06:13.002305   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:13.002311   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:13.002316   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:13.002322   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:13.002328   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:12 GMT
	I0916 11:06:13.002333   36333 round_trippers.go:580]     Audit-Id: ec4a0dd6-b0a0-4b65-aa99-ccddecb9886d
	I0916 11:06:13.002337   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:13.002439   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"416","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0916 11:06:13.002722   36333 pod_ready.go:93] pod "coredns-7c65d6cfc9-nlhl2" in "kube-system" namespace has status "Ready":"True"
	I0916 11:06:13.002736   36333 pod_ready.go:82] duration metric: took 1.505648574s for pod "coredns-7c65d6cfc9-nlhl2" in "kube-system" namespace to be "Ready" ...
	I0916 11:06:13.002744   36333 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-736061" in "kube-system" namespace to be "Ready" ...
	I0916 11:06:13.002789   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-736061
	I0916 11:06:13.002797   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:13.002803   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:13.002806   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:13.004330   36333 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:06:13.004344   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:13.004360   36333 round_trippers.go:580]     Audit-Id: 9c977c1a-2111-4f2d-b73b-88dc2584b240
	I0916 11:06:13.004367   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:13.004370   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:13.004374   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:13.004378   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:13.004382   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:12 GMT
	I0916 11:06:13.004780   36333 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-736061","namespace":"kube-system","uid":"f946773c-a82f-4e7e-8148-a81b41b27fa9","resourceVersion":"411","creationTimestamp":"2024-09-16T11:05:53Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.32:2379","kubernetes.io/config.hash":"69d3e8c6e76d0bc1af3482326f7904d1","kubernetes.io/config.mirror":"69d3e8c6e76d0bc1af3482326f7904d1","kubernetes.io/config.seen":"2024-09-16T11:05:53.622995492Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:05:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6418 chars]
	I0916 11:06:13.005178   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:06:13.005191   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:13.005198   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:13.005203   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:13.006652   36333 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:06:13.006661   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:13.006667   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:13.006670   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:13 GMT
	I0916 11:06:13.006673   36333 round_trippers.go:580]     Audit-Id: 5964a6f7-0e23-4a78-ab26-740b9efba3f0
	I0916 11:06:13.006676   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:13.006679   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:13.006681   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:13.007099   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"416","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0916 11:06:13.007382   36333 pod_ready.go:93] pod "etcd-multinode-736061" in "kube-system" namespace has status "Ready":"True"
	I0916 11:06:13.007398   36333 pod_ready.go:82] duration metric: took 4.649318ms for pod "etcd-multinode-736061" in "kube-system" namespace to be "Ready" ...
	I0916 11:06:13.007409   36333 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-736061" in "kube-system" namespace to be "Ready" ...
	I0916 11:06:13.007451   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-736061
	I0916 11:06:13.007458   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:13.007465   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:13.007469   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:13.009054   36333 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:06:13.009069   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:13.009077   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:13.009084   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:13.009087   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:13.009093   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:13 GMT
	I0916 11:06:13.009104   36333 round_trippers.go:580]     Audit-Id: 17f2d678-e228-472c-adfb-1a1d6ff375ff
	I0916 11:06:13.009108   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:13.009327   36333 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-736061","namespace":"kube-system","uid":"bb6b837b-db0a-455d-8055-ec513f470220","resourceVersion":"408","creationTimestamp":"2024-09-16T11:05:53Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.32:8443","kubernetes.io/config.hash":"efede0e1597c8cbe70740f3169f7ec4a","kubernetes.io/config.mirror":"efede0e1597c8cbe70740f3169f7ec4a","kubernetes.io/config.seen":"2024-09-16T11:05:53.622989337Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:05:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7637 chars]
	I0916 11:06:13.009756   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:06:13.009772   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:13.009779   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:13.009782   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:13.011049   36333 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:06:13.011060   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:13.011066   36333 round_trippers.go:580]     Audit-Id: 92ce6c5b-7c7f-4792-be66-2f0cfa85c88d
	I0916 11:06:13.011070   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:13.011073   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:13.011075   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:13.011077   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:13.011080   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:13 GMT
	I0916 11:06:13.011229   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"416","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0916 11:06:13.011534   36333 pod_ready.go:93] pod "kube-apiserver-multinode-736061" in "kube-system" namespace has status "Ready":"True"
	I0916 11:06:13.011547   36333 pod_ready.go:82] duration metric: took 4.132838ms for pod "kube-apiserver-multinode-736061" in "kube-system" namespace to be "Ready" ...
	I0916 11:06:13.011555   36333 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-736061" in "kube-system" namespace to be "Ready" ...
	I0916 11:06:13.011607   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-736061
	I0916 11:06:13.011616   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:13.011622   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:13.011626   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:13.012998   36333 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:06:13.013015   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:13.013024   36333 round_trippers.go:580]     Audit-Id: a4ba53c2-6b2a-4c94-af95-040a6fb841fa
	I0916 11:06:13.013031   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:13.013035   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:13.013039   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:13.013043   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:13.013046   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:13 GMT
	I0916 11:06:13.013346   36333 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-736061","namespace":"kube-system","uid":"53bb4e69-605c-4160-bf0a-f26e83e16ab1","resourceVersion":"412","creationTimestamp":"2024-09-16T11:05:53Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"94d3338940ee73a61a5075650d027904","kubernetes.io/config.mirror":"94d3338940ee73a61a5075650d027904","kubernetes.io/config.seen":"2024-09-16T11:05:53.622993259Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:05:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7198 chars]
	I0916 11:06:13.013794   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:06:13.013810   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:13.013820   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:13.013826   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:13.015589   36333 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:06:13.015604   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:13.015613   36333 round_trippers.go:580]     Audit-Id: 11b44286-7359-4aa9-86a4-95c383baef42
	I0916 11:06:13.015618   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:13.015622   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:13.015634   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:13.015641   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:13.015647   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:13 GMT
	I0916 11:06:13.016085   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"416","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0916 11:06:13.016377   36333 pod_ready.go:93] pod "kube-controller-manager-multinode-736061" in "kube-system" namespace has status "Ready":"True"
	I0916 11:06:13.016393   36333 pod_ready.go:82] duration metric: took 4.831092ms for pod "kube-controller-manager-multinode-736061" in "kube-system" namespace to be "Ready" ...
	I0916 11:06:13.016405   36333 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ftj9p" in "kube-system" namespace to be "Ready" ...
	I0916 11:06:13.016457   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ftj9p
	I0916 11:06:13.016465   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:13.016474   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:13.016482   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:13.017876   36333 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:06:13.017892   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:13.017900   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:13.017904   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:13.017911   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:13.017916   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:13.017923   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:13 GMT
	I0916 11:06:13.017929   36333 round_trippers.go:580]     Audit-Id: 6d42ae98-5d4f-4e69-b809-f90328681ea8
	I0916 11:06:13.018065   36333 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ftj9p","generateName":"kube-proxy-","namespace":"kube-system","uid":"fa72720f-1c4a-46a2-a733-f411ccb6f628","resourceVersion":"398","creationTimestamp":"2024-09-16T11:05:58Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"562d5386-4fc3-48d5-983a-19cdfbbddc77","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:05:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"562d5386-4fc3-48d5-983a-19cdfbbddc77\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6141 chars]
	I0916 11:06:13.087776   36333 request.go:632] Waited for 69.276696ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:06:13.087866   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:06:13.087871   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:13.087878   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:13.087881   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:13.090335   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:13.090354   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:13.090360   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:13 GMT
	I0916 11:06:13.090365   36333 round_trippers.go:580]     Audit-Id: 5cc20204-b636-4a60-9bd6-04d8b3098a2e
	I0916 11:06:13.090369   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:13.090375   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:13.090380   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:13.090386   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:13.090571   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"416","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0916 11:06:13.090955   36333 pod_ready.go:93] pod "kube-proxy-ftj9p" in "kube-system" namespace has status "Ready":"True"
	I0916 11:06:13.090975   36333 pod_ready.go:82] duration metric: took 74.562561ms for pod "kube-proxy-ftj9p" in "kube-system" namespace to be "Ready" ...
	I0916 11:06:13.090984   36333 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-736061" in "kube-system" namespace to be "Ready" ...
	I0916 11:06:13.287430   36333 request.go:632] Waited for 196.359065ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.32:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-736061
	I0916 11:06:13.287488   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-736061
	I0916 11:06:13.287493   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:13.287501   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:13.287505   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:13.289939   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:13.289961   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:13.289971   36333 round_trippers.go:580]     Audit-Id: 66efc13e-c84c-41a4-8eab-cbe270f52f0e
	I0916 11:06:13.289977   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:13.289981   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:13.289985   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:13.289990   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:13.289994   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:13 GMT
	I0916 11:06:13.290318   36333 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-736061","namespace":"kube-system","uid":"25a9a3ee-f264-4bd2-95fc-c8452bedc92b","resourceVersion":"413","creationTimestamp":"2024-09-16T11:05:53Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"de66983060c1e167c6b9498eb8b0a025","kubernetes.io/config.mirror":"de66983060c1e167c6b9498eb8b0a025","kubernetes.io/config.seen":"2024-09-16T11:05:47.723827022Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:05:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4937 chars]
	I0916 11:06:13.486996   36333 request.go:632] Waited for 196.307844ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:06:13.487064   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:06:13.487070   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:13.487092   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:13.487097   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:13.489715   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:13.489738   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:13.489747   36333 round_trippers.go:580]     Audit-Id: 336fca95-58b8-4e6c-b84b-042526fc9fbe
	I0916 11:06:13.489752   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:13.489757   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:13.489764   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:13.489768   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:13.489772   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:13 GMT
	I0916 11:06:13.490442   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"416","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0916 11:06:13.490831   36333 pod_ready.go:93] pod "kube-scheduler-multinode-736061" in "kube-system" namespace has status "Ready":"True"
	I0916 11:06:13.490850   36333 pod_ready.go:82] duration metric: took 399.858732ms for pod "kube-scheduler-multinode-736061" in "kube-system" namespace to be "Ready" ...
	I0916 11:06:13.490860   36333 pod_ready.go:39] duration metric: took 1.999774525s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
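
The per-pod waits above (coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) apply the analogous predicate to each Pod's Ready condition; each pod poll is paired with a node re-fetch and repeats on a roughly 500ms cadence until the condition flips to True. A hedged sketch of that check, reusing the imports and cs clientset from the node-readiness sketch above (podReady is an illustrative name, not minikube's pod_ready.go):

// podReady reports whether the named pod's Ready condition is True.
func podReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}
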
	I0916 11:06:13.490882   36333 api_server.go:52] waiting for apiserver process to appear ...
	I0916 11:06:13.490931   36333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 11:06:13.505992   36333 command_runner.go:130] > 1055
	I0916 11:06:13.506064   36333 api_server.go:72] duration metric: took 14.982447147s to wait for apiserver process to appear ...
	I0916 11:06:13.506079   36333 api_server.go:88] waiting for apiserver healthz status ...
	I0916 11:06:13.506096   36333 api_server.go:253] Checking apiserver healthz at https://192.168.39.32:8443/healthz ...
	I0916 11:06:13.510743   36333 api_server.go:279] https://192.168.39.32:8443/healthz returned 200:
	ok
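
The healthz probe logged above is a plain GET of the apiserver's /healthz endpoint, succeeding once it returns HTTP 200 with the body "ok". One way to issue the same request through client-go's REST client rather than raw HTTP, a sketch under the same cs clientset assumption as above:

// healthz issues the same probe as above: GET /healthz, expect 200 and "ok".
func healthz(ctx context.Context, cs kubernetes.Interface) error {
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").Do(ctx).Raw()
	if err != nil {
		return err
	}
	if string(body) != "ok" {
		return fmt.Errorf("unexpected /healthz response: %q", string(body))
	}
	return nil
}
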
	I0916 11:06:13.510820   36333 round_trippers.go:463] GET https://192.168.39.32:8443/version
	I0916 11:06:13.510832   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:13.510842   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:13.510846   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:13.511687   36333 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0916 11:06:13.511703   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:13.511710   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:13.511714   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:13.511717   36333 round_trippers.go:580]     Content-Length: 263
	I0916 11:06:13.511721   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:13 GMT
	I0916 11:06:13.511724   36333 round_trippers.go:580]     Audit-Id: dddcaee8-dc5a-43b4-bbb7-4446c3ea6dd4
	I0916 11:06:13.511726   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:13.511729   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:13.511761   36333 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "31",
	  "gitVersion": "v1.31.1",
	  "gitCommit": "948afe5ca072329a73c8e79ed5938717a5cb3d21",
	  "gitTreeState": "clean",
	  "buildDate": "2024-09-11T21:22:08Z",
	  "goVersion": "go1.22.6",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0916 11:06:13.511845   36333 api_server.go:141] control plane version: v1.31.1
	I0916 11:06:13.511863   36333 api_server.go:131] duration metric: took 5.778245ms to wait for apiserver health ...
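
The /version payload above is the apiserver's version.Info object; client-go's discovery client returns it directly, so the control-plane version ("v1.31.1" here) can be read without building the request by hand. A minimal sketch, again assuming the cs clientset from earlier:

// serverVersion returns the same data as the GET /version call logged above;
// info.GitVersion is "v1.31.1" for this cluster.
func serverVersion(cs kubernetes.Interface) (string, error) {
	info, err := cs.Discovery().ServerVersion()
	if err != nil {
		return "", err
	}
	return info.GitVersion, nil
}
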
	I0916 11:06:13.511870   36333 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 11:06:13.687271   36333 request.go:632] Waited for 175.343496ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.32:8443/api/v1/namespaces/kube-system/pods
	I0916 11:06:13.687351   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/namespaces/kube-system/pods
	I0916 11:06:13.687359   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:13.687369   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:13.687378   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:13.691059   36333 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 11:06:13.691080   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:13.691088   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:13.691094   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:13 GMT
	I0916 11:06:13.691099   36333 round_trippers.go:580]     Audit-Id: d5f1331e-ee18-4472-abaf-4ce39ab3590e
	I0916 11:06:13.691104   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:13.691108   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:13.691113   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:13.692310   36333 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"438"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-nlhl2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"6ea84b9d-f364-4e26-8dc8-44c3b4d92417","resourceVersion":"433","creationTimestamp":"2024-09-16T11:05:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"7a8b2d93-d4d7-4d8f-82e4-f4e98c989dd5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:05:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7a8b2d93-d4d7-4d8f-82e4-f4e98c989dd5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 57491 chars]
	I0916 11:06:13.694007   36333 system_pods.go:59] 8 kube-system pods found
	I0916 11:06:13.694031   36333 system_pods.go:61] "coredns-7c65d6cfc9-nlhl2" [6ea84b9d-f364-4e26-8dc8-44c3b4d92417] Running
	I0916 11:06:13.694036   36333 system_pods.go:61] "etcd-multinode-736061" [f946773c-a82f-4e7e-8148-a81b41b27fa9] Running
	I0916 11:06:13.694040   36333 system_pods.go:61] "kindnet-qb4tq" [933f0749-7868-4e96-9b8e-67005545bbc5] Running
	I0916 11:06:13.694043   36333 system_pods.go:61] "kube-apiserver-multinode-736061" [bb6b837b-db0a-455d-8055-ec513f470220] Running
	I0916 11:06:13.694048   36333 system_pods.go:61] "kube-controller-manager-multinode-736061" [53bb4e69-605c-4160-bf0a-f26e83e16ab1] Running
	I0916 11:06:13.694051   36333 system_pods.go:61] "kube-proxy-ftj9p" [fa72720f-1c4a-46a2-a733-f411ccb6f628] Running
	I0916 11:06:13.694054   36333 system_pods.go:61] "kube-scheduler-multinode-736061" [25a9a3ee-f264-4bd2-95fc-c8452bedc92b] Running
	I0916 11:06:13.694057   36333 system_pods.go:61] "storage-provisioner" [5e515ac2-bcf5-4a0f-a3af-b4ee4e03a534] Running
	I0916 11:06:13.694062   36333 system_pods.go:74] duration metric: took 182.187944ms to wait for pod list to return data ...
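
The "Waited for ... due to client-side throttling, not priority and fairness" lines scattered through this phase come from client-go's client-side rate limiter, not from the apiserver: an unset rest.Config defaults to roughly 5 requests/second with a burst of 10, so the bursts of node and pod GETs above queue briefly. A caller that wants to avoid those delays can raise the limits when building the config; the values below are illustrative and reuse the imports from the earlier sketches, they are not what minikube configures:

// Raising client-go's client-side rate limit removes the throttling waits;
// the library defaults are QPS=5, Burst=10 when left at zero.
cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
if err != nil {
	panic(err)
}
cfg.QPS = 50
cfg.Burst = 100
cs := kubernetes.NewForConfigOrDie(cfg)
_ = cs
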
	I0916 11:06:13.694070   36333 default_sa.go:34] waiting for default service account to be created ...
	I0916 11:06:13.887530   36333 request.go:632] Waited for 193.387272ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.32:8443/api/v1/namespaces/default/serviceaccounts
	I0916 11:06:13.887624   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/namespaces/default/serviceaccounts
	I0916 11:06:13.887631   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:13.887642   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:13.887650   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:13.890587   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:13.890607   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:13.890613   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:13.890617   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:13.890620   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:13.890623   36333 round_trippers.go:580]     Content-Length: 261
	I0916 11:06:13.890626   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:13 GMT
	I0916 11:06:13.890629   36333 round_trippers.go:580]     Audit-Id: dc704cc4-052e-4cd8-9722-add36ef0ebcf
	I0916 11:06:13.890632   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:13.890649   36333 request.go:1351] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"438"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"a7fd93be-2448-40ec-9a95-a7af11f4c24b","resourceVersion":"329","creationTimestamp":"2024-09-16T11:05:58Z"}}]}
	I0916 11:06:13.890920   36333 default_sa.go:45] found service account: "default"
	I0916 11:06:13.890938   36333 default_sa.go:55] duration metric: took 196.864556ms for default service account to be created ...
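The "Waited for ... due to client-side throttling, not priority and fairness" messages above come from client-go's built-in client-side rate limiter, not from server-side API Priority and Fairness. A minimal sketch, assuming a kubeconfig at a hypothetical path, of how a client-go consumer tunes that limiter through the QPS and Burst fields of rest.Config (illustrative only, not minikube's code):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; minikube builds its config differently.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	// client-go throttles requests locally; QPS and Burst control that limiter
	// and therefore how long calls sit in the "Waited for ..." state.
	cfg.QPS = 50
	cfg.Burst = 100

	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("kube-system pods:", len(pods.Items))
}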
	I0916 11:06:13.890947   36333 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 11:06:14.087112   36333 request.go:632] Waited for 196.092263ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.32:8443/api/v1/namespaces/kube-system/pods
	I0916 11:06:14.087172   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/namespaces/kube-system/pods
	I0916 11:06:14.087178   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:14.087186   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:14.087190   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:14.090366   36333 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 11:06:14.090395   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:14.090405   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:14 GMT
	I0916 11:06:14.090412   36333 round_trippers.go:580]     Audit-Id: ee138c8d-af1f-4dd1-88d7-5905f846a48a
	I0916 11:06:14.090418   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:14.090423   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:14.090432   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:14.090438   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:14.091044   36333 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"440"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-nlhl2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"6ea84b9d-f364-4e26-8dc8-44c3b4d92417","resourceVersion":"433","creationTimestamp":"2024-09-16T11:05:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"7a8b2d93-d4d7-4d8f-82e4-f4e98c989dd5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:05:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7a8b2d93-d4d7-4d8f-82e4-f4e98c989dd5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 57491 chars]
	I0916 11:06:14.092723   36333 system_pods.go:86] 8 kube-system pods found
	I0916 11:06:14.092742   36333 system_pods.go:89] "coredns-7c65d6cfc9-nlhl2" [6ea84b9d-f364-4e26-8dc8-44c3b4d92417] Running
	I0916 11:06:14.092747   36333 system_pods.go:89] "etcd-multinode-736061" [f946773c-a82f-4e7e-8148-a81b41b27fa9] Running
	I0916 11:06:14.092751   36333 system_pods.go:89] "kindnet-qb4tq" [933f0749-7868-4e96-9b8e-67005545bbc5] Running
	I0916 11:06:14.092754   36333 system_pods.go:89] "kube-apiserver-multinode-736061" [bb6b837b-db0a-455d-8055-ec513f470220] Running
	I0916 11:06:14.092760   36333 system_pods.go:89] "kube-controller-manager-multinode-736061" [53bb4e69-605c-4160-bf0a-f26e83e16ab1] Running
	I0916 11:06:14.092764   36333 system_pods.go:89] "kube-proxy-ftj9p" [fa72720f-1c4a-46a2-a733-f411ccb6f628] Running
	I0916 11:06:14.092772   36333 system_pods.go:89] "kube-scheduler-multinode-736061" [25a9a3ee-f264-4bd2-95fc-c8452bedc92b] Running
	I0916 11:06:14.092776   36333 system_pods.go:89] "storage-provisioner" [5e515ac2-bcf5-4a0f-a3af-b4ee4e03a534] Running
	I0916 11:06:14.092782   36333 system_pods.go:126] duration metric: took 201.830102ms to wait for k8s-apps to be running ...
	I0916 11:06:14.092791   36333 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 11:06:14.092830   36333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 11:06:14.108124   36333 system_svc.go:56] duration metric: took 15.325ms WaitForService to wait for kubelet
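The kubelet check above relies purely on systemctl's exit status: "systemctl is-active --quiet <unit>" exits 0 only when the unit is active. A minimal local sketch of the same check (illustrative; the real run executes it over SSH with sudo):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Run() returns a non-nil error for any non-zero exit status,
	// so err == nil means the unit reported itself active.
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	fmt.Println("kubelet active:", err == nil)
}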
	I0916 11:06:14.108152   36333 kubeadm.go:582] duration metric: took 15.5845367s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 11:06:14.108173   36333 node_conditions.go:102] verifying NodePressure condition ...
	I0916 11:06:14.287839   36333 request.go:632] Waited for 179.59535ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.32:8443/api/v1/nodes
	I0916 11:06:14.287910   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes
	I0916 11:06:14.287923   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:14.287931   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:14.287936   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:14.290746   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:14.290764   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:14.290770   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:14.290774   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:14.290778   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:14.290781   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:14.290783   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:14 GMT
	I0916 11:06:14.290785   36333 round_trippers.go:580]     Audit-Id: c51e4b4c-4d6b-4976-bbaa-01dc82a04c9d
	I0916 11:06:14.290954   36333 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"440"},"items":[{"metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"416","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 5951 chars]
	I0916 11:06:14.291328   36333 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0916 11:06:14.291349   36333 node_conditions.go:123] node cpu capacity is 2
	I0916 11:06:14.291361   36333 node_conditions.go:105] duration metric: took 183.182804ms to run NodePressure ...
	I0916 11:06:14.291378   36333 start.go:241] waiting for startup goroutines ...
	I0916 11:06:14.291388   36333 start.go:246] waiting for cluster config update ...
	I0916 11:06:14.291397   36333 start.go:255] writing updated cluster config ...
	I0916 11:06:14.293212   36333 out.go:201] 
	I0916 11:06:14.294449   36333 config.go:182] Loaded profile config "multinode-736061": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:06:14.294514   36333 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/config.json ...
	I0916 11:06:14.295858   36333 out.go:177] * Starting "multinode-736061-m02" worker node in "multinode-736061" cluster
	I0916 11:06:14.296957   36333 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 11:06:14.296978   36333 cache.go:56] Caching tarball of preloaded images
	I0916 11:06:14.297080   36333 preload.go:172] Found /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 11:06:14.297090   36333 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 11:06:14.297169   36333 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/config.json ...
	I0916 11:06:14.297332   36333 start.go:360] acquireMachinesLock for multinode-736061-m02: {Name:mk413037138532b69f77e412c3960796c09316f1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 11:06:14.297376   36333 start.go:364] duration metric: took 26.88µs to acquireMachinesLock for "multinode-736061-m02"
	I0916 11:06:14.297392   36333 start.go:93] Provisioning new machine with config: &{Name:multinode-736061 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Ku
bernetesVersion:v1.31.1 ClusterName:multinode-736061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.32 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 Cer
tExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0916 11:06:14.297453   36333 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0916 11:06:14.299005   36333 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0916 11:06:14.299098   36333 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 11:06:14.299139   36333 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 11:06:14.313697   36333 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45673
	I0916 11:06:14.314138   36333 main.go:141] libmachine: () Calling .GetVersion
	I0916 11:06:14.314631   36333 main.go:141] libmachine: Using API Version  1
	I0916 11:06:14.314652   36333 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 11:06:14.314925   36333 main.go:141] libmachine: () Calling .GetMachineName
	I0916 11:06:14.315113   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetMachineName
	I0916 11:06:14.315246   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .DriverName
	I0916 11:06:14.315398   36333 start.go:159] libmachine.API.Create for "multinode-736061" (driver="kvm2")
	I0916 11:06:14.315428   36333 client.go:168] LocalClient.Create starting
	I0916 11:06:14.315458   36333 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem
	I0916 11:06:14.315488   36333 main.go:141] libmachine: Decoding PEM data...
	I0916 11:06:14.315501   36333 main.go:141] libmachine: Parsing certificate...
	I0916 11:06:14.315551   36333 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem
	I0916 11:06:14.315571   36333 main.go:141] libmachine: Decoding PEM data...
	I0916 11:06:14.315581   36333 main.go:141] libmachine: Parsing certificate...
	I0916 11:06:14.315594   36333 main.go:141] libmachine: Running pre-create checks...
	I0916 11:06:14.315601   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .PreCreateCheck
	I0916 11:06:14.315736   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetConfigRaw
	I0916 11:06:14.316069   36333 main.go:141] libmachine: Creating machine...
	I0916 11:06:14.316081   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .Create
	I0916 11:06:14.316204   36333 main.go:141] libmachine: (multinode-736061-m02) Creating KVM machine...
	I0916 11:06:14.317493   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | found existing default KVM network
	I0916 11:06:14.317650   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | found existing private KVM network mk-multinode-736061
	I0916 11:06:14.317799   36333 main.go:141] libmachine: (multinode-736061-m02) Setting up store path in /home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061-m02 ...
	I0916 11:06:14.317817   36333 main.go:141] libmachine: (multinode-736061-m02) Building disk image from file:///home/jenkins/minikube-integration/19651-3851/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso
	I0916 11:06:14.317887   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | I0916 11:06:14.317799   36743 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 11:06:14.317991   36333 main.go:141] libmachine: (multinode-736061-m02) Downloading /home/jenkins/minikube-integration/19651-3851/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19651-3851/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso...
	I0916 11:06:14.549863   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | I0916 11:06:14.549740   36743 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061-m02/id_rsa...
	I0916 11:06:14.787226   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | I0916 11:06:14.787096   36743 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061-m02/multinode-736061-m02.rawdisk...
	I0916 11:06:14.787254   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | Writing magic tar header
	I0916 11:06:14.787268   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | Writing SSH key tar header
	I0916 11:06:14.787278   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | I0916 11:06:14.787200   36743 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061-m02 ...
	I0916 11:06:14.787317   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061-m02
	I0916 11:06:14.787336   36333 main.go:141] libmachine: (multinode-736061-m02) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061-m02 (perms=drwx------)
	I0916 11:06:14.787363   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851/.minikube/machines
	I0916 11:06:14.787378   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 11:06:14.787390   36333 main.go:141] libmachine: (multinode-736061-m02) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851/.minikube/machines (perms=drwxr-xr-x)
	I0916 11:06:14.787401   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851
	I0916 11:06:14.787414   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0916 11:06:14.787431   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | Checking permissions on dir: /home/jenkins
	I0916 11:06:14.787452   36333 main.go:141] libmachine: (multinode-736061-m02) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851/.minikube (perms=drwxr-xr-x)
	I0916 11:06:14.787470   36333 main.go:141] libmachine: (multinode-736061-m02) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851 (perms=drwxrwxr-x)
	I0916 11:06:14.787482   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | Checking permissions on dir: /home
	I0916 11:06:14.787489   36333 main.go:141] libmachine: (multinode-736061-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0916 11:06:14.787495   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | Skipping /home - not owner
	I0916 11:06:14.787501   36333 main.go:141] libmachine: (multinode-736061-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0916 11:06:14.787509   36333 main.go:141] libmachine: (multinode-736061-m02) Creating domain...
	I0916 11:06:14.788405   36333 main.go:141] libmachine: (multinode-736061-m02) define libvirt domain using xml: 
	I0916 11:06:14.788426   36333 main.go:141] libmachine: (multinode-736061-m02) <domain type='kvm'>
	I0916 11:06:14.788434   36333 main.go:141] libmachine: (multinode-736061-m02)   <name>multinode-736061-m02</name>
	I0916 11:06:14.788439   36333 main.go:141] libmachine: (multinode-736061-m02)   <memory unit='MiB'>2200</memory>
	I0916 11:06:14.788449   36333 main.go:141] libmachine: (multinode-736061-m02)   <vcpu>2</vcpu>
	I0916 11:06:14.788454   36333 main.go:141] libmachine: (multinode-736061-m02)   <features>
	I0916 11:06:14.788459   36333 main.go:141] libmachine: (multinode-736061-m02)     <acpi/>
	I0916 11:06:14.788463   36333 main.go:141] libmachine: (multinode-736061-m02)     <apic/>
	I0916 11:06:14.788468   36333 main.go:141] libmachine: (multinode-736061-m02)     <pae/>
	I0916 11:06:14.788476   36333 main.go:141] libmachine: (multinode-736061-m02)     
	I0916 11:06:14.788481   36333 main.go:141] libmachine: (multinode-736061-m02)   </features>
	I0916 11:06:14.788488   36333 main.go:141] libmachine: (multinode-736061-m02)   <cpu mode='host-passthrough'>
	I0916 11:06:14.788492   36333 main.go:141] libmachine: (multinode-736061-m02)   
	I0916 11:06:14.788496   36333 main.go:141] libmachine: (multinode-736061-m02)   </cpu>
	I0916 11:06:14.788501   36333 main.go:141] libmachine: (multinode-736061-m02)   <os>
	I0916 11:06:14.788507   36333 main.go:141] libmachine: (multinode-736061-m02)     <type>hvm</type>
	I0916 11:06:14.788513   36333 main.go:141] libmachine: (multinode-736061-m02)     <boot dev='cdrom'/>
	I0916 11:06:14.788523   36333 main.go:141] libmachine: (multinode-736061-m02)     <boot dev='hd'/>
	I0916 11:06:14.788529   36333 main.go:141] libmachine: (multinode-736061-m02)     <bootmenu enable='no'/>
	I0916 11:06:14.788533   36333 main.go:141] libmachine: (multinode-736061-m02)   </os>
	I0916 11:06:14.788538   36333 main.go:141] libmachine: (multinode-736061-m02)   <devices>
	I0916 11:06:14.788542   36333 main.go:141] libmachine: (multinode-736061-m02)     <disk type='file' device='cdrom'>
	I0916 11:06:14.788550   36333 main.go:141] libmachine: (multinode-736061-m02)       <source file='/home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061-m02/boot2docker.iso'/>
	I0916 11:06:14.788556   36333 main.go:141] libmachine: (multinode-736061-m02)       <target dev='hdc' bus='scsi'/>
	I0916 11:06:14.788561   36333 main.go:141] libmachine: (multinode-736061-m02)       <readonly/>
	I0916 11:06:14.788566   36333 main.go:141] libmachine: (multinode-736061-m02)     </disk>
	I0916 11:06:14.788573   36333 main.go:141] libmachine: (multinode-736061-m02)     <disk type='file' device='disk'>
	I0916 11:06:14.788583   36333 main.go:141] libmachine: (multinode-736061-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0916 11:06:14.788596   36333 main.go:141] libmachine: (multinode-736061-m02)       <source file='/home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061-m02/multinode-736061-m02.rawdisk'/>
	I0916 11:06:14.788607   36333 main.go:141] libmachine: (multinode-736061-m02)       <target dev='hda' bus='virtio'/>
	I0916 11:06:14.788613   36333 main.go:141] libmachine: (multinode-736061-m02)     </disk>
	I0916 11:06:14.788618   36333 main.go:141] libmachine: (multinode-736061-m02)     <interface type='network'>
	I0916 11:06:14.788624   36333 main.go:141] libmachine: (multinode-736061-m02)       <source network='mk-multinode-736061'/>
	I0916 11:06:14.788629   36333 main.go:141] libmachine: (multinode-736061-m02)       <model type='virtio'/>
	I0916 11:06:14.788634   36333 main.go:141] libmachine: (multinode-736061-m02)     </interface>
	I0916 11:06:14.788638   36333 main.go:141] libmachine: (multinode-736061-m02)     <interface type='network'>
	I0916 11:06:14.788644   36333 main.go:141] libmachine: (multinode-736061-m02)       <source network='default'/>
	I0916 11:06:14.788659   36333 main.go:141] libmachine: (multinode-736061-m02)       <model type='virtio'/>
	I0916 11:06:14.788666   36333 main.go:141] libmachine: (multinode-736061-m02)     </interface>
	I0916 11:06:14.788671   36333 main.go:141] libmachine: (multinode-736061-m02)     <serial type='pty'>
	I0916 11:06:14.788678   36333 main.go:141] libmachine: (multinode-736061-m02)       <target port='0'/>
	I0916 11:06:14.788685   36333 main.go:141] libmachine: (multinode-736061-m02)     </serial>
	I0916 11:06:14.788701   36333 main.go:141] libmachine: (multinode-736061-m02)     <console type='pty'>
	I0916 11:06:14.788717   36333 main.go:141] libmachine: (multinode-736061-m02)       <target type='serial' port='0'/>
	I0916 11:06:14.788756   36333 main.go:141] libmachine: (multinode-736061-m02)     </console>
	I0916 11:06:14.788776   36333 main.go:141] libmachine: (multinode-736061-m02)     <rng model='virtio'>
	I0916 11:06:14.788788   36333 main.go:141] libmachine: (multinode-736061-m02)       <backend model='random'>/dev/random</backend>
	I0916 11:06:14.788799   36333 main.go:141] libmachine: (multinode-736061-m02)     </rng>
	I0916 11:06:14.788810   36333 main.go:141] libmachine: (multinode-736061-m02)     
	I0916 11:06:14.788819   36333 main.go:141] libmachine: (multinode-736061-m02)     
	I0916 11:06:14.788829   36333 main.go:141] libmachine: (multinode-736061-m02)   </devices>
	I0916 11:06:14.788839   36333 main.go:141] libmachine: (multinode-736061-m02) </domain>
	I0916 11:06:14.788858   36333 main.go:141] libmachine: (multinode-736061-m02) 
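The <domain> XML logged line by line above is what the kvm2 driver hands to libvirt to create the VM. A minimal sketch, assuming that XML has been saved to a hypothetical domain.xml file, of defining and booting the same domain with the libvirt Go bindings (illustrative only, not the driver's own code):

package main

import (
	"os"

	"libvirt.org/go/libvirt"
)

func main() {
	// Hypothetical file holding the <domain> XML shown in the log above.
	xml, err := os.ReadFile("domain.xml")
	if err != nil {
		panic(err)
	}
	// Same URI as KVMQemuURI in the logged machine config.
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// DomainDefineXML registers the domain persistently; Create boots it.
	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		panic(err)
	}
	defer dom.Free()
	if err := dom.Create(); err != nil {
		panic(err)
	}
}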
	I0916 11:06:14.795470   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined MAC address 52:54:00:f7:d3:a0 in network default
	I0916 11:06:14.796000   36333 main.go:141] libmachine: (multinode-736061-m02) Ensuring networks are active...
	I0916 11:06:14.796022   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:14.796683   36333 main.go:141] libmachine: (multinode-736061-m02) Ensuring network default is active
	I0916 11:06:14.796930   36333 main.go:141] libmachine: (multinode-736061-m02) Ensuring network mk-multinode-736061 is active
	I0916 11:06:14.797372   36333 main.go:141] libmachine: (multinode-736061-m02) Getting domain xml...
	I0916 11:06:14.798084   36333 main.go:141] libmachine: (multinode-736061-m02) Creating domain...
	I0916 11:06:15.994264   36333 main.go:141] libmachine: (multinode-736061-m02) Waiting to get IP...
	I0916 11:06:15.995084   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:15.995470   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | unable to find current IP address of domain multinode-736061-m02 in network mk-multinode-736061
	I0916 11:06:15.995503   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | I0916 11:06:15.995466   36743 retry.go:31] will retry after 256.165137ms: waiting for machine to come up
	I0916 11:06:16.252819   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:16.253216   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | unable to find current IP address of domain multinode-736061-m02 in network mk-multinode-736061
	I0916 11:06:16.253247   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | I0916 11:06:16.253174   36743 retry.go:31] will retry after 256.581641ms: waiting for machine to come up
	I0916 11:06:16.511597   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:16.512046   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | unable to find current IP address of domain multinode-736061-m02 in network mk-multinode-736061
	I0916 11:06:16.512078   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | I0916 11:06:16.511989   36743 retry.go:31] will retry after 470.100013ms: waiting for machine to come up
	I0916 11:06:16.983320   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:16.983794   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | unable to find current IP address of domain multinode-736061-m02 in network mk-multinode-736061
	I0916 11:06:16.983822   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | I0916 11:06:16.983738   36743 retry.go:31] will retry after 481.533252ms: waiting for machine to come up
	I0916 11:06:17.466315   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:17.466714   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | unable to find current IP address of domain multinode-736061-m02 in network mk-multinode-736061
	I0916 11:06:17.466739   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | I0916 11:06:17.466674   36743 retry.go:31] will retry after 526.97274ms: waiting for machine to come up
	I0916 11:06:17.995390   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:17.995770   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | unable to find current IP address of domain multinode-736061-m02 in network mk-multinode-736061
	I0916 11:06:17.995797   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | I0916 11:06:17.995725   36743 retry.go:31] will retry after 715.156872ms: waiting for machine to come up
	I0916 11:06:18.712619   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:18.712975   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | unable to find current IP address of domain multinode-736061-m02 in network mk-multinode-736061
	I0916 11:06:18.713005   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | I0916 11:06:18.712955   36743 retry.go:31] will retry after 1.04953302s: waiting for machine to come up
	I0916 11:06:19.764242   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:19.764720   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | unable to find current IP address of domain multinode-736061-m02 in network mk-multinode-736061
	I0916 11:06:19.764746   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | I0916 11:06:19.764678   36743 retry.go:31] will retry after 1.464498529s: waiting for machine to come up
	I0916 11:06:21.231491   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:21.231895   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | unable to find current IP address of domain multinode-736061-m02 in network mk-multinode-736061
	I0916 11:06:21.231924   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | I0916 11:06:21.231846   36743 retry.go:31] will retry after 1.276932559s: waiting for machine to come up
	I0916 11:06:22.510085   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:22.510462   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | unable to find current IP address of domain multinode-736061-m02 in network mk-multinode-736061
	I0916 11:06:22.510492   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | I0916 11:06:22.510406   36743 retry.go:31] will retry after 2.116322467s: waiting for machine to come up
	I0916 11:06:24.628072   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:24.628517   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | unable to find current IP address of domain multinode-736061-m02 in network mk-multinode-736061
	I0916 11:06:24.628565   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | I0916 11:06:24.628488   36743 retry.go:31] will retry after 1.82576742s: waiting for machine to come up
	I0916 11:06:26.456449   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:26.456879   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | unable to find current IP address of domain multinode-736061-m02 in network mk-multinode-736061
	I0916 11:06:26.456902   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | I0916 11:06:26.456832   36743 retry.go:31] will retry after 3.525211369s: waiting for machine to come up
	I0916 11:06:29.983080   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:29.983452   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | unable to find current IP address of domain multinode-736061-m02 in network mk-multinode-736061
	I0916 11:06:29.983481   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | I0916 11:06:29.983403   36743 retry.go:31] will retry after 4.1489865s: waiting for machine to come up
	I0916 11:06:34.136632   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:34.137015   36333 main.go:141] libmachine: (multinode-736061-m02) Found IP for machine: 192.168.39.215
	I0916 11:06:34.137038   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has current primary IP address 192.168.39.215 and MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:34.137046   36333 main.go:141] libmachine: (multinode-736061-m02) Reserving static IP address...
	I0916 11:06:34.137377   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | unable to find host DHCP lease matching {name: "multinode-736061-m02", mac: "52:54:00:ab:7f:3f", ip: "192.168.39.215"} in network mk-multinode-736061
	I0916 11:06:34.212620   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | Getting to WaitForSSH function...
	I0916 11:06:34.212664   36333 main.go:141] libmachine: (multinode-736061-m02) Reserved static IP address: 192.168.39.215
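The "will retry after ..." lines above show a polling loop whose delay grows between attempts while the driver waits for the domain's DHCP lease to appear. A minimal illustrative sketch of that grow-and-retry pattern (not minikube's retry package; lookupIP is a hypothetical probe standing in for the lease query):

package main

import (
	"errors"
	"fmt"
	"time"
)

func lookupIP() (string, error) {
	// Placeholder: the real flow queries libvirt DHCP leases for the
	// domain's MAC address until one matches.
	return "", errors.New("no lease yet")
}

func main() {
	delay := 250 * time.Millisecond
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		ip, err := lookupIP()
		if err == nil {
			fmt.Println("found IP:", ip)
			return
		}
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
		// Grow the delay so later attempts back off, as in the log.
		delay *= 2
		if delay > 5*time.Second {
			delay = 5 * time.Second
		}
	}
	fmt.Println("timed out waiting for an IP")
}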
	I0916 11:06:34.212684   36333 main.go:141] libmachine: (multinode-736061-m02) Waiting for SSH to be available...
	I0916 11:06:34.215237   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:34.215601   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:ab:7f:3f", ip: ""} in network mk-multinode-736061
	I0916 11:06:34.215624   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | unable to find defined IP address of network mk-multinode-736061 interface with MAC address 52:54:00:ab:7f:3f
	I0916 11:06:34.215724   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | Using SSH client type: external
	I0916 11:06:34.215747   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061-m02/id_rsa (-rw-------)
	I0916 11:06:34.215785   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0916 11:06:34.215799   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | About to run SSH command:
	I0916 11:06:34.215813   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | exit 0
	I0916 11:06:34.219441   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | SSH cmd err, output: exit status 255: 
	I0916 11:06:34.219459   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0916 11:06:34.219479   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | command : exit 0
	I0916 11:06:34.219486   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | err     : exit status 255
	I0916 11:06:34.219500   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | output  : 
	I0916 11:06:37.221305   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | Getting to WaitForSSH function...
	I0916 11:06:37.223785   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:37.224218   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:7f:3f", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:06:28 +0000 UTC Type:0 Mac:52:54:00:ab:7f:3f Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:multinode-736061-m02 Clientid:01:52:54:00:ab:7f:3f}
	I0916 11:06:37.224249   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:37.224389   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | Using SSH client type: external
	I0916 11:06:37.224425   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061-m02/id_rsa (-rw-------)
	I0916 11:06:37.224453   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.215 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0916 11:06:37.224466   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | About to run SSH command:
	I0916 11:06:37.224478   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | exit 0
	I0916 11:06:37.353335   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | SSH cmd err, output: <nil>: 
	I0916 11:06:37.353612   36333 main.go:141] libmachine: (multinode-736061-m02) KVM machine creation complete!
	I0916 11:06:37.353916   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetConfigRaw
	I0916 11:06:37.354454   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .DriverName
	I0916 11:06:37.354670   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .DriverName
	I0916 11:06:37.354813   36333 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0916 11:06:37.354837   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetState
	I0916 11:06:37.356155   36333 main.go:141] libmachine: Detecting operating system of created instance...
	I0916 11:06:37.356168   36333 main.go:141] libmachine: Waiting for SSH to be available...
	I0916 11:06:37.356173   36333 main.go:141] libmachine: Getting to WaitForSSH function...
	I0916 11:06:37.356178   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHHostname
	I0916 11:06:37.358470   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:37.358821   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:7f:3f", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:06:28 +0000 UTC Type:0 Mac:52:54:00:ab:7f:3f Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:multinode-736061-m02 Clientid:01:52:54:00:ab:7f:3f}
	I0916 11:06:37.358851   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:37.359033   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHPort
	I0916 11:06:37.359202   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHKeyPath
	I0916 11:06:37.359379   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHKeyPath
	I0916 11:06:37.359518   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHUsername
	I0916 11:06:37.359712   36333 main.go:141] libmachine: Using SSH client type: native
	I0916 11:06:37.359921   36333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0916 11:06:37.359934   36333 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0916 11:06:37.472504   36333 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 11:06:37.472530   36333 main.go:141] libmachine: Detecting the provisioner...
	I0916 11:06:37.472541   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHHostname
	I0916 11:06:37.475233   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:37.475607   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:7f:3f", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:06:28 +0000 UTC Type:0 Mac:52:54:00:ab:7f:3f Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:multinode-736061-m02 Clientid:01:52:54:00:ab:7f:3f}
	I0916 11:06:37.475636   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:37.475857   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHPort
	I0916 11:06:37.476043   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHKeyPath
	I0916 11:06:37.476177   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHKeyPath
	I0916 11:06:37.476273   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHUsername
	I0916 11:06:37.476421   36333 main.go:141] libmachine: Using SSH client type: native
	I0916 11:06:37.476603   36333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0916 11:06:37.476615   36333 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0916 11:06:37.589999   36333 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0916 11:06:37.590067   36333 main.go:141] libmachine: found compatible host: buildroot
	I0916 11:06:37.590080   36333 main.go:141] libmachine: Provisioning with buildroot...
	I0916 11:06:37.590090   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetMachineName
	I0916 11:06:37.590330   36333 buildroot.go:166] provisioning hostname "multinode-736061-m02"
	I0916 11:06:37.590353   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetMachineName
	I0916 11:06:37.590535   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHHostname
	I0916 11:06:37.593099   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:37.593511   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:7f:3f", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:06:28 +0000 UTC Type:0 Mac:52:54:00:ab:7f:3f Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:multinode-736061-m02 Clientid:01:52:54:00:ab:7f:3f}
	I0916 11:06:37.593545   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:37.593707   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHPort
	I0916 11:06:37.593913   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHKeyPath
	I0916 11:06:37.594073   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHKeyPath
	I0916 11:06:37.594252   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHUsername
	I0916 11:06:37.594426   36333 main.go:141] libmachine: Using SSH client type: native
	I0916 11:06:37.594610   36333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0916 11:06:37.594626   36333 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-736061-m02 && echo "multinode-736061-m02" | sudo tee /etc/hostname
	I0916 11:06:37.725054   36333 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-736061-m02
	
	I0916 11:06:37.725083   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHHostname
	I0916 11:06:37.727908   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:37.728266   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:7f:3f", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:06:28 +0000 UTC Type:0 Mac:52:54:00:ab:7f:3f Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:multinode-736061-m02 Clientid:01:52:54:00:ab:7f:3f}
	I0916 11:06:37.728290   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:37.728459   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHPort
	I0916 11:06:37.728603   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHKeyPath
	I0916 11:06:37.728791   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHKeyPath
	I0916 11:06:37.728929   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHUsername
	I0916 11:06:37.729108   36333 main.go:141] libmachine: Using SSH client type: native
	I0916 11:06:37.729301   36333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0916 11:06:37.729318   36333 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-736061-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-736061-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-736061-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 11:06:37.850812   36333 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 11:06:37.850838   36333 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3851/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3851/.minikube}
	I0916 11:06:37.850861   36333 buildroot.go:174] setting up certificates
	I0916 11:06:37.850873   36333 provision.go:84] configureAuth start
	I0916 11:06:37.850887   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetMachineName
	I0916 11:06:37.851152   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetIP
	I0916 11:06:37.853960   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:37.854316   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:7f:3f", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:06:28 +0000 UTC Type:0 Mac:52:54:00:ab:7f:3f Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:multinode-736061-m02 Clientid:01:52:54:00:ab:7f:3f}
	I0916 11:06:37.854352   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:37.854551   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHHostname
	I0916 11:06:37.857790   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:37.858201   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:7f:3f", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:06:28 +0000 UTC Type:0 Mac:52:54:00:ab:7f:3f Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:multinode-736061-m02 Clientid:01:52:54:00:ab:7f:3f}
	I0916 11:06:37.858229   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:37.858390   36333 provision.go:143] copyHostCerts
	I0916 11:06:37.858422   36333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem
	I0916 11:06:37.858461   36333 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem, removing ...
	I0916 11:06:37.858470   36333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem
	I0916 11:06:37.858532   36333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem (1679 bytes)
	I0916 11:06:37.858604   36333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem
	I0916 11:06:37.858621   36333 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem, removing ...
	I0916 11:06:37.858634   36333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem
	I0916 11:06:37.858659   36333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem (1082 bytes)
	I0916 11:06:37.858701   36333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem
	I0916 11:06:37.858718   36333 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem, removing ...
	I0916 11:06:37.858724   36333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem
	I0916 11:06:37.858743   36333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem (1123 bytes)
	I0916 11:06:37.858790   36333 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem org=jenkins.multinode-736061-m02 san=[127.0.0.1 192.168.39.215 localhost minikube multinode-736061-m02]
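The server certificate generated above carries the SANs 127.0.0.1, 192.168.39.215, localhost, minikube and multinode-736061-m02, so the node can be addressed under any of those names. A minimal sketch of producing a certificate with the same subject alternative names using Go's crypto/x509; it self-signs for brevity, whereas the logged flow signs the server cert with the cluster CA (ca.pem / ca-key.pem):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-736061-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the logged config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// Subject alternative names taken from the log line above.
		DNSNames:    []string{"localhost", "minikube", "multinode-736061-m02"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.215")},
	}
	// Self-signed for the sketch: the template is both subject and issuer.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	out, err := os.Create("server.pem")
	if err != nil {
		panic(err)
	}
	defer out.Close()
	pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}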
	I0916 11:06:37.923156   36333 provision.go:177] copyRemoteCerts
	I0916 11:06:37.923208   36333 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 11:06:37.923231   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHHostname
	I0916 11:06:37.925836   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:37.926258   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:7f:3f", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:06:28 +0000 UTC Type:0 Mac:52:54:00:ab:7f:3f Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:multinode-736061-m02 Clientid:01:52:54:00:ab:7f:3f}
	I0916 11:06:37.926290   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:37.926437   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHPort
	I0916 11:06:37.926626   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHKeyPath
	I0916 11:06:37.926793   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHUsername
	I0916 11:06:37.926926   36333 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061-m02/id_rsa Username:docker}
	I0916 11:06:38.012129   36333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 11:06:38.012207   36333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 11:06:38.037100   36333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 11:06:38.037189   36333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0916 11:06:38.061563   36333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 11:06:38.061639   36333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 11:06:38.086240   36333 provision.go:87] duration metric: took 235.355849ms to configureAuth
	I0916 11:06:38.086275   36333 buildroot.go:189] setting minikube options for container-runtime
	I0916 11:06:38.086480   36333 config.go:182] Loaded profile config "multinode-736061": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:06:38.086569   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHHostname
	I0916 11:06:38.089063   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:38.089497   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:7f:3f", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:06:28 +0000 UTC Type:0 Mac:52:54:00:ab:7f:3f Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:multinode-736061-m02 Clientid:01:52:54:00:ab:7f:3f}
	I0916 11:06:38.089523   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:38.089726   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHPort
	I0916 11:06:38.089949   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHKeyPath
	I0916 11:06:38.090094   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHKeyPath
	I0916 11:06:38.090233   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHUsername
	I0916 11:06:38.090377   36333 main.go:141] libmachine: Using SSH client type: native
	I0916 11:06:38.090580   36333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0916 11:06:38.090606   36333 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 11:06:38.321227   36333 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
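The drop-in written here is only useful if the ISO's crio.service sources /etc/sysconfig/crio.minikube into its ExecStart (an assumption, not confirmed by the log). If it does, the flag shows up on the running daemon and can be checked on the node with:

    # Run on the node; the ps output should include --insecure-registry 10.96.0.0/12
    cat /etc/sysconfig/crio.minikube
    ps -o args= -C crio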
	
	I0916 11:06:38.321256   36333 main.go:141] libmachine: Checking connection to Docker...
	I0916 11:06:38.321267   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetURL
	I0916 11:06:38.322472   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | Using libvirt version 6000000
	I0916 11:06:38.324838   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:38.325188   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:7f:3f", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:06:28 +0000 UTC Type:0 Mac:52:54:00:ab:7f:3f Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:multinode-736061-m02 Clientid:01:52:54:00:ab:7f:3f}
	I0916 11:06:38.325217   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:38.325403   36333 main.go:141] libmachine: Docker is up and running!
	I0916 11:06:38.325423   36333 main.go:141] libmachine: Reticulating splines...
	I0916 11:06:38.325430   36333 client.go:171] duration metric: took 24.009992581s to LocalClient.Create
	I0916 11:06:38.325453   36333 start.go:167] duration metric: took 24.010057312s to libmachine.API.Create "multinode-736061"
	I0916 11:06:38.325463   36333 start.go:293] postStartSetup for "multinode-736061-m02" (driver="kvm2")
	I0916 11:06:38.325472   36333 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 11:06:38.325488   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .DriverName
	I0916 11:06:38.325735   36333 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 11:06:38.325761   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHHostname
	I0916 11:06:38.327885   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:38.328246   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:7f:3f", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:06:28 +0000 UTC Type:0 Mac:52:54:00:ab:7f:3f Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:multinode-736061-m02 Clientid:01:52:54:00:ab:7f:3f}
	I0916 11:06:38.328274   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:38.328401   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHPort
	I0916 11:06:38.328576   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHKeyPath
	I0916 11:06:38.328755   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHUsername
	I0916 11:06:38.328893   36333 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061-m02/id_rsa Username:docker}
	I0916 11:06:38.417551   36333 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 11:06:38.422302   36333 command_runner.go:130] > NAME=Buildroot
	I0916 11:06:38.422325   36333 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0916 11:06:38.422330   36333 command_runner.go:130] > ID=buildroot
	I0916 11:06:38.422338   36333 command_runner.go:130] > VERSION_ID=2023.02.9
	I0916 11:06:38.422344   36333 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0916 11:06:38.422380   36333 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 11:06:38.422396   36333 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3851/.minikube/addons for local assets ...
	I0916 11:06:38.422482   36333 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3851/.minikube/files for local assets ...
	I0916 11:06:38.422578   36333 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem -> 112032.pem in /etc/ssl/certs
	I0916 11:06:38.422590   36333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem -> /etc/ssl/certs/112032.pem
	I0916 11:06:38.422721   36333 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 11:06:38.432790   36333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem --> /etc/ssl/certs/112032.pem (1708 bytes)
	I0916 11:06:38.457473   36333 start.go:296] duration metric: took 131.99444ms for postStartSetup
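The filesync scan above is minikube's documented file-sync mechanism: anything placed under $MINIKUBE_HOME/.minikube/files/ is copied onto the node at the same absolute path, which is how 112032.pem ends up in /etc/ssl/certs. A small usage sketch (the certificate name is hypothetical):

    mkdir -p ~/.minikube/files/etc/ssl/certs
    cp my-extra-ca.pem ~/.minikube/files/etc/ssl/certs/   # synced as /etc/ssl/certs/my-extra-ca.pem
    minikube start -p multinode-736061                    # files are re-synced on the next start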
	I0916 11:06:38.457527   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetConfigRaw
	I0916 11:06:38.458085   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetIP
	I0916 11:06:38.460620   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:38.461040   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:7f:3f", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:06:28 +0000 UTC Type:0 Mac:52:54:00:ab:7f:3f Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:multinode-736061-m02 Clientid:01:52:54:00:ab:7f:3f}
	I0916 11:06:38.461064   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:38.461314   36333 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/config.json ...
	I0916 11:06:38.461550   36333 start.go:128] duration metric: took 24.164086939s to createHost
	I0916 11:06:38.461575   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHHostname
	I0916 11:06:38.463833   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:38.464136   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:7f:3f", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:06:28 +0000 UTC Type:0 Mac:52:54:00:ab:7f:3f Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:multinode-736061-m02 Clientid:01:52:54:00:ab:7f:3f}
	I0916 11:06:38.464164   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:38.464287   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHPort
	I0916 11:06:38.464459   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHKeyPath
	I0916 11:06:38.464618   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHKeyPath
	I0916 11:06:38.464770   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHUsername
	I0916 11:06:38.464924   36333 main.go:141] libmachine: Using SSH client type: native
	I0916 11:06:38.465074   36333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0916 11:06:38.465083   36333 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 11:06:38.578075   36333 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726484798.554835726
	
	I0916 11:06:38.578110   36333 fix.go:216] guest clock: 1726484798.554835726
	I0916 11:06:38.578122   36333 fix.go:229] Guest: 2024-09-16 11:06:38.554835726 +0000 UTC Remote: 2024-09-16 11:06:38.461564512 +0000 UTC m=+84.272513037 (delta=93.271214ms)
	I0916 11:06:38.578147   36333 fix.go:200] guest clock delta is within tolerance: 93.271214ms
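The delta reported by fix.go is simply the guest clock sample minus the host-side reference recorded in the same step, i.e. 1726484798.554835726 − 1726484798.461564512 ≈ 0.0933 s, matching the 93.271214ms figure above:

    echo '1726484798.554835726 - 1726484798.461564512' | bc   # .093271214 s ≈ 93.27 ms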
	I0916 11:06:38.578155   36333 start.go:83] releasing machines lock for "multinode-736061-m02", held for 24.28076935s
	I0916 11:06:38.578186   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .DriverName
	I0916 11:06:38.578431   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetIP
	I0916 11:06:38.580628   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:38.580912   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:7f:3f", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:06:28 +0000 UTC Type:0 Mac:52:54:00:ab:7f:3f Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:multinode-736061-m02 Clientid:01:52:54:00:ab:7f:3f}
	I0916 11:06:38.580938   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:38.583166   36333 out.go:177] * Found network options:
	I0916 11:06:38.584510   36333 out.go:177]   - NO_PROXY=192.168.39.32
	W0916 11:06:38.585730   36333 proxy.go:119] fail to check proxy env: Error ip not in block
	I0916 11:06:38.585774   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .DriverName
	I0916 11:06:38.586207   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .DriverName
	I0916 11:06:38.586373   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .DriverName
	I0916 11:06:38.586488   36333 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 11:06:38.586529   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHHostname
	W0916 11:06:38.586555   36333 proxy.go:119] fail to check proxy env: Error ip not in block
	I0916 11:06:38.586627   36333 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 11:06:38.586659   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHHostname
	I0916 11:06:38.589111   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:38.589441   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:38.589464   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:7f:3f", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:06:28 +0000 UTC Type:0 Mac:52:54:00:ab:7f:3f Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:multinode-736061-m02 Clientid:01:52:54:00:ab:7f:3f}
	I0916 11:06:38.589478   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:38.589653   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHPort
	I0916 11:06:38.589828   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHKeyPath
	I0916 11:06:38.589920   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:7f:3f", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:06:28 +0000 UTC Type:0 Mac:52:54:00:ab:7f:3f Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:multinode-736061-m02 Clientid:01:52:54:00:ab:7f:3f}
	I0916 11:06:38.589945   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:38.589969   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHUsername
	I0916 11:06:38.590114   36333 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061-m02/id_rsa Username:docker}
	I0916 11:06:38.590137   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHPort
	I0916 11:06:38.590297   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHKeyPath
	I0916 11:06:38.590453   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHUsername
	I0916 11:06:38.590573   36333 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061-m02/id_rsa Username:docker}
	I0916 11:06:38.833457   36333 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 11:06:38.833460   36333 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0916 11:06:38.840018   36333 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0916 11:06:38.840068   36333 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 11:06:38.840119   36333 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 11:06:38.857271   36333 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0916 11:06:38.857340   36333 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0916 11:06:38.857352   36333 start.go:495] detecting cgroup driver to use...
	I0916 11:06:38.857422   36333 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 11:06:38.874145   36333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 11:06:38.889311   36333 docker.go:217] disabling cri-docker service (if available) ...
	I0916 11:06:38.889384   36333 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 11:06:38.904072   36333 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 11:06:38.918465   36333 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 11:06:38.939615   36333 command_runner.go:130] ! Removed "/etc/systemd/system/sockets.target.wants/cri-docker.socket".
	I0916 11:06:39.039841   36333 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 11:06:39.055232   36333 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0916 11:06:39.204329   36333 docker.go:233] disabling docker service ...
	I0916 11:06:39.204407   36333 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 11:06:39.219106   36333 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 11:06:39.231775   36333 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I0916 11:06:39.232015   36333 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 11:06:39.246736   36333 command_runner.go:130] ! Removed "/etc/systemd/system/sockets.target.wants/docker.socket".
	I0916 11:06:39.352695   36333 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 11:06:39.366724   36333 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I0916 11:06:39.367009   36333 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0916 11:06:39.477374   36333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 11:06:39.491313   36333 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 11:06:39.509431   36333 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
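With /etc/crictl.yaml pointing runtime-endpoint at the CRI-O socket as written above, crictl on the node talks to CRI-O without any extra flags, which is handy when debugging a failed test VM:

    sudo crictl info     # runtime status reported by CRI-O
    sudo crictl ps -a    # all containers CRI-O knows about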
	I0916 11:06:39.509664   36333 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 11:06:39.509720   36333 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:06:39.519949   36333 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 11:06:39.520006   36333 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:06:39.530312   36333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:06:39.540682   36333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:06:39.551053   36333 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 11:06:39.561350   36333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:06:39.571523   36333 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:06:39.588521   36333 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
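After the sed edits above, the relevant lines of /etc/crio/crio.conf.d/02-crio.conf should read roughly as sketched below (reconstructed from the commands; the file on the ISO contains other keys as well):

    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",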
	I0916 11:06:39.598451   36333 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 11:06:39.607608   36333 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0916 11:06:39.607821   36333 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0916 11:06:39.607895   36333 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0916 11:06:39.620469   36333 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
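The sysctl probe above failed only because br_netfilter was not loaded yet; after the modprobe and the echo into ip_forward, the usual kubeadm networking preconditions can be confirmed on the node with:

    lsmod | grep br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables   # typically 1 once the module is loaded
    cat /proc/sys/net/ipv4/ip_forward           # 1 after the echo above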
	I0916 11:06:39.630421   36333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:06:39.757829   36333 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 11:06:39.848762   36333 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 11:06:39.848837   36333 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 11:06:39.853344   36333 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0916 11:06:39.853378   36333 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0916 11:06:39.853387   36333 command_runner.go:130] > Device: 0,22	Inode: 692         Links: 1
	I0916 11:06:39.853397   36333 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 11:06:39.853406   36333 command_runner.go:130] > Access: 2024-09-16 11:06:39.819235404 +0000
	I0916 11:06:39.853417   36333 command_runner.go:130] > Modify: 2024-09-16 11:06:39.819235404 +0000
	I0916 11:06:39.853425   36333 command_runner.go:130] > Change: 2024-09-16 11:06:39.819235404 +0000
	I0916 11:06:39.853435   36333 command_runner.go:130] >  Birth: -
	I0916 11:06:39.853468   36333 start.go:563] Will wait 60s for crictl version
	I0916 11:06:39.853509   36333 ssh_runner.go:195] Run: which crictl
	I0916 11:06:39.857444   36333 command_runner.go:130] > /usr/bin/crictl
	I0916 11:06:39.857673   36333 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 11:06:39.894954   36333 command_runner.go:130] > Version:  0.1.0
	I0916 11:06:39.894981   36333 command_runner.go:130] > RuntimeName:  cri-o
	I0916 11:06:39.894988   36333 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0916 11:06:39.894995   36333 command_runner.go:130] > RuntimeApiVersion:  v1
	I0916 11:06:39.895019   36333 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0916 11:06:39.895097   36333 ssh_runner.go:195] Run: crio --version
	I0916 11:06:39.923765   36333 command_runner.go:130] > crio version 1.29.1
	I0916 11:06:39.923790   36333 command_runner.go:130] > Version:        1.29.1
	I0916 11:06:39.923800   36333 command_runner.go:130] > GitCommit:      unknown
	I0916 11:06:39.923806   36333 command_runner.go:130] > GitCommitDate:  unknown
	I0916 11:06:39.923814   36333 command_runner.go:130] > GitTreeState:   clean
	I0916 11:06:39.923824   36333 command_runner.go:130] > BuildDate:      2024-09-15T21:21:56Z
	I0916 11:06:39.923830   36333 command_runner.go:130] > GoVersion:      go1.21.6
	I0916 11:06:39.923837   36333 command_runner.go:130] > Compiler:       gc
	I0916 11:06:39.923846   36333 command_runner.go:130] > Platform:       linux/amd64
	I0916 11:06:39.923853   36333 command_runner.go:130] > Linkmode:       dynamic
	I0916 11:06:39.923861   36333 command_runner.go:130] > BuildTags:      
	I0916 11:06:39.923871   36333 command_runner.go:130] >   containers_image_ostree_stub
	I0916 11:06:39.923885   36333 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0916 11:06:39.923920   36333 command_runner.go:130] >   btrfs_noversion
	I0916 11:06:39.923931   36333 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0916 11:06:39.923939   36333 command_runner.go:130] >   libdm_no_deferred_remove
	I0916 11:06:39.923945   36333 command_runner.go:130] >   seccomp
	I0916 11:06:39.923955   36333 command_runner.go:130] > LDFlags:          unknown
	I0916 11:06:39.923962   36333 command_runner.go:130] > SeccompEnabled:   true
	I0916 11:06:39.923969   36333 command_runner.go:130] > AppArmorEnabled:  false
	I0916 11:06:39.924983   36333 ssh_runner.go:195] Run: crio --version
	I0916 11:06:39.952254   36333 command_runner.go:130] > crio version 1.29.1
	I0916 11:06:39.952273   36333 command_runner.go:130] > Version:        1.29.1
	I0916 11:06:39.952278   36333 command_runner.go:130] > GitCommit:      unknown
	I0916 11:06:39.952282   36333 command_runner.go:130] > GitCommitDate:  unknown
	I0916 11:06:39.952286   36333 command_runner.go:130] > GitTreeState:   clean
	I0916 11:06:39.952292   36333 command_runner.go:130] > BuildDate:      2024-09-15T21:21:56Z
	I0916 11:06:39.952296   36333 command_runner.go:130] > GoVersion:      go1.21.6
	I0916 11:06:39.952299   36333 command_runner.go:130] > Compiler:       gc
	I0916 11:06:39.952303   36333 command_runner.go:130] > Platform:       linux/amd64
	I0916 11:06:39.952307   36333 command_runner.go:130] > Linkmode:       dynamic
	I0916 11:06:39.952312   36333 command_runner.go:130] > BuildTags:      
	I0916 11:06:39.952316   36333 command_runner.go:130] >   containers_image_ostree_stub
	I0916 11:06:39.952320   36333 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0916 11:06:39.952323   36333 command_runner.go:130] >   btrfs_noversion
	I0916 11:06:39.952328   36333 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0916 11:06:39.952332   36333 command_runner.go:130] >   libdm_no_deferred_remove
	I0916 11:06:39.952335   36333 command_runner.go:130] >   seccomp
	I0916 11:06:39.952340   36333 command_runner.go:130] > LDFlags:          unknown
	I0916 11:06:39.952347   36333 command_runner.go:130] > SeccompEnabled:   true
	I0916 11:06:39.952351   36333 command_runner.go:130] > AppArmorEnabled:  false
	I0916 11:06:39.954973   36333 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0916 11:06:39.956239   36333 out.go:177]   - env NO_PROXY=192.168.39.32
	I0916 11:06:39.957336   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetIP
	I0916 11:06:39.959778   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:39.960172   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:7f:3f", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:06:28 +0000 UTC Type:0 Mac:52:54:00:ab:7f:3f Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:multinode-736061-m02 Clientid:01:52:54:00:ab:7f:3f}
	I0916 11:06:39.960201   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:39.960447   36333 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0916 11:06:39.964564   36333 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 11:06:39.976775   36333 mustload.go:65] Loading cluster: multinode-736061
	I0916 11:06:39.976995   36333 config.go:182] Loaded profile config "multinode-736061": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:06:39.977326   36333 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 11:06:39.977370   36333 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 11:06:39.991897   36333 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42817
	I0916 11:06:39.992285   36333 main.go:141] libmachine: () Calling .GetVersion
	I0916 11:06:39.992706   36333 main.go:141] libmachine: Using API Version  1
	I0916 11:06:39.992727   36333 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 11:06:39.993009   36333 main.go:141] libmachine: () Calling .GetMachineName
	I0916 11:06:39.993201   36333 main.go:141] libmachine: (multinode-736061) Calling .GetState
	I0916 11:06:39.994739   36333 host.go:66] Checking if "multinode-736061" exists ...
	I0916 11:06:39.995067   36333 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 11:06:39.995107   36333 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 11:06:40.009297   36333 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44765
	I0916 11:06:40.009718   36333 main.go:141] libmachine: () Calling .GetVersion
	I0916 11:06:40.010162   36333 main.go:141] libmachine: Using API Version  1
	I0916 11:06:40.010181   36333 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 11:06:40.010475   36333 main.go:141] libmachine: () Calling .GetMachineName
	I0916 11:06:40.010666   36333 main.go:141] libmachine: (multinode-736061) Calling .DriverName
	I0916 11:06:40.010796   36333 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061 for IP: 192.168.39.215
	I0916 11:06:40.010808   36333 certs.go:194] generating shared ca certs ...
	I0916 11:06:40.010827   36333 certs.go:226] acquiring lock for ca certs: {Name:mkc5eb18b90c7d501da61b378999fd65b785fd5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:06:40.010960   36333 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key
	I0916 11:06:40.011012   36333 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key
	I0916 11:06:40.011029   36333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 11:06:40.011051   36333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 11:06:40.011069   36333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 11:06:40.011088   36333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 11:06:40.011150   36333 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem (1338 bytes)
	W0916 11:06:40.011188   36333 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203_empty.pem, impossibly tiny 0 bytes
	I0916 11:06:40.011201   36333 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 11:06:40.011234   36333 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem (1082 bytes)
	I0916 11:06:40.011266   36333 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem (1123 bytes)
	I0916 11:06:40.011300   36333 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem (1679 bytes)
	I0916 11:06:40.011355   36333 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem (1708 bytes)
	I0916 11:06:40.011395   36333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem -> /usr/share/ca-certificates/11203.pem
	I0916 11:06:40.011414   36333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem -> /usr/share/ca-certificates/112032.pem
	I0916 11:06:40.011433   36333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:06:40.011460   36333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 11:06:40.036948   36333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 11:06:40.064224   36333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 11:06:40.087718   36333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 11:06:40.112736   36333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem --> /usr/share/ca-certificates/11203.pem (1338 bytes)
	I0916 11:06:40.136429   36333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem --> /usr/share/ca-certificates/112032.pem (1708 bytes)
	I0916 11:06:40.160538   36333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 11:06:40.184215   36333 ssh_runner.go:195] Run: openssl version
	I0916 11:06:40.190212   36333 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0916 11:06:40.190294   36333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 11:06:40.201031   36333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:06:40.205421   36333 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 16 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:06:40.205541   36333 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:06:40.205595   36333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:06:40.211146   36333 command_runner.go:130] > b5213941
	I0916 11:06:40.211346   36333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 11:06:40.222442   36333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11203.pem && ln -fs /usr/share/ca-certificates/11203.pem /etc/ssl/certs/11203.pem"
	I0916 11:06:40.233468   36333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11203.pem
	I0916 11:06:40.237653   36333 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11203.pem
	I0916 11:06:40.237872   36333 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11203.pem
	I0916 11:06:40.237943   36333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11203.pem
	I0916 11:06:40.243562   36333 command_runner.go:130] > 51391683
	I0916 11:06:40.243642   36333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11203.pem /etc/ssl/certs/51391683.0"
	I0916 11:06:40.254028   36333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112032.pem && ln -fs /usr/share/ca-certificates/112032.pem /etc/ssl/certs/112032.pem"
	I0916 11:06:40.264085   36333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112032.pem
	I0916 11:06:40.268310   36333 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112032.pem
	I0916 11:06:40.268436   36333 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112032.pem
	I0916 11:06:40.268485   36333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112032.pem
	I0916 11:06:40.274044   36333 command_runner.go:130] > 3ec20f2e
	I0916 11:06:40.274103   36333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112032.pem /etc/ssl/certs/3ec20f2e.0"
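The openssl x509 -hash calls above compute the subject hash that OpenSSL uses to look up CA files in /etc/ssl/certs; minikube then links each PEM under <hash>.0 so the node's system trust store resolves it. The same step by hand for the minikube CA (hash value taken from the log):

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"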
	I0916 11:06:40.284368   36333 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 11:06:40.288308   36333 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 11:06:40.288452   36333 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 11:06:40.288496   36333 kubeadm.go:934] updating node {m02 192.168.39.215 8443 v1.31.1 crio false true} ...
	I0916 11:06:40.288609   36333 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-736061-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.215
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-736061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 11:06:40.288669   36333 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 11:06:40.297456   36333 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	I0916 11:06:40.297575   36333 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0916 11:06:40.297646   36333 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0916 11:06:40.307147   36333 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0916 11:06:40.307166   36333 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0916 11:06:40.307178   36333 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0916 11:06:40.307183   36333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0916 11:06:40.307195   36333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0916 11:06:40.307196   36333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 11:06:40.307242   36333 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0916 11:06:40.307255   36333 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0916 11:06:40.323894   36333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0916 11:06:40.323930   36333 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0916 11:06:40.323989   36333 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0916 11:06:40.324000   36333 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0916 11:06:40.324017   36333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0916 11:06:40.324025   36333 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0916 11:06:40.324077   36333 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0916 11:06:40.324099   36333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0916 11:06:40.351979   36333 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0916 11:06:40.359991   36333 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0916 11:06:40.360045   36333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0916 11:06:41.140271   36333 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0916 11:06:41.150182   36333 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I0916 11:06:41.166961   36333 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
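The two scp-from-memory steps above presumably write the kubelet drop-in and unit whose contents were printed by kubeadm.go:946 earlier (an inference from the paths and sizes, not stated explicitly in the log). They can be inspected on the node with:

    systemctl cat kubelet
    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf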
	I0916 11:06:41.185279   36333 ssh_runner.go:195] Run: grep 192.168.39.32	control-plane.minikube.internal$ /etc/hosts
	I0916 11:06:41.189266   36333 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.32	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 11:06:41.202395   36333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:06:41.334758   36333 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:06:41.353018   36333 host.go:66] Checking if "multinode-736061" exists ...
	I0916 11:06:41.353407   36333 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 11:06:41.353465   36333 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 11:06:41.368533   36333 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37577
	I0916 11:06:41.368969   36333 main.go:141] libmachine: () Calling .GetVersion
	I0916 11:06:41.369438   36333 main.go:141] libmachine: Using API Version  1
	I0916 11:06:41.369463   36333 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 11:06:41.369762   36333 main.go:141] libmachine: () Calling .GetMachineName
	I0916 11:06:41.369969   36333 main.go:141] libmachine: (multinode-736061) Calling .DriverName
	I0916 11:06:41.370125   36333 start.go:317] joinCluster: &{Name:multinode-736061 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1
ClusterName:multinode-736061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.32 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.215 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiratio
n:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:06:41.370241   36333 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0916 11:06:41.370266   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHHostname
	I0916 11:06:41.373080   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:06:41.373539   36333 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:06:41.373562   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:06:41.373699   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHPort
	I0916 11:06:41.373850   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:06:41.373982   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHUsername
	I0916 11:06:41.374133   36333 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061/id_rsa Username:docker}
	I0916 11:06:41.524071   36333 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token ktop33.r4upqd8kmtc2z9di --discovery-token-ca-cert-hash sha256:18e2e9ba1c06caae6264fad234e64aee68efc10986a3ab0ed9e448768962daf7 
	I0916 11:06:41.524259   36333 start.go:343] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.215 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0916 11:06:41.524306   36333 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ktop33.r4upqd8kmtc2z9di --discovery-token-ca-cert-hash sha256:18e2e9ba1c06caae6264fad234e64aee68efc10986a3ab0ed9e448768962daf7 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=multinode-736061-m02"
	I0916 11:06:41.573303   36333 command_runner.go:130] > [preflight] Running pre-flight checks
	I0916 11:06:41.675528   36333 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0916 11:06:41.675557   36333 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0916 11:06:41.719707   36333 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 11:06:41.719740   36333 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 11:06:41.719746   36333 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0916 11:06:41.857233   36333 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 11:06:42.358605   36333 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 501.857577ms
	I0916 11:06:42.358632   36333 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0916 11:06:42.873485   36333 command_runner.go:130] > This node has joined the cluster:
	I0916 11:06:42.873512   36333 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0916 11:06:42.873522   36333 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0916 11:06:42.873530   36333 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0916 11:06:42.875319   36333 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 11:06:42.875357   36333 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ktop33.r4upqd8kmtc2z9di --discovery-token-ca-cert-hash sha256:18e2e9ba1c06caae6264fad234e64aee68efc10986a3ab0ed9e448768962daf7 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=multinode-736061-m02": (1.351026287s)
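The join performed above is a standard kubeadm worker join driven over SSH. Done by hand it amounts to the following, with the token and CA-cert hash redacted and the extra flags copied from the command in the log:

    # On the control-plane node:
    sudo env PATH=/var/lib/minikube/binaries/v1.31.1:$PATH \
        kubeadm token create --print-join-command --ttl=0
    # On the new worker, run the printed command plus the flags minikube adds:
    sudo kubeadm join control-plane.minikube.internal:8443 \
        --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
        --cri-socket unix:///var/run/crio/crio.sock \
        --node-name=multinode-736061-m02 \
        --ignore-preflight-errors=all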
	I0916 11:06:42.875382   36333 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0916 11:06:43.009989   36333 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0916 11:06:43.134073   36333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-736061-m02 minikube.k8s.io/updated_at=2024_09_16T11_06_43_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=multinode-736061 minikube.k8s.io/primary=false
	I0916 11:06:43.231128   36333 command_runner.go:130] > node/multinode-736061-m02 labeled
	I0916 11:06:43.233155   36333 start.go:319] duration metric: took 1.863029493s to joinCluster
	I0916 11:06:43.233210   36333 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.215 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0916 11:06:43.233480   36333 config.go:182] Loaded profile config "multinode-736061": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:06:43.235299   36333 out.go:177] * Verifying Kubernetes components...
	I0916 11:06:43.236419   36333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:06:43.364788   36333 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:06:43.380967   36333 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3851/kubeconfig
	I0916 11:06:43.381302   36333 kapi.go:59] client config for multinode-736061: &rest.Config{Host:"https://192.168.39.32:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 11:06:43.381632   36333 node_ready.go:35] waiting up to 6m0s for node "multinode-736061-m02" to be "Ready" ...
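The wait loop that follows polls GET /api/v1/nodes/multinode-736061-m02 directly through the client round tripper. The equivalent readiness check from a shell, assuming the default kubeconfig context minikube creates for this profile, is simply:

    kubectl --context multinode-736061 wait --for=condition=Ready \
        node/multinode-736061-m02 --timeout=6m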
	I0916 11:06:43.381707   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:06:43.381718   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:43.381728   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:43.381734   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:43.383721   36333 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:06:43.383743   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:43.383750   36333 round_trippers.go:580]     Content-Length: 4076
	I0916 11:06:43.383754   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:43 GMT
	I0916 11:06:43.383757   36333 round_trippers.go:580]     Audit-Id: a10c208c-b7a1-4fde-8f1f-80e81dbc5bd7
	I0916 11:06:43.383762   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:43.383767   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:43.383773   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:43.383781   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:43.383862   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"492","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3052 chars]
	I0916 11:06:43.881816   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:06:43.881848   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:43.881859   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:43.881864   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:43.884298   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:43.884315   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:43.884321   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:43.884325   36333 round_trippers.go:580]     Content-Length: 4076
	I0916 11:06:43.884329   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:43 GMT
	I0916 11:06:43.884333   36333 round_trippers.go:580]     Audit-Id: 1a59a83b-12f1-49a2-b3ee-6f00e880e1a9
	I0916 11:06:43.884336   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:43.884338   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:43.884341   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:43.884470   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"492","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3052 chars]
	I0916 11:06:44.382535   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:06:44.382558   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:44.382566   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:44.382571   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:44.385111   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:44.385148   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:44.385158   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:44.385164   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:44.385169   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:44.385174   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:44.385178   36333 round_trippers.go:580]     Content-Length: 4076
	I0916 11:06:44.385183   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:44 GMT
	I0916 11:06:44.385189   36333 round_trippers.go:580]     Audit-Id: 094d8e5e-fdfe-4a4d-95ee-7f0b3e416a1f
	I0916 11:06:44.385282   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"492","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3052 chars]
	I0916 11:06:44.882432   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:06:44.882463   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:44.882474   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:44.882492   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:44.885416   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:44.885446   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:44.885457   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:44 GMT
	I0916 11:06:44.885463   36333 round_trippers.go:580]     Audit-Id: 9f093c50-aaf7-40b0-867f-b2994fa44369
	I0916 11:06:44.885467   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:44.885472   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:44.885476   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:44.885484   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:44.885489   36333 round_trippers.go:580]     Content-Length: 4076
	I0916 11:06:44.885588   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"492","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3052 chars]
	I0916 11:06:45.382050   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:06:45.382074   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:45.382083   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:45.382088   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:45.384871   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:45.384897   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:45.384903   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:45 GMT
	I0916 11:06:45.384907   36333 round_trippers.go:580]     Audit-Id: 86e154b2-0210-4ad5-a407-bd78a7bc86cd
	I0916 11:06:45.384910   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:45.384912   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:45.384915   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:45.384918   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:45.384922   36333 round_trippers.go:580]     Content-Length: 4076
	I0916 11:06:45.385050   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"492","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3052 chars]
	I0916 11:06:45.385320   36333 node_ready.go:53] node "multinode-736061-m02" has status "Ready":"False"
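	The repeated GET requests above are the node_ready wait loop polling the Node object about every 500ms (for up to the 6m0s stated at the top of this block) until its Ready condition reports True. Below is a minimal, hedged sketch of that kind of readiness poll using client-go; it is not minikube's own implementation, and the kubeconfig path is a placeholder.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Build a client from a kubeconfig file (placeholder path, not from the report).
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		nodeName := "multinode-736061-m02"

		// Poll every 500ms for up to 6 minutes, mirroring the cadence and timeout
		// visible in the log. A production loop would likely tolerate transient
		// API errors instead of aborting on the first one.
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := clientset.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
				if err != nil {
					return false, err
				}
				// The log's `has status "Ready":"False"` lines correspond to this check.
				for _, cond := range node.Status.Conditions {
					if cond.Type == corev1.NodeReady {
						return cond.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
		if err != nil {
			fmt.Printf("node %q never became Ready: %v\n", nodeName, err)
			return
		}
		fmt.Printf("node %q is Ready\n", nodeName)
	}

	The remainder of this block is the same poll repeating until the node's Ready condition changes.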
	I0916 11:06:45.882672   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:06:45.882696   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:45.882703   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:45.882708   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:45.884907   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:45.884933   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:45.884942   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:45.884949   36333 round_trippers.go:580]     Content-Length: 4076
	I0916 11:06:45.884953   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:45 GMT
	I0916 11:06:45.884960   36333 round_trippers.go:580]     Audit-Id: d156b24c-8300-48ed-8965-e174644374ed
	I0916 11:06:45.884964   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:45.884969   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:45.884974   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:45.885065   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"492","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3052 chars]
	I0916 11:06:46.381849   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:06:46.381878   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:46.381890   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:46.381899   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:46.384768   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:46.384797   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:46.384808   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:46.384816   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:46.384824   36333 round_trippers.go:580]     Content-Length: 4076
	I0916 11:06:46.384830   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:46 GMT
	I0916 11:06:46.384836   36333 round_trippers.go:580]     Audit-Id: 7a9e3625-e165-4256-8512-218a106f5e3a
	I0916 11:06:46.384845   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:46.384851   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:46.384899   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"492","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3052 chars]
	I0916 11:06:46.882413   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:06:46.882440   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:46.882451   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:46.882456   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:46.885312   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:46.885333   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:46.885343   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:46.885349   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:46.885354   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:46.885359   36333 round_trippers.go:580]     Content-Length: 4076
	I0916 11:06:46.885363   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:46 GMT
	I0916 11:06:46.885367   36333 round_trippers.go:580]     Audit-Id: 28f7596c-dfd2-4619-899e-d678c084e485
	I0916 11:06:46.885372   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:46.885459   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"492","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3052 chars]
	I0916 11:06:47.381854   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:06:47.381879   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:47.381895   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:47.381904   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:47.384790   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:47.384817   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:47.384826   36333 round_trippers.go:580]     Audit-Id: c345c256-16eb-407d-9254-63e517bdedce
	I0916 11:06:47.384832   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:47.384836   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:47.384840   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:47.384845   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:47.384849   36333 round_trippers.go:580]     Content-Length: 4076
	I0916 11:06:47.384854   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:47 GMT
	I0916 11:06:47.384956   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"492","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3052 chars]
	I0916 11:06:47.882462   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:06:47.882485   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:47.882502   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:47.882509   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:47.885713   36333 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 11:06:47.885741   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:47.885751   36333 round_trippers.go:580]     Audit-Id: f510e61e-040d-4f8f-b503-56627e582690
	I0916 11:06:47.885758   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:47.885764   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:47.885775   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:47.885782   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:47.885786   36333 round_trippers.go:580]     Content-Length: 4076
	I0916 11:06:47.885791   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:47 GMT
	I0916 11:06:47.885886   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"492","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3052 chars]
	I0916 11:06:47.886227   36333 node_ready.go:53] node "multinode-736061-m02" has status "Ready":"False"
	I0916 11:06:48.382422   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:06:48.382444   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:48.382452   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:48.382457   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:48.384713   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:48.384732   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:48.384740   36333 round_trippers.go:580]     Audit-Id: 2afd3533-ebe5-4c1a-b3cb-2ea790e62521
	I0916 11:06:48.384746   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:48.384752   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:48.384757   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:48.384760   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:48.384765   36333 round_trippers.go:580]     Content-Length: 4076
	I0916 11:06:48.384770   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:48 GMT
	I0916 11:06:48.384881   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"492","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3052 chars]
	I0916 11:06:48.882354   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:06:48.882373   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:48.882381   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:48.882386   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:48.884914   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:48.884946   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:48.884957   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:48.884962   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:48.884969   36333 round_trippers.go:580]     Content-Length: 4076
	I0916 11:06:48.884974   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:48 GMT
	I0916 11:06:48.884979   36333 round_trippers.go:580]     Audit-Id: c43f9f40-ee7e-42ca-a8ae-8022970ad57c
	I0916 11:06:48.884986   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:48.884990   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:48.885089   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"492","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3052 chars]
	I0916 11:06:49.382516   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:06:49.382541   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:49.382550   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:49.382554   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:49.385228   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:49.385247   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:49.385252   36333 round_trippers.go:580]     Audit-Id: 0aa15424-b102-44a3-8b56-d340a4fb6238
	I0916 11:06:49.385256   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:49.385261   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:49.385265   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:49.385269   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:49.385274   36333 round_trippers.go:580]     Content-Length: 4076
	I0916 11:06:49.385278   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:49 GMT
	I0916 11:06:49.385364   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"492","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3052 chars]
	I0916 11:06:49.881958   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:06:49.881983   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:49.881991   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:49.881994   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:49.884381   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:49.884404   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:49.884413   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:49.884417   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:49.884421   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:49.884426   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:49.884430   36333 round_trippers.go:580]     Content-Length: 4076
	I0916 11:06:49.884434   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:49 GMT
	I0916 11:06:49.884442   36333 round_trippers.go:580]     Audit-Id: 7939f888-97da-4ba4-a037-cfb04412c20c
	I0916 11:06:49.884479   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"492","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3052 chars]
	I0916 11:06:50.382331   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:06:50.382356   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:50.382366   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:50.382370   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:50.384609   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:50.384635   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:50.384642   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:50 GMT
	I0916 11:06:50.384645   36333 round_trippers.go:580]     Audit-Id: 712bef41-6dff-4407-8056-477afe713b8c
	I0916 11:06:50.384648   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:50.384650   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:50.384653   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:50.384657   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:50.384661   36333 round_trippers.go:580]     Content-Length: 4076
	I0916 11:06:50.384753   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"492","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3052 chars]
	I0916 11:06:50.385064   36333 node_ready.go:53] node "multinode-736061-m02" has status "Ready":"False"
	I0916 11:06:50.882316   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:06:50.882340   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:50.882350   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:50.882357   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:50.885443   36333 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 11:06:50.885462   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:50.885471   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:50.885477   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:50.885481   36333 round_trippers.go:580]     Content-Length: 4076
	I0916 11:06:50.885485   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:50 GMT
	I0916 11:06:50.885489   36333 round_trippers.go:580]     Audit-Id: 8fd12666-d052-4829-8511-a6204426d5a4
	I0916 11:06:50.885494   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:50.885498   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:50.885582   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"492","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3052 chars]
	I0916 11:06:51.382753   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:06:51.382778   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:51.382788   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:51.382800   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:51.385381   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:51.385409   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:51.385419   36333 round_trippers.go:580]     Content-Length: 4076
	I0916 11:06:51.385424   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:51 GMT
	I0916 11:06:51.385428   36333 round_trippers.go:580]     Audit-Id: 70599105-80ef-4c68-8819-cfb396182ddc
	I0916 11:06:51.385433   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:51.385437   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:51.385441   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:51.385446   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:51.385529   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"492","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3052 chars]
	I0916 11:06:51.882064   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:06:51.882089   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:51.882097   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:51.882102   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:51.884362   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:51.884381   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:51.884390   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:51.884401   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:51.884407   36333 round_trippers.go:580]     Content-Length: 4076
	I0916 11:06:51.884412   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:51 GMT
	I0916 11:06:51.884422   36333 round_trippers.go:580]     Audit-Id: cb600bbb-d6ef-4d33-8d75-2e065937d899
	I0916 11:06:51.884427   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:51.884432   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:51.884502   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"492","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3052 chars]
	I0916 11:06:52.382075   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:06:52.382100   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:52.382109   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:52.382115   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:52.384382   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:52.384398   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:52.384405   36333 round_trippers.go:580]     Audit-Id: 020d013d-bcf2-4075-bc7d-696fbc115986
	I0916 11:06:52.384409   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:52.384411   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:52.384414   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:52.384417   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:52.384420   36333 round_trippers.go:580]     Content-Length: 4076
	I0916 11:06:52.384423   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:52 GMT
	I0916 11:06:52.384493   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"492","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3052 chars]
	I0916 11:06:52.882074   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:06:52.882100   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:52.882107   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:52.882111   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:52.884365   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:52.884382   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:52.884389   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:52.884393   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:52.884396   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:52 GMT
	I0916 11:06:52.884399   36333 round_trippers.go:580]     Audit-Id: 445f526b-a180-4591-8a67-3dd73e0ade74
	I0916 11:06:52.884402   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:52.884405   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:52.885040   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"516","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3444 chars]
	I0916 11:06:52.885302   36333 node_ready.go:53] node "multinode-736061-m02" has status "Ready":"False"
	I0916 11:06:53.382865   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:06:53.382895   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:53.382903   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:53.382908   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:53.385467   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:53.385485   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:53.385491   36333 round_trippers.go:580]     Audit-Id: 159c7c33-38df-42fc-b405-77ea15053fbd
	I0916 11:06:53.385496   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:53.385499   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:53.385502   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:53.385506   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:53.385512   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:53 GMT
	I0916 11:06:53.385913   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"516","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3444 chars]
	I0916 11:06:53.882376   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:06:53.882402   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:53.882410   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:53.882414   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:53.885057   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:53.885079   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:53.885088   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:53.885093   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:53.885099   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:53 GMT
	I0916 11:06:53.885103   36333 round_trippers.go:580]     Audit-Id: 9cc4d135-b9cf-40e3-873d-3a59e3dfb0b4
	I0916 11:06:53.885106   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:53.885110   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:53.885220   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"516","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3444 chars]
	I0916 11:06:54.382165   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:06:54.382187   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:54.382195   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:54.382199   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:54.384622   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:54.384643   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:54.384652   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:54.384656   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:54.384660   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:54.384663   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:54 GMT
	I0916 11:06:54.384667   36333 round_trippers.go:580]     Audit-Id: e6ca69c6-61bc-403a-976f-ab39a0472feb
	I0916 11:06:54.384671   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:54.385072   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"516","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3444 chars]
	I0916 11:06:54.882800   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:06:54.882826   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:54.882833   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:54.882837   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:54.885722   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:54.885743   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:54.885750   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:54.885755   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:54.885758   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:54.885763   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:54 GMT
	I0916 11:06:54.885766   36333 round_trippers.go:580]     Audit-Id: ae5070ec-526e-4a83-8a43-764e2c562a48
	I0916 11:06:54.885769   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:54.886443   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"516","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3444 chars]
	I0916 11:06:54.886687   36333 node_ready.go:53] node "multinode-736061-m02" has status "Ready":"False"
	I0916 11:06:55.382026   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:06:55.382048   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:55.382060   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:55.382066   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:55.384356   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:55.384373   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:55.384380   36333 round_trippers.go:580]     Audit-Id: cf2b89b2-9267-4b01-ac09-56320e98bc39
	I0916 11:06:55.384382   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:55.384385   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:55.384389   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:55.384392   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:55.384395   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:55 GMT
	I0916 11:06:55.384820   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"516","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3444 chars]
	I0916 11:06:55.882567   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:06:55.882598   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:55.882609   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:55.882614   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:55.884990   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:55.885008   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:55.885014   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:55.885020   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:55.885024   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:55 GMT
	I0916 11:06:55.885027   36333 round_trippers.go:580]     Audit-Id: e3c306d0-9a3a-41df-9b9f-5860cf843392
	I0916 11:06:55.885030   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:55.885033   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:55.885494   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"516","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3444 chars]
	I0916 11:06:56.382660   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:06:56.382688   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:56.382699   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:56.382704   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:56.385051   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:56.385068   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:56.385073   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:56.385077   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:56.385080   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:56 GMT
	I0916 11:06:56.385083   36333 round_trippers.go:580]     Audit-Id: 9021ec18-5d40-4f1b-b395-a964b7aea360
	I0916 11:06:56.385085   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:56.385088   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:56.385274   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"516","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3444 chars]
	I0916 11:06:56.881866   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:06:56.881901   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:56.881909   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:56.881913   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:56.884689   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:56.884711   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:56.884720   36333 round_trippers.go:580]     Audit-Id: da8cdbfb-d204-4df9-9862-d23b33825201
	I0916 11:06:56.884728   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:56.884734   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:56.884741   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:56.884743   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:56.884746   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:56 GMT
	I0916 11:06:56.885025   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"516","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3444 chars]
	I0916 11:06:57.382757   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:06:57.382786   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:57.382795   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:57.382800   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:57.385312   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:57.385331   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:57.385338   36333 round_trippers.go:580]     Audit-Id: 66498600-b33f-4331-ad10-139d2901440e
	I0916 11:06:57.385342   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:57.385346   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:57.385348   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:57.385351   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:57.385356   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:57 GMT
	I0916 11:06:57.385882   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"516","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3444 chars]
	I0916 11:06:57.386135   36333 node_ready.go:53] node "multinode-736061-m02" has status "Ready":"False"
	I0916 11:06:57.882613   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:06:57.882635   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:57.882643   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:57.882648   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:57.885194   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:57.885220   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:57.885230   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:57.885236   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:57 GMT
	I0916 11:06:57.885241   36333 round_trippers.go:580]     Audit-Id: 4972ebdb-dc0a-4ff9-aa5a-0a15423a4700
	I0916 11:06:57.885245   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:57.885249   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:57.885253   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:57.885499   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"516","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3444 chars]
	I0916 11:06:58.381907   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:06:58.381935   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:58.381945   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:58.381950   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:58.384907   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:58.384928   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:58.384934   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:58 GMT
	I0916 11:06:58.384937   36333 round_trippers.go:580]     Audit-Id: 19f54c6d-f0a0-4989-8399-2a2325100b86
	I0916 11:06:58.384941   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:58.384944   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:58.384948   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:58.384950   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:58.385339   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"516","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3444 chars]
	I0916 11:06:58.882365   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:06:58.882386   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:58.882395   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:58.882401   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:58.884915   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:58.884931   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:58.884936   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:58.884942   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:58.884947   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:58.884961   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:58 GMT
	I0916 11:06:58.884965   36333 round_trippers.go:580]     Audit-Id: 8bcb2b1f-f2f6-40e3-9089-9868e3c135c5
	I0916 11:06:58.884969   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:58.885219   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"516","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3444 chars]
	I0916 11:06:59.382036   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:06:59.382058   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:59.382066   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:59.382069   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:59.384564   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:59.384583   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:59.384589   36333 round_trippers.go:580]     Audit-Id: d2abf775-506e-4891-bbef-b131671b3ef7
	I0916 11:06:59.384594   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:59.384598   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:59.384600   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:59.384606   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:59.384609   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:59 GMT
	I0916 11:06:59.384765   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"516","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3444 chars]
	I0916 11:06:59.882487   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:06:59.882512   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:59.882520   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:59.882525   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:59.885557   36333 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 11:06:59.885578   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:59.885584   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:59 GMT
	I0916 11:06:59.885589   36333 round_trippers.go:580]     Audit-Id: b99755b0-880d-44c4-8e71-b9b3f7058ee8
	I0916 11:06:59.885592   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:59.885594   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:59.885598   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:59.885602   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:59.885896   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"516","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3444 chars]
	I0916 11:06:59.886155   36333 node_ready.go:53] node "multinode-736061-m02" has status "Ready":"False"
	I0916 11:07:00.382398   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:07:00.382422   36333 round_trippers.go:469] Request Headers:
	I0916 11:07:00.382434   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:07:00.382439   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:07:00.384903   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:07:00.384920   36333 round_trippers.go:577] Response Headers:
	I0916 11:07:00.384927   36333 round_trippers.go:580]     Audit-Id: 9c49b042-c266-45ce-82de-a79d303a2328
	I0916 11:07:00.384931   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:07:00.384934   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:07:00.384937   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:07:00.384940   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:07:00.384943   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:07:00 GMT
	I0916 11:07:00.385256   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"516","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3444 chars]
	I0916 11:07:00.881904   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:07:00.881932   36333 round_trippers.go:469] Request Headers:
	I0916 11:07:00.881942   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:07:00.881953   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:07:00.884941   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:07:00.884963   36333 round_trippers.go:577] Response Headers:
	I0916 11:07:00.884970   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:07:00.884973   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:07:00.884976   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:07:00 GMT
	I0916 11:07:00.884979   36333 round_trippers.go:580]     Audit-Id: 48df80dc-5927-47e2-bc47-b1c911c89063
	I0916 11:07:00.884983   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:07:00.884985   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:07:00.885438   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"516","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3444 chars]
	I0916 11:07:01.382265   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:07:01.382288   36333 round_trippers.go:469] Request Headers:
	I0916 11:07:01.382296   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:07:01.382299   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:07:01.384782   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:07:01.384801   36333 round_trippers.go:577] Response Headers:
	I0916 11:07:01.384808   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:07:01.384812   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:07:01.384815   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:07:01.384817   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:07:01 GMT
	I0916 11:07:01.384820   36333 round_trippers.go:580]     Audit-Id: 5f99dcd6-ebc5-4cb0-b401-fc505686a655
	I0916 11:07:01.384822   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:07:01.385004   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"516","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3444 chars]
	I0916 11:07:01.882369   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:07:01.882394   36333 round_trippers.go:469] Request Headers:
	I0916 11:07:01.882402   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:07:01.882406   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:07:01.884939   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:07:01.884962   36333 round_trippers.go:577] Response Headers:
	I0916 11:07:01.884970   36333 round_trippers.go:580]     Audit-Id: 1cfa49ba-8e0f-4597-b973-d2575e71d839
	I0916 11:07:01.884977   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:07:01.884984   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:07:01.884988   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:07:01.884992   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:07:01.885001   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:07:01 GMT
	I0916 11:07:01.885169   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"516","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3444 chars]
	I0916 11:07:02.381818   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:07:02.381845   36333 round_trippers.go:469] Request Headers:
	I0916 11:07:02.381853   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:07:02.381858   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:07:02.384139   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:07:02.384157   36333 round_trippers.go:577] Response Headers:
	I0916 11:07:02.384163   36333 round_trippers.go:580]     Audit-Id: 11912aa8-84f7-4ab4-b0ea-de423df6f5ed
	I0916 11:07:02.384166   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:07:02.384169   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:07:02.384172   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:07:02.384174   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:07:02.384177   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:07:02 GMT
	I0916 11:07:02.384438   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"525","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3261 chars]
	I0916 11:07:02.384724   36333 node_ready.go:49] node "multinode-736061-m02" has status "Ready":"True"
	I0916 11:07:02.384745   36333 node_ready.go:38] duration metric: took 19.003097722s for node "multinode-736061-m02" to be "Ready" ...
	I0916 11:07:02.384757   36333 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 11:07:02.384835   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/namespaces/kube-system/pods
	I0916 11:07:02.384847   36333 round_trippers.go:469] Request Headers:
	I0916 11:07:02.384857   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:07:02.384860   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:07:02.387695   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:07:02.387714   36333 round_trippers.go:577] Response Headers:
	I0916 11:07:02.387723   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:07:02.387728   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:07:02 GMT
	I0916 11:07:02.387732   36333 round_trippers.go:580]     Audit-Id: db6fdfe5-ace7-4820-9b1f-a954aa0b1dfd
	I0916 11:07:02.387736   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:07:02.387740   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:07:02.387748   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:07:02.388801   36333 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"526"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-nlhl2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"6ea84b9d-f364-4e26-8dc8-44c3b4d92417","resourceVersion":"433","creationTimestamp":"2024-09-16T11:05:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"7a8b2d93-d4d7-4d8f-82e4-f4e98c989dd5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:05:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7a8b2d93-d4d7-4d8f-82e4-f4e98c989dd5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 72116 chars]
	I0916 11:07:02.390902   36333 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nlhl2" in "kube-system" namespace to be "Ready" ...
	I0916 11:07:02.390994   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-nlhl2
	I0916 11:07:02.391003   36333 round_trippers.go:469] Request Headers:
	I0916 11:07:02.391010   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:07:02.391013   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:07:02.393147   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:07:02.393167   36333 round_trippers.go:577] Response Headers:
	I0916 11:07:02.393173   36333 round_trippers.go:580]     Audit-Id: 4649f039-f37f-4522-98b6-8b05a4e38fc3
	I0916 11:07:02.393177   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:07:02.393182   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:07:02.393185   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:07:02.393188   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:07:02.393190   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:07:02 GMT
	I0916 11:07:02.393347   36333 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-nlhl2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"6ea84b9d-f364-4e26-8dc8-44c3b4d92417","resourceVersion":"433","creationTimestamp":"2024-09-16T11:05:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"7a8b2d93-d4d7-4d8f-82e4-f4e98c989dd5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:05:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7a8b2d93-d4d7-4d8f-82e4-f4e98c989dd5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6776 chars]
	I0916 11:07:02.393769   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:07:02.393780   36333 round_trippers.go:469] Request Headers:
	I0916 11:07:02.393787   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:07:02.393791   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:07:02.395573   36333 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:07:02.395587   36333 round_trippers.go:577] Response Headers:
	I0916 11:07:02.395591   36333 round_trippers.go:580]     Audit-Id: fe5c1b9e-9bbb-4466-b9ab-09b67895ebcb
	I0916 11:07:02.395594   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:07:02.395599   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:07:02.395602   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:07:02.395605   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:07:02.395608   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:07:02 GMT
	I0916 11:07:02.395834   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"416","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0916 11:07:02.396086   36333 pod_ready.go:93] pod "coredns-7c65d6cfc9-nlhl2" in "kube-system" namespace has status "Ready":"True"
	I0916 11:07:02.396098   36333 pod_ready.go:82] duration metric: took 5.175476ms for pod "coredns-7c65d6cfc9-nlhl2" in "kube-system" namespace to be "Ready" ...
	I0916 11:07:02.396106   36333 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-736061" in "kube-system" namespace to be "Ready" ...
	I0916 11:07:02.396152   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-736061
	I0916 11:07:02.396159   36333 round_trippers.go:469] Request Headers:
	I0916 11:07:02.396165   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:07:02.396170   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:07:02.397858   36333 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:07:02.397870   36333 round_trippers.go:577] Response Headers:
	I0916 11:07:02.397881   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:07:02 GMT
	I0916 11:07:02.397885   36333 round_trippers.go:580]     Audit-Id: 88d67fa8-09d3-4a9e-bb85-f562c62249ad
	I0916 11:07:02.397889   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:07:02.397891   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:07:02.397894   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:07:02.397900   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:07:02.398018   36333 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-736061","namespace":"kube-system","uid":"f946773c-a82f-4e7e-8148-a81b41b27fa9","resourceVersion":"411","creationTimestamp":"2024-09-16T11:05:53Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.32:2379","kubernetes.io/config.hash":"69d3e8c6e76d0bc1af3482326f7904d1","kubernetes.io/config.mirror":"69d3e8c6e76d0bc1af3482326f7904d1","kubernetes.io/config.seen":"2024-09-16T11:05:53.622995492Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:05:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6418 chars]
	I0916 11:07:02.398340   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:07:02.398350   36333 round_trippers.go:469] Request Headers:
	I0916 11:07:02.398357   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:07:02.398361   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:07:02.399807   36333 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:07:02.399820   36333 round_trippers.go:577] Response Headers:
	I0916 11:07:02.399826   36333 round_trippers.go:580]     Audit-Id: 8d1e2ffe-08a7-405f-bbba-6b02f10eff4e
	I0916 11:07:02.399829   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:07:02.399832   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:07:02.399834   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:07:02.399837   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:07:02.399840   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:07:02 GMT
	I0916 11:07:02.399968   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"416","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0916 11:07:02.400224   36333 pod_ready.go:93] pod "etcd-multinode-736061" in "kube-system" namespace has status "Ready":"True"
	I0916 11:07:02.400236   36333 pod_ready.go:82] duration metric: took 4.124067ms for pod "etcd-multinode-736061" in "kube-system" namespace to be "Ready" ...
	I0916 11:07:02.400248   36333 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-736061" in "kube-system" namespace to be "Ready" ...
	I0916 11:07:02.400292   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-736061
	I0916 11:07:02.400300   36333 round_trippers.go:469] Request Headers:
	I0916 11:07:02.400307   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:07:02.400310   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:07:02.402003   36333 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:07:02.402016   36333 round_trippers.go:577] Response Headers:
	I0916 11:07:02.402029   36333 round_trippers.go:580]     Audit-Id: 4ce8d656-7178-4a29-8e50-faeac8936832
	I0916 11:07:02.402033   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:07:02.402039   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:07:02.402043   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:07:02.402047   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:07:02.402056   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:07:02 GMT
	I0916 11:07:02.402380   36333 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-736061","namespace":"kube-system","uid":"bb6b837b-db0a-455d-8055-ec513f470220","resourceVersion":"408","creationTimestamp":"2024-09-16T11:05:53Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.32:8443","kubernetes.io/config.hash":"efede0e1597c8cbe70740f3169f7ec4a","kubernetes.io/config.mirror":"efede0e1597c8cbe70740f3169f7ec4a","kubernetes.io/config.seen":"2024-09-16T11:05:53.622989337Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:05:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7637 chars]
	I0916 11:07:02.402722   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:07:02.402731   36333 round_trippers.go:469] Request Headers:
	I0916 11:07:02.402738   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:07:02.402742   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:07:02.404373   36333 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:07:02.404388   36333 round_trippers.go:577] Response Headers:
	I0916 11:07:02.404393   36333 round_trippers.go:580]     Audit-Id: b4e955cd-6e9e-4b71-bdbd-d1481361c6d3
	I0916 11:07:02.404397   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:07:02.404400   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:07:02.404403   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:07:02.404408   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:07:02.404412   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:07:02 GMT
	I0916 11:07:02.404511   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"416","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0916 11:07:02.404753   36333 pod_ready.go:93] pod "kube-apiserver-multinode-736061" in "kube-system" namespace has status "Ready":"True"
	I0916 11:07:02.404765   36333 pod_ready.go:82] duration metric: took 4.50843ms for pod "kube-apiserver-multinode-736061" in "kube-system" namespace to be "Ready" ...
	I0916 11:07:02.404772   36333 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-736061" in "kube-system" namespace to be "Ready" ...
	I0916 11:07:02.404811   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-736061
	I0916 11:07:02.404818   36333 round_trippers.go:469] Request Headers:
	I0916 11:07:02.404825   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:07:02.404827   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:07:02.406438   36333 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:07:02.406453   36333 round_trippers.go:577] Response Headers:
	I0916 11:07:02.406458   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:07:02.406462   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:07:02.406464   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:07:02 GMT
	I0916 11:07:02.406468   36333 round_trippers.go:580]     Audit-Id: 2e229ef5-d2a5-45dc-b54b-8141e563aadf
	I0916 11:07:02.406472   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:07:02.406475   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:07:02.406943   36333 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-736061","namespace":"kube-system","uid":"53bb4e69-605c-4160-bf0a-f26e83e16ab1","resourceVersion":"412","creationTimestamp":"2024-09-16T11:05:53Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"94d3338940ee73a61a5075650d027904","kubernetes.io/config.mirror":"94d3338940ee73a61a5075650d027904","kubernetes.io/config.seen":"2024-09-16T11:05:53.622993259Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:05:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7198 chars]
	I0916 11:07:02.407323   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:07:02.407337   36333 round_trippers.go:469] Request Headers:
	I0916 11:07:02.407344   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:07:02.407347   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:07:02.408891   36333 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:07:02.408903   36333 round_trippers.go:577] Response Headers:
	I0916 11:07:02.408908   36333 round_trippers.go:580]     Audit-Id: 3b5a9222-dd01-4ed6-8b24-beacc2f78a04
	I0916 11:07:02.408911   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:07:02.408914   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:07:02.408916   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:07:02.408919   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:07:02.408923   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:07:02 GMT
	I0916 11:07:02.409185   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"416","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0916 11:07:02.409431   36333 pod_ready.go:93] pod "kube-controller-manager-multinode-736061" in "kube-system" namespace has status "Ready":"True"
	I0916 11:07:02.409444   36333 pod_ready.go:82] duration metric: took 4.666097ms for pod "kube-controller-manager-multinode-736061" in "kube-system" namespace to be "Ready" ...
	I0916 11:07:02.409453   36333 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8h6jp" in "kube-system" namespace to be "Ready" ...
	I0916 11:07:02.582842   36333 request.go:632] Waited for 173.330215ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.32:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8h6jp
	I0916 11:07:02.582930   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8h6jp
	I0916 11:07:02.582936   36333 round_trippers.go:469] Request Headers:
	I0916 11:07:02.582944   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:07:02.582953   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:07:02.585507   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:07:02.585526   36333 round_trippers.go:577] Response Headers:
	I0916 11:07:02.585533   36333 round_trippers.go:580]     Audit-Id: 39bc33b9-b8f3-4c73-8e06-a64def0ea4b9
	I0916 11:07:02.585540   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:07:02.585549   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:07:02.585553   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:07:02.585558   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:07:02.585563   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:07:02 GMT
	I0916 11:07:02.586152   36333 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8h6jp","generateName":"kube-proxy-","namespace":"kube-system","uid":"79ea467a-f17a-49de-8cbb-0f9952e21864","resourceVersion":"505","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"562d5386-4fc3-48d5-983a-19cdfbbddc77","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"562d5386-4fc3-48d5-983a-19cdfbbddc77\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6154 chars]
	I0916 11:07:02.781886   36333 request.go:632] Waited for 195.300699ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:07:02.781948   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:07:02.781954   36333 round_trippers.go:469] Request Headers:
	I0916 11:07:02.781961   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:07:02.781965   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:07:02.784388   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:07:02.784406   36333 round_trippers.go:577] Response Headers:
	I0916 11:07:02.784412   36333 round_trippers.go:580]     Audit-Id: 76125429-8c9e-453b-9d68-cfed320ca02a
	I0916 11:07:02.784416   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:07:02.784419   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:07:02.784422   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:07:02.784424   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:07:02.784427   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:07:02 GMT
	I0916 11:07:02.784728   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"525","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3261 chars]
	I0916 11:07:02.784978   36333 pod_ready.go:93] pod "kube-proxy-8h6jp" in "kube-system" namespace has status "Ready":"True"
	I0916 11:07:02.784993   36333 pod_ready.go:82] duration metric: took 375.534709ms for pod "kube-proxy-8h6jp" in "kube-system" namespace to be "Ready" ...
	I0916 11:07:02.785002   36333 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ftj9p" in "kube-system" namespace to be "Ready" ...
	I0916 11:07:02.982155   36333 request.go:632] Waited for 197.065012ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.32:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ftj9p
	I0916 11:07:02.982215   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ftj9p
	I0916 11:07:02.982221   36333 round_trippers.go:469] Request Headers:
	I0916 11:07:02.982229   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:07:02.982234   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:07:02.984638   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:07:02.984658   36333 round_trippers.go:577] Response Headers:
	I0916 11:07:02.984666   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:07:02.984671   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:07:02.984675   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:07:02.984679   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:07:02 GMT
	I0916 11:07:02.984691   36333 round_trippers.go:580]     Audit-Id: 24f355dc-64d1-4ce9-8a30-3620b98005e0
	I0916 11:07:02.984696   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:07:02.984963   36333 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ftj9p","generateName":"kube-proxy-","namespace":"kube-system","uid":"fa72720f-1c4a-46a2-a733-f411ccb6f628","resourceVersion":"398","creationTimestamp":"2024-09-16T11:05:58Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"562d5386-4fc3-48d5-983a-19cdfbbddc77","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:05:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"562d5386-4fc3-48d5-983a-19cdfbbddc77\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6141 chars]
	I0916 11:07:03.182768   36333 request.go:632] Waited for 197.351742ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:07:03.182860   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:07:03.182868   36333 round_trippers.go:469] Request Headers:
	I0916 11:07:03.182878   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:07:03.182883   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:07:03.185485   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:07:03.185505   36333 round_trippers.go:577] Response Headers:
	I0916 11:07:03.185512   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:07:03.185515   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:07:03.185518   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:07:03.185520   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:07:03.185523   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:07:03 GMT
	I0916 11:07:03.185525   36333 round_trippers.go:580]     Audit-Id: 4e664e60-eb62-4c62-9da6-17f315eecc83
	I0916 11:07:03.185788   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"416","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0916 11:07:03.186122   36333 pod_ready.go:93] pod "kube-proxy-ftj9p" in "kube-system" namespace has status "Ready":"True"
	I0916 11:07:03.186137   36333 pod_ready.go:82] duration metric: took 401.129059ms for pod "kube-proxy-ftj9p" in "kube-system" namespace to be "Ready" ...
	I0916 11:07:03.186145   36333 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-736061" in "kube-system" namespace to be "Ready" ...
	I0916 11:07:03.382173   36333 request.go:632] Waited for 195.968801ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.32:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-736061
	I0916 11:07:03.382232   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-736061
	I0916 11:07:03.382237   36333 round_trippers.go:469] Request Headers:
	I0916 11:07:03.382244   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:07:03.382247   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:07:03.384711   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:07:03.384728   36333 round_trippers.go:577] Response Headers:
	I0916 11:07:03.384734   36333 round_trippers.go:580]     Audit-Id: 13eda5db-ca65-450b-8fdb-50f6b0c376c8
	I0916 11:07:03.384737   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:07:03.384740   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:07:03.384742   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:07:03.384745   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:07:03.384749   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:07:03 GMT
	I0916 11:07:03.385194   36333 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-736061","namespace":"kube-system","uid":"25a9a3ee-f264-4bd2-95fc-c8452bedc92b","resourceVersion":"413","creationTimestamp":"2024-09-16T11:05:53Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"de66983060c1e167c6b9498eb8b0a025","kubernetes.io/config.mirror":"de66983060c1e167c6b9498eb8b0a025","kubernetes.io/config.seen":"2024-09-16T11:05:47.723827022Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:05:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4937 chars]
	I0916 11:07:03.581823   36333 request.go:632] Waited for 196.278902ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:07:03.581908   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:07:03.581916   36333 round_trippers.go:469] Request Headers:
	I0916 11:07:03.581926   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:07:03.581932   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:07:03.584158   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:07:03.584181   36333 round_trippers.go:577] Response Headers:
	I0916 11:07:03.584190   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:07:03.584197   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:07:03.584201   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:07:03.584204   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:07:03 GMT
	I0916 11:07:03.584208   36333 round_trippers.go:580]     Audit-Id: d744394e-aee6-473a-b007-feba6b569bd1
	I0916 11:07:03.584212   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:07:03.584486   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"416","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0916 11:07:03.584811   36333 pod_ready.go:93] pod "kube-scheduler-multinode-736061" in "kube-system" namespace has status "Ready":"True"
	I0916 11:07:03.584828   36333 pod_ready.go:82] duration metric: took 398.676655ms for pod "kube-scheduler-multinode-736061" in "kube-system" namespace to be "Ready" ...
	I0916 11:07:03.584837   36333 pod_ready.go:39] duration metric: took 1.200068546s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 11:07:03.584853   36333 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 11:07:03.584914   36333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 11:07:03.601367   36333 system_svc.go:56] duration metric: took 16.505305ms WaitForService to wait for kubelet
	I0916 11:07:03.601396   36333 kubeadm.go:582] duration metric: took 20.368159557s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 11:07:03.601414   36333 node_conditions.go:102] verifying NodePressure condition ...
	I0916 11:07:03.782880   36333 request.go:632] Waited for 181.382248ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.32:8443/api/v1/nodes
	I0916 11:07:03.782956   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes
	I0916 11:07:03.782965   36333 round_trippers.go:469] Request Headers:
	I0916 11:07:03.782975   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:07:03.782987   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:07:03.786179   36333 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 11:07:03.786202   36333 round_trippers.go:577] Response Headers:
	I0916 11:07:03.786210   36333 round_trippers.go:580]     Audit-Id: 567e7d78-c8dc-4af6-9bc6-93ac3ed4acdf
	I0916 11:07:03.786214   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:07:03.786218   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:07:03.786225   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:07:03.786231   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:07:03.786235   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:07:03 GMT
	I0916 11:07:03.786493   36333 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"530"},"items":[{"metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"416","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 10084 chars]
	I0916 11:07:03.786923   36333 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0916 11:07:03.786941   36333 node_conditions.go:123] node cpu capacity is 2
	I0916 11:07:03.786952   36333 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0916 11:07:03.786957   36333 node_conditions.go:123] node cpu capacity is 2
	I0916 11:07:03.786963   36333 node_conditions.go:105] duration metric: took 185.543392ms to run NodePressure ...
	I0916 11:07:03.786977   36333 start.go:241] waiting for startup goroutines ...
	I0916 11:07:03.787012   36333 start.go:255] writing updated cluster config ...
	I0916 11:07:03.787293   36333 ssh_runner.go:195] Run: rm -f paused
	I0916 11:07:03.796481   36333 out.go:177] * Done! kubectl is now configured to use "multinode-736061" cluster and "default" namespace by default
	E0916 11:07:03.797997   36333 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
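The "exec format error" reported above is an ENOEXEC returned by fork/exec: the file at /usr/local/bin/kubectl is not a valid executable for this host, most commonly because it was built for a different architecture or the download was truncated. A minimal Go sketch for confirming that, assuming an ELF-based host; the helper program, its output format, and the "version --client" invocation are illustrative only and are not part of minikube or this test harness:

	// Hypothetical diagnostic helper (not part of minikube): checks why
	// fork/exec of a binary fails with "exec format error" by comparing
	// its ELF machine type against the host architecture.
	package main

	import (
		"debug/elf"
		"fmt"
		"os"
		"os/exec"
		"runtime"
	)

	func main() {
		path := "/usr/local/bin/kubectl" // path taken from the log line above

		// ENOEXEC ("exec format error") from fork/exec means the kernel
		// does not recognize the file as a runnable executable.
		if err := exec.Command(path, "version", "--client").Run(); err != nil {
			fmt.Fprintf(os.Stderr, "exec failed: %v\n", err)
		}

		// Inspect the ELF header to see what the binary was built for.
		f, err := elf.Open(path)
		if err != nil {
			fmt.Fprintf(os.Stderr, "not a readable ELF file: %v\n", err)
			return
		}
		defer f.Close()
		fmt.Printf("binary machine type: %s, host GOARCH: %s\n", f.Machine, runtime.GOARCH)
	}

If the reported machine type disagrees with the host GOARCH (for example an aarch64 kubectl on an amd64 node), replacing the binary with one built for the host architecture would be the expected fix for the kubectl-based checks in this run.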
	
	
	==> CRI-O <==
	Sep 16 11:07:59 multinode-736061 crio[665]: time="2024-09-16 11:07:59.266118219Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484879266095742,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d44f9e5f-9f4f-4b29-85bc-37ba145f72c6 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 11:07:59 multinode-736061 crio[665]: time="2024-09-16 11:07:59.266697522Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=03c588d0-c6e1-46ea-b2a3-98c29e38746b name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:07:59 multinode-736061 crio[665]: time="2024-09-16 11:07:59.266750422Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=03c588d0-c6e1-46ea-b2a3-98c29e38746b name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:07:59 multinode-736061 crio[665]: time="2024-09-16 11:07:59.266957742Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:84517e6af45b4aaa0074acc2c9f529a29e3e476ef98b8a9b414655341b967c3b,PodSandboxId:779060032a611374116cc1e94df9c7e4a6c1443abec99f7abdef9464025a6169,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726484826321922608,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-g9fqk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0dd08783-fcfd-441f-8bda-c82c0c15173e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:840a587a0926e82c37a05beb15b8a2038534eaa736ce1cfed1c0eecaa13a5cdd,PodSandboxId:19286465f900afb5c6da745e2d4672e76d0bd87c33d561cb8eff8479eb72b5c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726484771766190138,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nlhl2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ea84b9d-f364-4e26-8dc8-44c3b4d92417,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02223ab1824986d0b8986be277df48e11a9e023715892ea637bf405f10af7198,PodSandboxId:01381d4d113d1f5aa2f6b3834d0194f5b4909599f84a8b29647fabd8f0fa5f7c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726484771695842020,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: 5e515ac2-bcf5-4a0f-a3af-b4ee4e03a534,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a89ff755837adc838b2681df4fdb3f0cd2733af338156c540403f460fc59bc0,PodSandboxId:bd141ffff1a91e8b17d63ca2b8898889ab336bfde15ecdc79964aafaa1123465,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726484759714659550,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qb4tq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 933f074
9-7868-4e96-9b8e-67005545bbc5,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8c55edbe2173ff25f684104100b9d6d5b8d80563dadb06b4c24bb06b2bf68ee,PodSandboxId:cc5264d1c4b520dfff86dfc83596919ff23f9336b58339ea2a10f4492ba02b12,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726484759520358533,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ftj9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa72720f-1c4a-46a2-a733-f411ccb6f628,},A
nnotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b76d5d4ad419a79a8aa910e3cea3029c12e6a90c7750317c5fbe1c40b43aa762,PodSandboxId:f771edf6fcef2eb32fd9b47472910c255ef2ee1f910f415da1ca1b8287fece1e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726484748620274924,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de66983060c1e167c6b9498eb8b0a025,},Annotations:map
[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:769a75ad1934a20389b16c528015b4fd90be4bf92209ba2509e2cdf2e57e2a24,PodSandboxId:6237db42cfa9d23e63f16ff0ffa961fcbc1f9f25138e7c7fc11442d28aeecaff,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726484748618788280,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69d3e8c6e76d0bc1af3482326f7904d1,},Annotations:map[string]string{io.kubernetes.container.hash:
cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d53f9aec7bc35845a9064135b7cf6154a190d2b15edab056cfd8296575fb25ba,PodSandboxId:c1754b1d745471f13f360eb1605866caa0a1b248d76224212a537736d73a9f20,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726484748609822622,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94d3338940ee73a61a5075650d027904,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed73e9089f633c6b3f26009f325ee6cfdd62ef60de41cbb639ec4aa8b02b84a7,PodSandboxId:06f23871be821ecc20f321f0e711ab8e47f7a0308e547d26ab3365eaa22b0b27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726484748471452056,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efede0e1597c8cbe70740f3169f7ec4a,},Annotations:map[string]string{io.kubernetes.container.hash:
7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=03c588d0-c6e1-46ea-b2a3-98c29e38746b name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:07:59 multinode-736061 crio[665]: time="2024-09-16 11:07:59.306371043Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f3a6e21e-ae7d-49b5-a76f-5237a31690ea name=/runtime.v1.RuntimeService/Version
	Sep 16 11:07:59 multinode-736061 crio[665]: time="2024-09-16 11:07:59.306448289Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f3a6e21e-ae7d-49b5-a76f-5237a31690ea name=/runtime.v1.RuntimeService/Version
	Sep 16 11:07:59 multinode-736061 crio[665]: time="2024-09-16 11:07:59.307700559Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=50c57576-af65-4b3d-8933-8993fefbe040 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 11:07:59 multinode-736061 crio[665]: time="2024-09-16 11:07:59.308074025Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484879308052212,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=50c57576-af65-4b3d-8933-8993fefbe040 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 11:07:59 multinode-736061 crio[665]: time="2024-09-16 11:07:59.308741550Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1614dd05-6563-49cd-bfd3-34c54faad372 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:07:59 multinode-736061 crio[665]: time="2024-09-16 11:07:59.308796780Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1614dd05-6563-49cd-bfd3-34c54faad372 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:07:59 multinode-736061 crio[665]: time="2024-09-16 11:07:59.309168049Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:84517e6af45b4aaa0074acc2c9f529a29e3e476ef98b8a9b414655341b967c3b,PodSandboxId:779060032a611374116cc1e94df9c7e4a6c1443abec99f7abdef9464025a6169,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726484826321922608,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-g9fqk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0dd08783-fcfd-441f-8bda-c82c0c15173e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:840a587a0926e82c37a05beb15b8a2038534eaa736ce1cfed1c0eecaa13a5cdd,PodSandboxId:19286465f900afb5c6da745e2d4672e76d0bd87c33d561cb8eff8479eb72b5c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726484771766190138,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nlhl2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ea84b9d-f364-4e26-8dc8-44c3b4d92417,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02223ab1824986d0b8986be277df48e11a9e023715892ea637bf405f10af7198,PodSandboxId:01381d4d113d1f5aa2f6b3834d0194f5b4909599f84a8b29647fabd8f0fa5f7c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726484771695842020,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: 5e515ac2-bcf5-4a0f-a3af-b4ee4e03a534,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a89ff755837adc838b2681df4fdb3f0cd2733af338156c540403f460fc59bc0,PodSandboxId:bd141ffff1a91e8b17d63ca2b8898889ab336bfde15ecdc79964aafaa1123465,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726484759714659550,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qb4tq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 933f074
9-7868-4e96-9b8e-67005545bbc5,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8c55edbe2173ff25f684104100b9d6d5b8d80563dadb06b4c24bb06b2bf68ee,PodSandboxId:cc5264d1c4b520dfff86dfc83596919ff23f9336b58339ea2a10f4492ba02b12,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726484759520358533,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ftj9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa72720f-1c4a-46a2-a733-f411ccb6f628,},A
nnotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b76d5d4ad419a79a8aa910e3cea3029c12e6a90c7750317c5fbe1c40b43aa762,PodSandboxId:f771edf6fcef2eb32fd9b47472910c255ef2ee1f910f415da1ca1b8287fece1e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726484748620274924,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de66983060c1e167c6b9498eb8b0a025,},Annotations:map
[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:769a75ad1934a20389b16c528015b4fd90be4bf92209ba2509e2cdf2e57e2a24,PodSandboxId:6237db42cfa9d23e63f16ff0ffa961fcbc1f9f25138e7c7fc11442d28aeecaff,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726484748618788280,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69d3e8c6e76d0bc1af3482326f7904d1,},Annotations:map[string]string{io.kubernetes.container.hash:
cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d53f9aec7bc35845a9064135b7cf6154a190d2b15edab056cfd8296575fb25ba,PodSandboxId:c1754b1d745471f13f360eb1605866caa0a1b248d76224212a537736d73a9f20,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726484748609822622,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94d3338940ee73a61a5075650d027904,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed73e9089f633c6b3f26009f325ee6cfdd62ef60de41cbb639ec4aa8b02b84a7,PodSandboxId:06f23871be821ecc20f321f0e711ab8e47f7a0308e547d26ab3365eaa22b0b27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726484748471452056,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efede0e1597c8cbe70740f3169f7ec4a,},Annotations:map[string]string{io.kubernetes.container.hash:
7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1614dd05-6563-49cd-bfd3-34c54faad372 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:07:59 multinode-736061 crio[665]: time="2024-09-16 11:07:59.348394642Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1ed94e1f-f0f6-4ba0-be73-8f1308540ea3 name=/runtime.v1.RuntimeService/Version
	Sep 16 11:07:59 multinode-736061 crio[665]: time="2024-09-16 11:07:59.348466306Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1ed94e1f-f0f6-4ba0-be73-8f1308540ea3 name=/runtime.v1.RuntimeService/Version
	Sep 16 11:07:59 multinode-736061 crio[665]: time="2024-09-16 11:07:59.350005910Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f73afdcb-0bb3-41ff-922f-7cc45350014f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 11:07:59 multinode-736061 crio[665]: time="2024-09-16 11:07:59.350447954Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484879350423925,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f73afdcb-0bb3-41ff-922f-7cc45350014f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 11:07:59 multinode-736061 crio[665]: time="2024-09-16 11:07:59.351547854Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fa913a72-1de1-49c6-9161-b19214f55d1a name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:07:59 multinode-736061 crio[665]: time="2024-09-16 11:07:59.351619375Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fa913a72-1de1-49c6-9161-b19214f55d1a name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:07:59 multinode-736061 crio[665]: time="2024-09-16 11:07:59.351817899Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:84517e6af45b4aaa0074acc2c9f529a29e3e476ef98b8a9b414655341b967c3b,PodSandboxId:779060032a611374116cc1e94df9c7e4a6c1443abec99f7abdef9464025a6169,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726484826321922608,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-g9fqk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0dd08783-fcfd-441f-8bda-c82c0c15173e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:840a587a0926e82c37a05beb15b8a2038534eaa736ce1cfed1c0eecaa13a5cdd,PodSandboxId:19286465f900afb5c6da745e2d4672e76d0bd87c33d561cb8eff8479eb72b5c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726484771766190138,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nlhl2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ea84b9d-f364-4e26-8dc8-44c3b4d92417,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02223ab1824986d0b8986be277df48e11a9e023715892ea637bf405f10af7198,PodSandboxId:01381d4d113d1f5aa2f6b3834d0194f5b4909599f84a8b29647fabd8f0fa5f7c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726484771695842020,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: 5e515ac2-bcf5-4a0f-a3af-b4ee4e03a534,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a89ff755837adc838b2681df4fdb3f0cd2733af338156c540403f460fc59bc0,PodSandboxId:bd141ffff1a91e8b17d63ca2b8898889ab336bfde15ecdc79964aafaa1123465,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726484759714659550,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qb4tq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 933f074
9-7868-4e96-9b8e-67005545bbc5,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8c55edbe2173ff25f684104100b9d6d5b8d80563dadb06b4c24bb06b2bf68ee,PodSandboxId:cc5264d1c4b520dfff86dfc83596919ff23f9336b58339ea2a10f4492ba02b12,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726484759520358533,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ftj9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa72720f-1c4a-46a2-a733-f411ccb6f628,},A
nnotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b76d5d4ad419a79a8aa910e3cea3029c12e6a90c7750317c5fbe1c40b43aa762,PodSandboxId:f771edf6fcef2eb32fd9b47472910c255ef2ee1f910f415da1ca1b8287fece1e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726484748620274924,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de66983060c1e167c6b9498eb8b0a025,},Annotations:map
[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:769a75ad1934a20389b16c528015b4fd90be4bf92209ba2509e2cdf2e57e2a24,PodSandboxId:6237db42cfa9d23e63f16ff0ffa961fcbc1f9f25138e7c7fc11442d28aeecaff,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726484748618788280,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69d3e8c6e76d0bc1af3482326f7904d1,},Annotations:map[string]string{io.kubernetes.container.hash:
cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d53f9aec7bc35845a9064135b7cf6154a190d2b15edab056cfd8296575fb25ba,PodSandboxId:c1754b1d745471f13f360eb1605866caa0a1b248d76224212a537736d73a9f20,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726484748609822622,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94d3338940ee73a61a5075650d027904,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed73e9089f633c6b3f26009f325ee6cfdd62ef60de41cbb639ec4aa8b02b84a7,PodSandboxId:06f23871be821ecc20f321f0e711ab8e47f7a0308e547d26ab3365eaa22b0b27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726484748471452056,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efede0e1597c8cbe70740f3169f7ec4a,},Annotations:map[string]string{io.kubernetes.container.hash:
7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fa913a72-1de1-49c6-9161-b19214f55d1a name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:07:59 multinode-736061 crio[665]: time="2024-09-16 11:07:59.393632297Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=30f95364-9b23-439a-91ba-0569eee6342a name=/runtime.v1.RuntimeService/Version
	Sep 16 11:07:59 multinode-736061 crio[665]: time="2024-09-16 11:07:59.393746881Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=30f95364-9b23-439a-91ba-0569eee6342a name=/runtime.v1.RuntimeService/Version
	Sep 16 11:07:59 multinode-736061 crio[665]: time="2024-09-16 11:07:59.394888360Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e25f3334-b848-4afc-91e2-dd38952b57b6 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 11:07:59 multinode-736061 crio[665]: time="2024-09-16 11:07:59.395267174Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484879395243103,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e25f3334-b848-4afc-91e2-dd38952b57b6 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 11:07:59 multinode-736061 crio[665]: time="2024-09-16 11:07:59.395854815Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bda05c1f-ae43-45cc-a023-ce2925709c1b name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:07:59 multinode-736061 crio[665]: time="2024-09-16 11:07:59.395907705Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bda05c1f-ae43-45cc-a023-ce2925709c1b name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:07:59 multinode-736061 crio[665]: time="2024-09-16 11:07:59.396085213Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:84517e6af45b4aaa0074acc2c9f529a29e3e476ef98b8a9b414655341b967c3b,PodSandboxId:779060032a611374116cc1e94df9c7e4a6c1443abec99f7abdef9464025a6169,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726484826321922608,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-g9fqk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0dd08783-fcfd-441f-8bda-c82c0c15173e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:840a587a0926e82c37a05beb15b8a2038534eaa736ce1cfed1c0eecaa13a5cdd,PodSandboxId:19286465f900afb5c6da745e2d4672e76d0bd87c33d561cb8eff8479eb72b5c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726484771766190138,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nlhl2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ea84b9d-f364-4e26-8dc8-44c3b4d92417,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02223ab1824986d0b8986be277df48e11a9e023715892ea637bf405f10af7198,PodSandboxId:01381d4d113d1f5aa2f6b3834d0194f5b4909599f84a8b29647fabd8f0fa5f7c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726484771695842020,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: 5e515ac2-bcf5-4a0f-a3af-b4ee4e03a534,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a89ff755837adc838b2681df4fdb3f0cd2733af338156c540403f460fc59bc0,PodSandboxId:bd141ffff1a91e8b17d63ca2b8898889ab336bfde15ecdc79964aafaa1123465,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726484759714659550,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qb4tq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 933f074
9-7868-4e96-9b8e-67005545bbc5,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8c55edbe2173ff25f684104100b9d6d5b8d80563dadb06b4c24bb06b2bf68ee,PodSandboxId:cc5264d1c4b520dfff86dfc83596919ff23f9336b58339ea2a10f4492ba02b12,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726484759520358533,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ftj9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa72720f-1c4a-46a2-a733-f411ccb6f628,},A
nnotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b76d5d4ad419a79a8aa910e3cea3029c12e6a90c7750317c5fbe1c40b43aa762,PodSandboxId:f771edf6fcef2eb32fd9b47472910c255ef2ee1f910f415da1ca1b8287fece1e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726484748620274924,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de66983060c1e167c6b9498eb8b0a025,},Annotations:map
[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:769a75ad1934a20389b16c528015b4fd90be4bf92209ba2509e2cdf2e57e2a24,PodSandboxId:6237db42cfa9d23e63f16ff0ffa961fcbc1f9f25138e7c7fc11442d28aeecaff,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726484748618788280,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69d3e8c6e76d0bc1af3482326f7904d1,},Annotations:map[string]string{io.kubernetes.container.hash:
cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d53f9aec7bc35845a9064135b7cf6154a190d2b15edab056cfd8296575fb25ba,PodSandboxId:c1754b1d745471f13f360eb1605866caa0a1b248d76224212a537736d73a9f20,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726484748609822622,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94d3338940ee73a61a5075650d027904,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed73e9089f633c6b3f26009f325ee6cfdd62ef60de41cbb639ec4aa8b02b84a7,PodSandboxId:06f23871be821ecc20f321f0e711ab8e47f7a0308e547d26ab3365eaa22b0b27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726484748471452056,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efede0e1597c8cbe70740f3169f7ec4a,},Annotations:map[string]string{io.kubernetes.container.hash:
7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bda05c1f-ae43-45cc-a023-ce2925709c1b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	84517e6af45b4       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   53 seconds ago       Running             busybox                   0                   779060032a611       busybox-7dff88458-g9fqk
	840a587a0926e       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      About a minute ago   Running             coredns                   0                   19286465f900a       coredns-7c65d6cfc9-nlhl2
	02223ab182498       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       0                   01381d4d113d1       storage-provisioner
	7a89ff755837a       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      About a minute ago   Running             kindnet-cni               0                   bd141ffff1a91       kindnet-qb4tq
	f8c55edbe2173       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      About a minute ago   Running             kube-proxy                0                   cc5264d1c4b52       kube-proxy-ftj9p
	b76d5d4ad419a       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      2 minutes ago        Running             kube-scheduler            0                   f771edf6fcef2       kube-scheduler-multinode-736061
	769a75ad1934a       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      2 minutes ago        Running             etcd                      0                   6237db42cfa9d       etcd-multinode-736061
	d53f9aec7bc35       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      2 minutes ago        Running             kube-controller-manager   0                   c1754b1d74547       kube-controller-manager-multinode-736061
	ed73e9089f633       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      2 minutes ago        Running             kube-apiserver            0                   06f23871be821       kube-apiserver-multinode-736061
	
	
	==> coredns [840a587a0926e82c37a05beb15b8a2038534eaa736ce1cfed1c0eecaa13a5cdd] <==
	[INFO] 10.244.1.2:57967 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000151977s
	[INFO] 10.244.0.3:38411 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000205732s
	[INFO] 10.244.0.3:48472 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001859185s
	[INFO] 10.244.0.3:58999 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000160969s
	[INFO] 10.244.0.3:35408 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00007258s
	[INFO] 10.244.0.3:41914 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001221958s
	[INFO] 10.244.0.3:51441 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000075035s
	[INFO] 10.244.0.3:54367 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000064081s
	[INFO] 10.244.0.3:51073 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000061874s
	[INFO] 10.244.1.2:38827 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130826s
	[INFO] 10.244.1.2:49788 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142283s
	[INFO] 10.244.1.2:43407 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000083078s
	[INFO] 10.244.1.2:35506 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000123825s
	[INFO] 10.244.0.3:35311 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00008958s
	[INFO] 10.244.0.3:44801 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000055108s
	[INFO] 10.244.0.3:45405 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000039898s
	[INFO] 10.244.0.3:53790 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000037364s
	[INFO] 10.244.1.2:44863 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136337s
	[INFO] 10.244.1.2:38345 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000494388s
	[INFO] 10.244.1.2:36190 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000247796s
	[INFO] 10.244.1.2:38755 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000120111s
	[INFO] 10.244.0.3:58238 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129373s
	[INFO] 10.244.0.3:55519 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000102337s
	[INFO] 10.244.0.3:60945 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000061359s
	[INFO] 10.244.0.3:52747 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00010905s
	
	
	==> describe nodes <==
	Name:               multinode-736061
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-736061
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=multinode-736061
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T11_05_54_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 11:05:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-736061
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 11:07:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 11:07:25 +0000   Mon, 16 Sep 2024 11:05:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 11:07:25 +0000   Mon, 16 Sep 2024 11:05:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 11:07:25 +0000   Mon, 16 Sep 2024 11:05:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 11:07:25 +0000   Mon, 16 Sep 2024 11:06:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.32
	  Hostname:    multinode-736061
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 60fe80618d4f42e281d4c50393e9d89e
	  System UUID:                60fe8061-8d4f-42e2-81d4-c50393e9d89e
	  Boot ID:                    d046d280-229f-4e9a-8a6c-1986374da911
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-g9fqk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 coredns-7c65d6cfc9-nlhl2                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     2m
	  kube-system                 etcd-multinode-736061                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m6s
	  kube-system                 kindnet-qb4tq                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m1s
	  kube-system                 kube-apiserver-multinode-736061             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 kube-controller-manager-multinode-736061    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 kube-proxy-ftj9p                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-scheduler-multinode-736061             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 119s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  2m12s (x8 over 2m12s)  kubelet          Node multinode-736061 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m12s (x8 over 2m12s)  kubelet          Node multinode-736061 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m12s (x7 over 2m12s)  kubelet          Node multinode-736061 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m6s                   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m6s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m6s                   kubelet          Node multinode-736061 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m6s                   kubelet          Node multinode-736061 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m6s                   kubelet          Node multinode-736061 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m1s                   node-controller  Node multinode-736061 event: Registered Node multinode-736061 in Controller
	  Normal  NodeReady                108s                   kubelet          Node multinode-736061 status is now: NodeReady
	
	
	Name:               multinode-736061-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-736061-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=multinode-736061
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T11_06_43_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 11:06:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-736061-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 11:07:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 11:07:13 +0000   Mon, 16 Sep 2024 11:06:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 11:07:13 +0000   Mon, 16 Sep 2024 11:06:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 11:07:13 +0000   Mon, 16 Sep 2024 11:06:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 11:07:13 +0000   Mon, 16 Sep 2024 11:07:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.215
	  Hostname:    multinode-736061-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d4fe337504134150bccd557919449b29
	  System UUID:                d4fe3375-0413-4150-bccd-557919449b29
	  Boot ID:                    96a98313-f000-4116-9acc-f37a0a79851e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-754d4    0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 kindnet-xlrxb              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      77s
	  kube-system                 kube-proxy-8h6jp           0 (0%)        0 (0%)      0 (0%)           0 (0%)         77s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 71s                kube-proxy       
	  Normal  NodeHasSufficientMemory  77s (x2 over 77s)  kubelet          Node multinode-736061-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    77s (x2 over 77s)  kubelet          Node multinode-736061-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     77s (x2 over 77s)  kubelet          Node multinode-736061-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  77s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           76s                node-controller  Node multinode-736061-m02 event: Registered Node multinode-736061-m02 in Controller
	  Normal  NodeReady                58s                kubelet          Node multinode-736061-m02 status is now: NodeReady
	
	
	Name:               multinode-736061-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-736061-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=multinode-736061
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T11_07_36_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 11:07:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-736061-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 11:07:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 11:07:55 +0000   Mon, 16 Sep 2024 11:07:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 11:07:55 +0000   Mon, 16 Sep 2024 11:07:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 11:07:55 +0000   Mon, 16 Sep 2024 11:07:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 11:07:55 +0000   Mon, 16 Sep 2024 11:07:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.60
	  Hostname:    multinode-736061-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 890f5eb3683144b2b6dc0b58be15768f
	  System UUID:                890f5eb3-6831-44b2-b6dc-0b58be15768f
	  Boot ID:                    9503ddcf-c293-4b15-825c-031cac2eeb92
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-bvqrg       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23s
	  kube-system                 kube-proxy-5hctk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 18s                kube-proxy       
	  Normal  CIDRAssignmentFailed     23s                cidrAllocator    Node multinode-736061-m03 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  23s (x2 over 24s)  kubelet          Node multinode-736061-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x2 over 24s)  kubelet          Node multinode-736061-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x2 over 24s)  kubelet          Node multinode-736061-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           21s                node-controller  Node multinode-736061-m03 event: Registered Node multinode-736061-m03 in Controller
	  Normal  NodeReady                4s                 kubelet          Node multinode-736061-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep16 11:05] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050701] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040449] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.798651] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.481620] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.570862] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.929227] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.065798] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064029] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.188943] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.125437] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.281577] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +3.899790] systemd-fstab-generator[747]: Ignoring "noauto" option for root device
	[  +3.897000] systemd-fstab-generator[878]: Ignoring "noauto" option for root device
	[  +0.059824] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.997335] systemd-fstab-generator[1219]: Ignoring "noauto" option for root device
	[  +0.078309] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.139976] systemd-fstab-generator[1321]: Ignoring "noauto" option for root device
	[  +0.076513] kauditd_printk_skb: 18 callbacks suppressed
	[Sep16 11:06] kauditd_printk_skb: 69 callbacks suppressed
	[Sep16 11:07] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [769a75ad1934a20389b16c528015b4fd90be4bf92209ba2509e2cdf2e57e2a24] <==
	{"level":"info","ts":"2024-09-16T11:05:49.385500Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"d4c05646b7156589","local-member-attributes":"{Name:multinode-736061 ClientURLs:[https://192.168.39.32:2379]}","request-path":"/0/members/d4c05646b7156589/attributes","cluster-id":"68bdcbcbc4b793bb","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T11:05:49.385662Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T11:05:49.386023Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:05:49.386158Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T11:05:49.388969Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:05:49.389717Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.32:2379"}
	{"level":"info","ts":"2024-09-16T11:05:49.389814Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68bdcbcbc4b793bb","local-member-id":"d4c05646b7156589","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:05:49.389896Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:05:49.389930Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:05:49.390126Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T11:05:49.390157Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T11:05:49.392766Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:05:49.393463Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T11:06:03.777149Z","caller":"traceutil/trace.go:171","msg":"trace[927915415] transaction","detail":"{read_only:false; response_revision:407; number_of_response:1; }","duration":"125.996547ms","start":"2024-09-16T11:06:03.651108Z","end":"2024-09-16T11:06:03.777104Z","steps":["trace[927915415] 'process raft request'  (duration: 125.663993ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T11:06:42.434928Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"155.290318ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7316539574759162275 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-736061-m02.17f5b4c7bf86ac19\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-736061-m02.17f5b4c7bf86ac19\" value_size:642 lease:7316539574759161296 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-09-16T11:06:42.435173Z","caller":"traceutil/trace.go:171","msg":"trace[736335181] transaction","detail":"{read_only:false; response_revision:467; number_of_response:1; }","duration":"242.745028ms","start":"2024-09-16T11:06:42.192402Z","end":"2024-09-16T11:06:42.435147Z","steps":["trace[736335181] 'process raft request'  (duration: 86.752839ms)","trace[736335181] 'compare'  (duration: 155.030741ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-16T11:06:42.435488Z","caller":"traceutil/trace.go:171","msg":"trace[1491776336] transaction","detail":"{read_only:false; response_revision:468; number_of_response:1; }","duration":"164.53116ms","start":"2024-09-16T11:06:42.270945Z","end":"2024-09-16T11:06:42.435476Z","steps":["trace[1491776336] 'process raft request'  (duration: 164.128437ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T11:07:36.191017Z","caller":"traceutil/trace.go:171","msg":"trace[1370350330] linearizableReadLoop","detail":"{readStateIndex:632; appliedIndex:631; }","duration":"135.211812ms","start":"2024-09-16T11:07:36.055773Z","end":"2024-09-16T11:07:36.190985Z","steps":["trace[1370350330] 'read index received'  (duration: 127.332155ms)","trace[1370350330] 'applied index is now lower than readState.Index'  (duration: 7.878564ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-16T11:07:36.191190Z","caller":"traceutil/trace.go:171","msg":"trace[1606896706] transaction","detail":"{read_only:false; response_revision:598; number_of_response:1; }","duration":"230.440734ms","start":"2024-09-16T11:07:35.960732Z","end":"2024-09-16T11:07:36.191172Z","steps":["trace[1606896706] 'process raft request'  (duration: 222.394697ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T11:07:36.191504Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"135.712787ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-736061-m03\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T11:07:36.191575Z","caller":"traceutil/trace.go:171","msg":"trace[641878152] range","detail":"{range_begin:/registry/minions/multinode-736061-m03; range_end:; response_count:0; response_revision:598; }","duration":"135.807158ms","start":"2024-09-16T11:07:36.055751Z","end":"2024-09-16T11:07:36.191558Z","steps":["trace[641878152] 'agreement among raft nodes before linearized reading'  (duration: 135.656463ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T11:07:43.320131Z","caller":"traceutil/trace.go:171","msg":"trace[1026367264] linearizableReadLoop","detail":"{readStateIndex:678; appliedIndex:677; }","duration":"256.510329ms","start":"2024-09-16T11:07:43.063604Z","end":"2024-09-16T11:07:43.320115Z","steps":["trace[1026367264] 'read index received'  (duration: 208.747621ms)","trace[1026367264] 'applied index is now lower than readState.Index'  (duration: 47.76201ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-16T11:07:43.320580Z","caller":"traceutil/trace.go:171","msg":"trace[845413732] transaction","detail":"{read_only:false; response_revision:640; number_of_response:1; }","duration":"283.063625ms","start":"2024-09-16T11:07:43.037497Z","end":"2024-09-16T11:07:43.320560Z","steps":["trace[845413732] 'process raft request'  (duration: 234.904981ms)","trace[845413732] 'compare'  (duration: 47.473062ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-16T11:07:43.320947Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"257.339861ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-736061-m03\" ","response":"range_response_count:1 size:2893"}
	{"level":"info","ts":"2024-09-16T11:07:43.321022Z","caller":"traceutil/trace.go:171","msg":"trace[1372162398] range","detail":"{range_begin:/registry/minions/multinode-736061-m03; range_end:; response_count:1; response_revision:640; }","duration":"257.429414ms","start":"2024-09-16T11:07:43.063585Z","end":"2024-09-16T11:07:43.321014Z","steps":["trace[1372162398] 'agreement among raft nodes before linearized reading'  (duration: 257.097073ms)"],"step_count":1}
	
	
	==> kernel <==
	 11:07:59 up 2 min,  0 users,  load average: 0.37, 0.27, 0.11
	Linux multinode-736061 5.10.207 #1 SMP Sun Sep 15 20:39:46 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [7a89ff755837adc838b2681df4fdb3f0cd2733af338156c540403f460fc59bc0] <==
	I0916 11:07:10.881648       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0916 11:07:10.881812       1 main.go:299] handling current node
	I0916 11:07:10.881875       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0916 11:07:10.881912       1 main.go:322] Node multinode-736061-m02 has CIDR [10.244.1.0/24] 
	I0916 11:07:20.882137       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0916 11:07:20.882350       1 main.go:299] handling current node
	I0916 11:07:20.882431       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0916 11:07:20.882461       1 main.go:322] Node multinode-736061-m02 has CIDR [10.244.1.0/24] 
	I0916 11:07:30.881052       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0916 11:07:30.881131       1 main.go:299] handling current node
	I0916 11:07:30.881145       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0916 11:07:30.881150       1 main.go:322] Node multinode-736061-m02 has CIDR [10.244.1.0/24] 
	I0916 11:07:40.876864       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0916 11:07:40.876961       1 main.go:322] Node multinode-736061-m02 has CIDR [10.244.1.0/24] 
	I0916 11:07:40.877156       1 main.go:295] Handling node with IPs: map[192.168.39.60:{}]
	I0916 11:07:40.877192       1 main.go:322] Node multinode-736061-m03 has CIDR [10.244.2.0/24] 
	I0916 11:07:40.877255       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.168.39.60 Flags: [] Table: 0} 
	I0916 11:07:40.877620       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0916 11:07:40.877652       1 main.go:299] handling current node
	I0916 11:07:50.877918       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0916 11:07:50.878037       1 main.go:299] handling current node
	I0916 11:07:50.878063       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0916 11:07:50.878102       1 main.go:322] Node multinode-736061-m02 has CIDR [10.244.1.0/24] 
	I0916 11:07:50.878265       1 main.go:295] Handling node with IPs: map[192.168.39.60:{}]
	I0916 11:07:50.878397       1 main.go:322] Node multinode-736061-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [ed73e9089f633c6b3f26009f325ee6cfdd62ef60de41cbb639ec4aa8b02b84a7] <==
	I0916 11:05:52.165415       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0916 11:05:52.169921       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0916 11:05:52.169932       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0916 11:05:52.809057       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 11:05:52.859716       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 11:05:52.992808       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0916 11:05:53.012050       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.32]
	I0916 11:05:53.013006       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 11:05:53.027136       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 11:05:53.217214       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 11:05:53.730360       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 11:05:53.742097       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0916 11:05:53.752008       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 11:05:58.672170       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0916 11:05:58.866528       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0916 11:07:07.434739       1 conn.go:339] Error on socket receive: read tcp 192.168.39.32:8443->192.168.39.1:53462: use of closed network connection
	E0916 11:07:07.613512       1 conn.go:339] Error on socket receive: read tcp 192.168.39.32:8443->192.168.39.1:53474: use of closed network connection
	E0916 11:07:07.861059       1 conn.go:339] Error on socket receive: read tcp 192.168.39.32:8443->192.168.39.1:53488: use of closed network connection
	E0916 11:07:08.036468       1 conn.go:339] Error on socket receive: read tcp 192.168.39.32:8443->192.168.39.1:53502: use of closed network connection
	E0916 11:07:08.198997       1 conn.go:339] Error on socket receive: read tcp 192.168.39.32:8443->192.168.39.1:53518: use of closed network connection
	E0916 11:07:08.379195       1 conn.go:339] Error on socket receive: read tcp 192.168.39.32:8443->192.168.39.1:53544: use of closed network connection
	E0916 11:07:08.653676       1 conn.go:339] Error on socket receive: read tcp 192.168.39.32:8443->192.168.39.1:53564: use of closed network connection
	E0916 11:07:08.827028       1 conn.go:339] Error on socket receive: read tcp 192.168.39.32:8443->192.168.39.1:53588: use of closed network connection
	E0916 11:07:08.989872       1 conn.go:339] Error on socket receive: read tcp 192.168.39.32:8443->192.168.39.1:53602: use of closed network connection
	E0916 11:07:09.164411       1 conn.go:339] Error on socket receive: read tcp 192.168.39.32:8443->192.168.39.1:53616: use of closed network connection
	
	
	==> kube-controller-manager [d53f9aec7bc35845a9064135b7cf6154a190d2b15edab056cfd8296575fb25ba] <==
	I0916 11:07:04.458194       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="41.674µs"
	I0916 11:07:06.723754       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="7.919119ms"
	I0916 11:07:06.723994       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="43.222µs"
	I0916 11:07:07.023949       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="8.070331ms"
	I0916 11:07:07.024029       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="36.771µs"
	I0916 11:07:13.402975       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m02"
	I0916 11:07:25.546624       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061"
	I0916 11:07:36.311817       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-736061-m03\" does not exist"
	I0916 11:07:36.312398       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-736061-m02"
	I0916 11:07:36.338752       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-736061-m03" podCIDRs=["10.244.2.0/24"]
	I0916 11:07:36.338804       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	E0916 11:07:36.375873       1 range_allocator.go:427] "Failed to update node PodCIDR after multiple attempts" err="failed to patch node CIDR: Node \"multinode-736061-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.3.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-736061-m03" podCIDRs=["10.244.3.0/24"]
	E0916 11:07:36.376085       1 range_allocator.go:433] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"multinode-736061-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.3.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-736061-m03"
	E0916 11:07:36.376200       1 range_allocator.go:246] "Unhandled Error" err="error syncing 'multinode-736061-m03': failed to patch node CIDR: Node \"multinode-736061-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.3.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I0916 11:07:36.376362       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:07:36.382043       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:07:36.564578       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:07:36.899145       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:07:38.043165       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-736061-m03"
	I0916 11:07:38.154625       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:07:46.364908       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:07:56.014374       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-736061-m02"
	I0916 11:07:56.014402       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:07:56.025246       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:07:58.061011       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	
	
	==> kube-proxy [f8c55edbe2173ff25f684104100b9d6d5b8d80563dadb06b4c24bb06b2bf68ee] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0916 11:05:59.852422       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0916 11:05:59.886836       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.32"]
	E0916 11:05:59.886976       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 11:05:59.944125       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0916 11:05:59.944160       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0916 11:05:59.944181       1 server_linux.go:169] "Using iptables Proxier"
	I0916 11:05:59.947733       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 11:05:59.948149       1 server.go:483] "Version info" version="v1.31.1"
	I0916 11:05:59.948393       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 11:05:59.949794       1 config.go:199] "Starting service config controller"
	I0916 11:05:59.949862       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 11:05:59.950230       1 config.go:105] "Starting endpoint slice config controller"
	I0916 11:05:59.950374       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 11:05:59.950923       1 config.go:328] "Starting node config controller"
	I0916 11:05:59.952219       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 11:06:00.050768       1 shared_informer.go:320] Caches are synced for service config
	I0916 11:06:00.050862       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 11:06:00.052567       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [b76d5d4ad419a79a8aa910e3cea3029c12e6a90c7750317c5fbe1c40b43aa762] <==
	W0916 11:05:52.226221       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 11:05:52.226438       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:05:52.286013       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 11:05:52.286065       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:05:52.292630       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 11:05:52.292712       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:05:52.303069       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 11:05:52.303177       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:05:52.308000       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 11:05:52.308078       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0916 11:05:52.326647       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 11:05:52.326746       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:05:52.367616       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 11:05:52.367800       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 11:05:52.407350       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 11:05:52.407398       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:05:52.423030       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 11:05:52.423081       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 11:05:52.501395       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 11:05:52.501587       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:05:52.597443       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 11:05:52.597573       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:05:52.652519       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 11:05:52.652625       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0916 11:05:55.090829       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 11:06:53 multinode-736061 kubelet[1226]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 16 11:06:53 multinode-736061 kubelet[1226]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 16 11:06:53 multinode-736061 kubelet[1226]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 16 11:07:03 multinode-736061 kubelet[1226]: E0916 11:07:03.719033    1226 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484823718033762,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:07:03 multinode-736061 kubelet[1226]: E0916 11:07:03.719426    1226 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484823718033762,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:07:04 multinode-736061 kubelet[1226]: I0916 11:07:04.425921    1226 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-nlhl2" podStartSLOduration=65.425887916 podStartE2EDuration="1m5.425887916s" podCreationTimestamp="2024-09-16 11:05:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:06:12.829389546 +0000 UTC m=+19.307311555" watchObservedRunningTime="2024-09-16 11:07:04.425887916 +0000 UTC m=+70.903809928"
	Sep 16 11:07:04 multinode-736061 kubelet[1226]: W0916 11:07:04.430025    1226 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:multinode-736061" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'multinode-736061' and this object
	Sep 16 11:07:04 multinode-736061 kubelet[1226]: E0916 11:07:04.430071    1226 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:multinode-736061\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'multinode-736061' and this object" logger="UnhandledError"
	Sep 16 11:07:04 multinode-736061 kubelet[1226]: I0916 11:07:04.478021    1226 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mlwq\" (UniqueName: \"kubernetes.io/projected/0dd08783-fcfd-441f-8bda-c82c0c15173e-kube-api-access-4mlwq\") pod \"busybox-7dff88458-g9fqk\" (UID: \"0dd08783-fcfd-441f-8bda-c82c0c15173e\") " pod="default/busybox-7dff88458-g9fqk"
	Sep 16 11:07:07 multinode-736061 kubelet[1226]: I0916 11:07:07.014468    1226 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-7dff88458-g9fqk" podStartSLOduration=2.180787973 podStartE2EDuration="3.014448315s" podCreationTimestamp="2024-09-16 11:07:04 +0000 UTC" firstStartedPulling="2024-09-16 11:07:05.475239625 +0000 UTC m=+71.953161616" lastFinishedPulling="2024-09-16 11:07:06.308899966 +0000 UTC m=+72.786821958" observedRunningTime="2024-09-16 11:07:07.013656797 +0000 UTC m=+73.491578806" watchObservedRunningTime="2024-09-16 11:07:07.014448315 +0000 UTC m=+73.492370325"
	Sep 16 11:07:13 multinode-736061 kubelet[1226]: E0916 11:07:13.722211    1226 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484833721896310,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:07:13 multinode-736061 kubelet[1226]: E0916 11:07:13.722246    1226 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484833721896310,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:07:23 multinode-736061 kubelet[1226]: E0916 11:07:23.723259    1226 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484843722989186,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:07:23 multinode-736061 kubelet[1226]: E0916 11:07:23.724200    1226 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484843722989186,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:07:33 multinode-736061 kubelet[1226]: E0916 11:07:33.726192    1226 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484853725795872,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:07:33 multinode-736061 kubelet[1226]: E0916 11:07:33.726261    1226 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484853725795872,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:07:43 multinode-736061 kubelet[1226]: E0916 11:07:43.729464    1226 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484863727881449,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:07:43 multinode-736061 kubelet[1226]: E0916 11:07:43.729812    1226 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484863727881449,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:07:53 multinode-736061 kubelet[1226]: E0916 11:07:53.716929    1226 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 16 11:07:53 multinode-736061 kubelet[1226]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 16 11:07:53 multinode-736061 kubelet[1226]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 16 11:07:53 multinode-736061 kubelet[1226]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 16 11:07:53 multinode-736061 kubelet[1226]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 16 11:07:53 multinode-736061 kubelet[1226]: E0916 11:07:53.730823    1226 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484873730628806,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:07:53 multinode-736061 kubelet[1226]: E0916 11:07:53.730844    1226 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484873730628806,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
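Aside on the repeated kubelet "Could not set up iptables canary" messages in the log above: they indicate that ip6tables inside the guest cannot find an IPv6 nat table, which usually means the ip6table_nat kernel module is not available in the guest kernel. A minimal sketch of how one could confirm this from the host, assuming the profile is still running (the module name and the expectation of empty output are assumptions, not taken from this report):

	# Check whether the IPv6 nat table/module is present inside the node (hypothetical diagnostic, not part of the test run).
	out/minikube-linux-amd64 -p multinode-736061 ssh -- "lsmod | grep ip6table_nat; sudo ip6tables -t nat -L -n"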
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-736061 -n multinode-736061
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-736061 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context multinode-736061 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (541.123µs)
helpers_test.go:263: kubectl --context multinode-736061 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestMultiNode/serial/MultiNodeLabels (2.21s)
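Note on the recurring "fork/exec /usr/local/bin/kubectl: exec format error" failures in this run: an exec format error from fork/exec means the kernel refused to execute the kubectl binary at that path, which in practice almost always means the binary was built for a different CPU architecture than the agent (the report header shows a linux/amd64 build) or is truncated. A minimal sketch of how one might verify this on the runner, assuming shell access to the agent (the diagnostic commands below are assumptions, not commands issued by the test suite):

	# Inspect the on-disk binary format; a healthy build for this host should report "ELF 64-bit LSB executable, x86-64".
	file /usr/local/bin/kubectl
	# Once a working client is in place, cross-check its advertised platform (expected: linux/amd64).
	kubectl version --client --output=yaml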

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (41.55s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736061 node start m03 -v=7 --alsologtostderr
E0916 11:08:11.886740   11203 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-736061 node start m03 -v=7 --alsologtostderr: (38.819269955s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736061 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
multinode_test.go:306: (dbg) Non-zero exit: kubectl get nodes: fork/exec /usr/local/bin/kubectl: exec format error (498.563µs)
multinode_test.go:308: failed to kubectl get nodes. args "kubectl get nodes" : fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-736061 -n multinode-736061
helpers_test.go:244: <<< TestMultiNode/serial/StartAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StartAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736061 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-736061 logs -n 25: (1.278689749s)
helpers_test.go:252: TestMultiNode/serial/StartAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| cp      | multinode-736061 cp multinode-736061:/home/docker/cp-test.txt                           | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | multinode-736061-m03:/home/docker/cp-test_multinode-736061_multinode-736061-m03.txt     |                  |         |         |                     |                     |
	| ssh     | multinode-736061 ssh -n                                                                 | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | multinode-736061 sudo cat                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-736061 ssh -n multinode-736061-m03 sudo cat                                   | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | /home/docker/cp-test_multinode-736061_multinode-736061-m03.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-736061 cp testdata/cp-test.txt                                                | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | multinode-736061-m02:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-736061 ssh -n                                                                 | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | multinode-736061-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-736061 cp multinode-736061-m02:/home/docker/cp-test.txt                       | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1886615299/001/cp-test_multinode-736061-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-736061 ssh -n                                                                 | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | multinode-736061-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-736061 cp multinode-736061-m02:/home/docker/cp-test.txt                       | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | multinode-736061:/home/docker/cp-test_multinode-736061-m02_multinode-736061.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-736061 ssh -n                                                                 | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | multinode-736061-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-736061 ssh -n multinode-736061 sudo cat                                       | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | /home/docker/cp-test_multinode-736061-m02_multinode-736061.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-736061 cp multinode-736061-m02:/home/docker/cp-test.txt                       | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | multinode-736061-m03:/home/docker/cp-test_multinode-736061-m02_multinode-736061-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-736061 ssh -n                                                                 | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | multinode-736061-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-736061 ssh -n multinode-736061-m03 sudo cat                                   | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | /home/docker/cp-test_multinode-736061-m02_multinode-736061-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-736061 cp testdata/cp-test.txt                                                | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | multinode-736061-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-736061 ssh -n                                                                 | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | multinode-736061-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-736061 cp multinode-736061-m03:/home/docker/cp-test.txt                       | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1886615299/001/cp-test_multinode-736061-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-736061 ssh -n                                                                 | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | multinode-736061-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-736061 cp multinode-736061-m03:/home/docker/cp-test.txt                       | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | multinode-736061:/home/docker/cp-test_multinode-736061-m03_multinode-736061.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-736061 ssh -n                                                                 | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | multinode-736061-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-736061 ssh -n multinode-736061 sudo cat                                       | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | /home/docker/cp-test_multinode-736061-m03_multinode-736061.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-736061 cp multinode-736061-m03:/home/docker/cp-test.txt                       | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | multinode-736061-m02:/home/docker/cp-test_multinode-736061-m03_multinode-736061-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-736061 ssh -n                                                                 | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | multinode-736061-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-736061 ssh -n multinode-736061-m02 sudo cat                                   | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | /home/docker/cp-test_multinode-736061-m03_multinode-736061-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-736061 node stop m03                                                          | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	| node    | multinode-736061 node start                                                             | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 11:05:14
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 11:05:14.223845   36333 out.go:345] Setting OutFile to fd 1 ...
	I0916 11:05:14.223984   36333 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:05:14.223994   36333 out.go:358] Setting ErrFile to fd 2...
	I0916 11:05:14.223999   36333 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:05:14.224200   36333 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3851/.minikube/bin
	I0916 11:05:14.224806   36333 out.go:352] Setting JSON to false
	I0916 11:05:14.225727   36333 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2864,"bootTime":1726481850,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 11:05:14.225819   36333 start.go:139] virtualization: kvm guest
	I0916 11:05:14.228071   36333 out.go:177] * [multinode-736061] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 11:05:14.229583   36333 notify.go:220] Checking for updates...
	I0916 11:05:14.229600   36333 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 11:05:14.231206   36333 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 11:05:14.232749   36333 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3851/kubeconfig
	I0916 11:05:14.234181   36333 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 11:05:14.235512   36333 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 11:05:14.236951   36333 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 11:05:14.238333   36333 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 11:05:14.273835   36333 out.go:177] * Using the kvm2 driver based on user configuration
	I0916 11:05:14.275189   36333 start.go:297] selected driver: kvm2
	I0916 11:05:14.275203   36333 start.go:901] validating driver "kvm2" against <nil>
	I0916 11:05:14.275215   36333 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 11:05:14.275970   36333 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:05:14.276060   36333 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19651-3851/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0916 11:05:14.291713   36333 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0916 11:05:14.291764   36333 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 11:05:14.292100   36333 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 11:05:14.292145   36333 cni.go:84] Creating CNI manager for ""
	I0916 11:05:14.292195   36333 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0916 11:05:14.292207   36333 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 11:05:14.292273   36333 start.go:340] cluster config:
	{Name:multinode-736061 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-736061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:05:14.292418   36333 iso.go:125] acquiring lock: {Name:mk8165b793e44378487d96b0de120a258f46e187 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:05:14.294221   36333 out.go:177] * Starting "multinode-736061" primary control-plane node in "multinode-736061" cluster
	I0916 11:05:14.295615   36333 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 11:05:14.295660   36333 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0916 11:05:14.295673   36333 cache.go:56] Caching tarball of preloaded images
	I0916 11:05:14.295754   36333 preload.go:172] Found /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 11:05:14.295767   36333 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 11:05:14.296098   36333 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/config.json ...
	I0916 11:05:14.296124   36333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/config.json: {Name:mk24a1d206035e062b796738ad5d4a2fff193a3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:05:14.296259   36333 start.go:360] acquireMachinesLock for multinode-736061: {Name:mk413037138532b69f77e412c3960796c09316f1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 11:05:14.296287   36333 start.go:364] duration metric: took 15.67µs to acquireMachinesLock for "multinode-736061"
	I0916 11:05:14.296303   36333 start.go:93] Provisioning new machine with config: &{Name:multinode-736061 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-736061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 11:05:14.296361   36333 start.go:125] createHost starting for "" (driver="kvm2")
	I0916 11:05:14.298147   36333 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0916 11:05:14.298294   36333 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 11:05:14.298341   36333 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 11:05:14.313364   36333 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46483
	I0916 11:05:14.313819   36333 main.go:141] libmachine: () Calling .GetVersion
	I0916 11:05:14.314342   36333 main.go:141] libmachine: Using API Version  1
	I0916 11:05:14.314361   36333 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 11:05:14.314693   36333 main.go:141] libmachine: () Calling .GetMachineName
	I0916 11:05:14.314921   36333 main.go:141] libmachine: (multinode-736061) Calling .GetMachineName
	I0916 11:05:14.315078   36333 main.go:141] libmachine: (multinode-736061) Calling .DriverName
	I0916 11:05:14.315233   36333 start.go:159] libmachine.API.Create for "multinode-736061" (driver="kvm2")
	I0916 11:05:14.315266   36333 client.go:168] LocalClient.Create starting
	I0916 11:05:14.315303   36333 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem
	I0916 11:05:14.315351   36333 main.go:141] libmachine: Decoding PEM data...
	I0916 11:05:14.315373   36333 main.go:141] libmachine: Parsing certificate...
	I0916 11:05:14.315435   36333 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem
	I0916 11:05:14.315461   36333 main.go:141] libmachine: Decoding PEM data...
	I0916 11:05:14.315487   36333 main.go:141] libmachine: Parsing certificate...
	I0916 11:05:14.315511   36333 main.go:141] libmachine: Running pre-create checks...
	I0916 11:05:14.315523   36333 main.go:141] libmachine: (multinode-736061) Calling .PreCreateCheck
	I0916 11:05:14.315987   36333 main.go:141] libmachine: (multinode-736061) Calling .GetConfigRaw
	I0916 11:05:14.316344   36333 main.go:141] libmachine: Creating machine...
	I0916 11:05:14.316359   36333 main.go:141] libmachine: (multinode-736061) Calling .Create
	I0916 11:05:14.316506   36333 main.go:141] libmachine: (multinode-736061) Creating KVM machine...
	I0916 11:05:14.317992   36333 main.go:141] libmachine: (multinode-736061) DBG | found existing default KVM network
	I0916 11:05:14.318708   36333 main.go:141] libmachine: (multinode-736061) DBG | I0916 11:05:14.318561   36356 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ba0}
	I0916 11:05:14.318729   36333 main.go:141] libmachine: (multinode-736061) DBG | created network xml: 
	I0916 11:05:14.318738   36333 main.go:141] libmachine: (multinode-736061) DBG | <network>
	I0916 11:05:14.318744   36333 main.go:141] libmachine: (multinode-736061) DBG |   <name>mk-multinode-736061</name>
	I0916 11:05:14.318749   36333 main.go:141] libmachine: (multinode-736061) DBG |   <dns enable='no'/>
	I0916 11:05:14.318753   36333 main.go:141] libmachine: (multinode-736061) DBG |   
	I0916 11:05:14.318759   36333 main.go:141] libmachine: (multinode-736061) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0916 11:05:14.318770   36333 main.go:141] libmachine: (multinode-736061) DBG |     <dhcp>
	I0916 11:05:14.318777   36333 main.go:141] libmachine: (multinode-736061) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0916 11:05:14.318784   36333 main.go:141] libmachine: (multinode-736061) DBG |     </dhcp>
	I0916 11:05:14.318789   36333 main.go:141] libmachine: (multinode-736061) DBG |   </ip>
	I0916 11:05:14.318793   36333 main.go:141] libmachine: (multinode-736061) DBG |   
	I0916 11:05:14.318798   36333 main.go:141] libmachine: (multinode-736061) DBG | </network>
	I0916 11:05:14.318806   36333 main.go:141] libmachine: (multinode-736061) DBG | 
	I0916 11:05:14.323865   36333 main.go:141] libmachine: (multinode-736061) DBG | trying to create private KVM network mk-multinode-736061 192.168.39.0/24...
	I0916 11:05:14.391633   36333 main.go:141] libmachine: (multinode-736061) DBG | private KVM network mk-multinode-736061 192.168.39.0/24 created
	I0916 11:05:14.391667   36333 main.go:141] libmachine: (multinode-736061) DBG | I0916 11:05:14.391608   36356 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 11:05:14.391700   36333 main.go:141] libmachine: (multinode-736061) Setting up store path in /home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061 ...
	I0916 11:05:14.391716   36333 main.go:141] libmachine: (multinode-736061) Building disk image from file:///home/jenkins/minikube-integration/19651-3851/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso
	I0916 11:05:14.391759   36333 main.go:141] libmachine: (multinode-736061) Downloading /home/jenkins/minikube-integration/19651-3851/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19651-3851/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso...
	I0916 11:05:14.635189   36333 main.go:141] libmachine: (multinode-736061) DBG | I0916 11:05:14.635088   36356 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061/id_rsa...
	I0916 11:05:14.708226   36333 main.go:141] libmachine: (multinode-736061) DBG | I0916 11:05:14.707982   36356 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061/multinode-736061.rawdisk...
	I0916 11:05:14.708249   36333 main.go:141] libmachine: (multinode-736061) DBG | Writing magic tar header
	I0916 11:05:14.708260   36333 main.go:141] libmachine: (multinode-736061) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061 (perms=drwx------)
	I0916 11:05:14.708270   36333 main.go:141] libmachine: (multinode-736061) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851/.minikube/machines (perms=drwxr-xr-x)
	I0916 11:05:14.708276   36333 main.go:141] libmachine: (multinode-736061) DBG | Writing SSH key tar header
	I0916 11:05:14.708283   36333 main.go:141] libmachine: (multinode-736061) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851/.minikube (perms=drwxr-xr-x)
	I0916 11:05:14.708290   36333 main.go:141] libmachine: (multinode-736061) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851 (perms=drwxrwxr-x)
	I0916 11:05:14.708296   36333 main.go:141] libmachine: (multinode-736061) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0916 11:05:14.708304   36333 main.go:141] libmachine: (multinode-736061) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0916 11:05:14.708308   36333 main.go:141] libmachine: (multinode-736061) Creating domain...
	I0916 11:05:14.708320   36333 main.go:141] libmachine: (multinode-736061) DBG | I0916 11:05:14.708097   36356 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061 ...
	I0916 11:05:14.708327   36333 main.go:141] libmachine: (multinode-736061) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061
	I0916 11:05:14.708336   36333 main.go:141] libmachine: (multinode-736061) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851/.minikube/machines
	I0916 11:05:14.708342   36333 main.go:141] libmachine: (multinode-736061) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 11:05:14.708429   36333 main.go:141] libmachine: (multinode-736061) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851
	I0916 11:05:14.708467   36333 main.go:141] libmachine: (multinode-736061) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0916 11:05:14.708478   36333 main.go:141] libmachine: (multinode-736061) DBG | Checking permissions on dir: /home/jenkins
	I0916 11:05:14.708483   36333 main.go:141] libmachine: (multinode-736061) DBG | Checking permissions on dir: /home
	I0916 11:05:14.708491   36333 main.go:141] libmachine: (multinode-736061) DBG | Skipping /home - not owner
	I0916 11:05:14.709442   36333 main.go:141] libmachine: (multinode-736061) define libvirt domain using xml: 
	I0916 11:05:14.709458   36333 main.go:141] libmachine: (multinode-736061) <domain type='kvm'>
	I0916 11:05:14.709467   36333 main.go:141] libmachine: (multinode-736061)   <name>multinode-736061</name>
	I0916 11:05:14.709481   36333 main.go:141] libmachine: (multinode-736061)   <memory unit='MiB'>2200</memory>
	I0916 11:05:14.709490   36333 main.go:141] libmachine: (multinode-736061)   <vcpu>2</vcpu>
	I0916 11:05:14.709497   36333 main.go:141] libmachine: (multinode-736061)   <features>
	I0916 11:05:14.709504   36333 main.go:141] libmachine: (multinode-736061)     <acpi/>
	I0916 11:05:14.709518   36333 main.go:141] libmachine: (multinode-736061)     <apic/>
	I0916 11:05:14.709529   36333 main.go:141] libmachine: (multinode-736061)     <pae/>
	I0916 11:05:14.709536   36333 main.go:141] libmachine: (multinode-736061)     
	I0916 11:05:14.709543   36333 main.go:141] libmachine: (multinode-736061)   </features>
	I0916 11:05:14.709554   36333 main.go:141] libmachine: (multinode-736061)   <cpu mode='host-passthrough'>
	I0916 11:05:14.709564   36333 main.go:141] libmachine: (multinode-736061)   
	I0916 11:05:14.709571   36333 main.go:141] libmachine: (multinode-736061)   </cpu>
	I0916 11:05:14.709578   36333 main.go:141] libmachine: (multinode-736061)   <os>
	I0916 11:05:14.709588   36333 main.go:141] libmachine: (multinode-736061)     <type>hvm</type>
	I0916 11:05:14.709597   36333 main.go:141] libmachine: (multinode-736061)     <boot dev='cdrom'/>
	I0916 11:05:14.709610   36333 main.go:141] libmachine: (multinode-736061)     <boot dev='hd'/>
	I0916 11:05:14.709643   36333 main.go:141] libmachine: (multinode-736061)     <bootmenu enable='no'/>
	I0916 11:05:14.709666   36333 main.go:141] libmachine: (multinode-736061)   </os>
	I0916 11:05:14.709673   36333 main.go:141] libmachine: (multinode-736061)   <devices>
	I0916 11:05:14.709680   36333 main.go:141] libmachine: (multinode-736061)     <disk type='file' device='cdrom'>
	I0916 11:05:14.709698   36333 main.go:141] libmachine: (multinode-736061)       <source file='/home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061/boot2docker.iso'/>
	I0916 11:05:14.709713   36333 main.go:141] libmachine: (multinode-736061)       <target dev='hdc' bus='scsi'/>
	I0916 11:05:14.709725   36333 main.go:141] libmachine: (multinode-736061)       <readonly/>
	I0916 11:05:14.709734   36333 main.go:141] libmachine: (multinode-736061)     </disk>
	I0916 11:05:14.709746   36333 main.go:141] libmachine: (multinode-736061)     <disk type='file' device='disk'>
	I0916 11:05:14.709758   36333 main.go:141] libmachine: (multinode-736061)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0916 11:05:14.709774   36333 main.go:141] libmachine: (multinode-736061)       <source file='/home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061/multinode-736061.rawdisk'/>
	I0916 11:05:14.709785   36333 main.go:141] libmachine: (multinode-736061)       <target dev='hda' bus='virtio'/>
	I0916 11:05:14.709801   36333 main.go:141] libmachine: (multinode-736061)     </disk>
	I0916 11:05:14.709829   36333 main.go:141] libmachine: (multinode-736061)     <interface type='network'>
	I0916 11:05:14.709842   36333 main.go:141] libmachine: (multinode-736061)       <source network='mk-multinode-736061'/>
	I0916 11:05:14.709853   36333 main.go:141] libmachine: (multinode-736061)       <model type='virtio'/>
	I0916 11:05:14.709863   36333 main.go:141] libmachine: (multinode-736061)     </interface>
	I0916 11:05:14.709873   36333 main.go:141] libmachine: (multinode-736061)     <interface type='network'>
	I0916 11:05:14.709890   36333 main.go:141] libmachine: (multinode-736061)       <source network='default'/>
	I0916 11:05:14.709904   36333 main.go:141] libmachine: (multinode-736061)       <model type='virtio'/>
	I0916 11:05:14.709916   36333 main.go:141] libmachine: (multinode-736061)     </interface>
	I0916 11:05:14.709923   36333 main.go:141] libmachine: (multinode-736061)     <serial type='pty'>
	I0916 11:05:14.709932   36333 main.go:141] libmachine: (multinode-736061)       <target port='0'/>
	I0916 11:05:14.709941   36333 main.go:141] libmachine: (multinode-736061)     </serial>
	I0916 11:05:14.709952   36333 main.go:141] libmachine: (multinode-736061)     <console type='pty'>
	I0916 11:05:14.709962   36333 main.go:141] libmachine: (multinode-736061)       <target type='serial' port='0'/>
	I0916 11:05:14.709970   36333 main.go:141] libmachine: (multinode-736061)     </console>
	I0916 11:05:14.709992   36333 main.go:141] libmachine: (multinode-736061)     <rng model='virtio'>
	I0916 11:05:14.710011   36333 main.go:141] libmachine: (multinode-736061)       <backend model='random'>/dev/random</backend>
	I0916 11:05:14.710020   36333 main.go:141] libmachine: (multinode-736061)     </rng>
	I0916 11:05:14.710028   36333 main.go:141] libmachine: (multinode-736061)     
	I0916 11:05:14.710036   36333 main.go:141] libmachine: (multinode-736061)     
	I0916 11:05:14.710044   36333 main.go:141] libmachine: (multinode-736061)   </devices>
	I0916 11:05:14.710055   36333 main.go:141] libmachine: (multinode-736061) </domain>
	I0916 11:05:14.710158   36333 main.go:141] libmachine: (multinode-736061) 
	I0916 11:05:14.714475   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:e4:3e:ff in network default
	I0916 11:05:14.715227   36333 main.go:141] libmachine: (multinode-736061) Ensuring networks are active...
	I0916 11:05:14.715242   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:14.715961   36333 main.go:141] libmachine: (multinode-736061) Ensuring network default is active
	I0916 11:05:14.716252   36333 main.go:141] libmachine: (multinode-736061) Ensuring network mk-multinode-736061 is active
	I0916 11:05:14.716836   36333 main.go:141] libmachine: (multinode-736061) Getting domain xml...
	I0916 11:05:14.717658   36333 main.go:141] libmachine: (multinode-736061) Creating domain...
	I0916 11:05:15.920598   36333 main.go:141] libmachine: (multinode-736061) Waiting to get IP...
	I0916 11:05:15.921389   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:15.921798   36333 main.go:141] libmachine: (multinode-736061) DBG | unable to find current IP address of domain multinode-736061 in network mk-multinode-736061
	I0916 11:05:15.921861   36333 main.go:141] libmachine: (multinode-736061) DBG | I0916 11:05:15.921786   36356 retry.go:31] will retry after 223.192284ms: waiting for machine to come up
	I0916 11:05:16.146274   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:16.146739   36333 main.go:141] libmachine: (multinode-736061) DBG | unable to find current IP address of domain multinode-736061 in network mk-multinode-736061
	I0916 11:05:16.146767   36333 main.go:141] libmachine: (multinode-736061) DBG | I0916 11:05:16.146689   36356 retry.go:31] will retry after 252.499488ms: waiting for machine to come up
	I0916 11:05:16.401280   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:16.401740   36333 main.go:141] libmachine: (multinode-736061) DBG | unable to find current IP address of domain multinode-736061 in network mk-multinode-736061
	I0916 11:05:16.401759   36333 main.go:141] libmachine: (multinode-736061) DBG | I0916 11:05:16.401697   36356 retry.go:31] will retry after 482.760363ms: waiting for machine to come up
	I0916 11:05:16.886298   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:16.886830   36333 main.go:141] libmachine: (multinode-736061) DBG | unable to find current IP address of domain multinode-736061 in network mk-multinode-736061
	I0916 11:05:16.886865   36333 main.go:141] libmachine: (multinode-736061) DBG | I0916 11:05:16.886749   36356 retry.go:31] will retry after 439.063598ms: waiting for machine to come up
	I0916 11:05:17.326932   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:17.327400   36333 main.go:141] libmachine: (multinode-736061) DBG | unable to find current IP address of domain multinode-736061 in network mk-multinode-736061
	I0916 11:05:17.327423   36333 main.go:141] libmachine: (multinode-736061) DBG | I0916 11:05:17.327352   36356 retry.go:31] will retry after 505.8946ms: waiting for machine to come up
	I0916 11:05:17.835052   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:17.835477   36333 main.go:141] libmachine: (multinode-736061) DBG | unable to find current IP address of domain multinode-736061 in network mk-multinode-736061
	I0916 11:05:17.835502   36333 main.go:141] libmachine: (multinode-736061) DBG | I0916 11:05:17.835432   36356 retry.go:31] will retry after 717.593659ms: waiting for machine to come up
	I0916 11:05:18.554420   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:18.554893   36333 main.go:141] libmachine: (multinode-736061) DBG | unable to find current IP address of domain multinode-736061 in network mk-multinode-736061
	I0916 11:05:18.554930   36333 main.go:141] libmachine: (multinode-736061) DBG | I0916 11:05:18.554829   36356 retry.go:31] will retry after 1.016278613s: waiting for machine to come up
	I0916 11:05:19.572904   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:19.573341   36333 main.go:141] libmachine: (multinode-736061) DBG | unable to find current IP address of domain multinode-736061 in network mk-multinode-736061
	I0916 11:05:19.573364   36333 main.go:141] libmachine: (multinode-736061) DBG | I0916 11:05:19.573302   36356 retry.go:31] will retry after 1.277341936s: waiting for machine to come up
	I0916 11:05:20.852855   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:20.853321   36333 main.go:141] libmachine: (multinode-736061) DBG | unable to find current IP address of domain multinode-736061 in network mk-multinode-736061
	I0916 11:05:20.853351   36333 main.go:141] libmachine: (multinode-736061) DBG | I0916 11:05:20.853297   36356 retry.go:31] will retry after 1.793810706s: waiting for machine to come up
	I0916 11:05:22.649467   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:22.649908   36333 main.go:141] libmachine: (multinode-736061) DBG | unable to find current IP address of domain multinode-736061 in network mk-multinode-736061
	I0916 11:05:22.649931   36333 main.go:141] libmachine: (multinode-736061) DBG | I0916 11:05:22.649869   36356 retry.go:31] will retry after 2.307737171s: waiting for machine to come up
	I0916 11:05:24.959386   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:24.959782   36333 main.go:141] libmachine: (multinode-736061) DBG | unable to find current IP address of domain multinode-736061 in network mk-multinode-736061
	I0916 11:05:24.959810   36333 main.go:141] libmachine: (multinode-736061) DBG | I0916 11:05:24.959752   36356 retry.go:31] will retry after 1.783352311s: waiting for machine to come up
	I0916 11:05:26.745737   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:26.746182   36333 main.go:141] libmachine: (multinode-736061) DBG | unable to find current IP address of domain multinode-736061 in network mk-multinode-736061
	I0916 11:05:26.746196   36333 main.go:141] libmachine: (multinode-736061) DBG | I0916 11:05:26.746148   36356 retry.go:31] will retry after 3.631719991s: waiting for machine to come up
	I0916 11:05:30.379263   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:30.379706   36333 main.go:141] libmachine: (multinode-736061) DBG | unable to find current IP address of domain multinode-736061 in network mk-multinode-736061
	I0916 11:05:30.379735   36333 main.go:141] libmachine: (multinode-736061) DBG | I0916 11:05:30.379652   36356 retry.go:31] will retry after 2.815578177s: waiting for machine to come up
	I0916 11:05:33.198465   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:33.198966   36333 main.go:141] libmachine: (multinode-736061) DBG | unable to find current IP address of domain multinode-736061 in network mk-multinode-736061
	I0916 11:05:33.198991   36333 main.go:141] libmachine: (multinode-736061) DBG | I0916 11:05:33.198922   36356 retry.go:31] will retry after 3.799964021s: waiting for machine to come up
	I0916 11:05:37.002591   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:37.003027   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has current primary IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:37.003052   36333 main.go:141] libmachine: (multinode-736061) Found IP for machine: 192.168.39.32
	I0916 11:05:37.003065   36333 main.go:141] libmachine: (multinode-736061) Reserving static IP address...
	I0916 11:05:37.003449   36333 main.go:141] libmachine: (multinode-736061) DBG | unable to find host DHCP lease matching {name: "multinode-736061", mac: "52:54:00:c1:52:21", ip: "192.168.39.32"} in network mk-multinode-736061
	I0916 11:05:37.077828   36333 main.go:141] libmachine: (multinode-736061) DBG | Getting to WaitForSSH function...
	I0916 11:05:37.077850   36333 main.go:141] libmachine: (multinode-736061) Reserved static IP address: 192.168.39.32
	I0916 11:05:37.077862   36333 main.go:141] libmachine: (multinode-736061) Waiting for SSH to be available...
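The repeated "will retry after ...: waiting for machine to come up" lines above are minikube polling libvirt for the guest's DHCP lease. A minimal Go sketch of that wait loop, assuming a jittered, roughly doubling backoff (the real retry.go policy is not reproduced here, only the shape of the log output):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryUntil polls check with a jittered, roughly doubling delay until it
// succeeds or the overall timeout expires.
func retryUntil(timeout time.Duration, check func() error) error {
	deadline := time.Now().Add(timeout)
	wait := 250 * time.Millisecond
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %s: %w", timeout, err)
		}
		sleep := wait + time.Duration(rand.Int63n(int64(wait))) // add jitter
		fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if wait < 4*time.Second {
			wait *= 2
		}
	}
}

func main() {
	attempts := 0
	err := retryUntil(30*time.Second, func() error {
		attempts++
		if attempts < 4 {
			return errors.New("unable to find current IP address of domain")
		}
		return nil
	})
	fmt.Println("done:", err)
}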
	I0916 11:05:37.080375   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:37.080796   36333 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c1:52:21}
	I0916 11:05:37.080828   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:37.080993   36333 main.go:141] libmachine: (multinode-736061) DBG | Using SSH client type: external
	I0916 11:05:37.081020   36333 main.go:141] libmachine: (multinode-736061) DBG | Using SSH private key: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061/id_rsa (-rw-------)
	I0916 11:05:37.081050   36333 main.go:141] libmachine: (multinode-736061) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.32 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0916 11:05:37.081062   36333 main.go:141] libmachine: (multinode-736061) DBG | About to run SSH command:
	I0916 11:05:37.081074   36333 main.go:141] libmachine: (multinode-736061) DBG | exit 0
	I0916 11:05:37.209566   36333 main.go:141] libmachine: (multinode-736061) DBG | SSH cmd err, output: <nil>: 
	I0916 11:05:37.209898   36333 main.go:141] libmachine: (multinode-736061) KVM machine creation complete!
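The WaitForSSH step above simply runs `exit 0` on the guest through the external ssh client, using the options printed at 11:05:37.081050, and treats a zero exit status as "SSH is available". A sketch of that probe with os/exec; the binary path, key path and address are copied from the log, and error handling is simplified:

package main

import (
	"fmt"
	"os/exec"
)

// sshAvailable runs `exit 0` on the guest via the external ssh client and
// reports whether the command exited successfully.
func sshAvailable(addr, keyPath string) bool {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + addr,
		"exit 0",
	}
	return exec.Command("/usr/bin/ssh", args...).Run() == nil
}

func main() {
	key := "/home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061/id_rsa"
	fmt.Println("ssh available:", sshAvailable("192.168.39.32", key))
}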
	I0916 11:05:37.210248   36333 main.go:141] libmachine: (multinode-736061) Calling .GetConfigRaw
	I0916 11:05:37.210834   36333 main.go:141] libmachine: (multinode-736061) Calling .DriverName
	I0916 11:05:37.211040   36333 main.go:141] libmachine: (multinode-736061) Calling .DriverName
	I0916 11:05:37.211180   36333 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0916 11:05:37.211197   36333 main.go:141] libmachine: (multinode-736061) Calling .GetState
	I0916 11:05:37.212405   36333 main.go:141] libmachine: Detecting operating system of created instance...
	I0916 11:05:37.212417   36333 main.go:141] libmachine: Waiting for SSH to be available...
	I0916 11:05:37.212422   36333 main.go:141] libmachine: Getting to WaitForSSH function...
	I0916 11:05:37.212451   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHHostname
	I0916 11:05:37.214767   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:37.215122   36333 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:05:37.215146   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:37.215270   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHPort
	I0916 11:05:37.215430   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:05:37.215573   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:05:37.215674   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHUsername
	I0916 11:05:37.215811   36333 main.go:141] libmachine: Using SSH client type: native
	I0916 11:05:37.215994   36333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0916 11:05:37.216004   36333 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0916 11:05:37.324665   36333 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 11:05:37.324687   36333 main.go:141] libmachine: Detecting the provisioner...
	I0916 11:05:37.324695   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHHostname
	I0916 11:05:37.327356   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:37.327742   36333 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:05:37.327765   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:37.327962   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHPort
	I0916 11:05:37.328147   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:05:37.328297   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:05:37.328424   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHUsername
	I0916 11:05:37.328544   36333 main.go:141] libmachine: Using SSH client type: native
	I0916 11:05:37.328712   36333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0916 11:05:37.328721   36333 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0916 11:05:37.438637   36333 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0916 11:05:37.438699   36333 main.go:141] libmachine: found compatible host: buildroot
	I0916 11:05:37.438705   36333 main.go:141] libmachine: Provisioning with buildroot...
	I0916 11:05:37.438712   36333 main.go:141] libmachine: (multinode-736061) Calling .GetMachineName
	I0916 11:05:37.438954   36333 buildroot.go:166] provisioning hostname "multinode-736061"
	I0916 11:05:37.438983   36333 main.go:141] libmachine: (multinode-736061) Calling .GetMachineName
	I0916 11:05:37.439145   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHHostname
	I0916 11:05:37.441912   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:37.442287   36333 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:05:37.442323   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:37.442444   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHPort
	I0916 11:05:37.442627   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:05:37.442759   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:05:37.442876   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHUsername
	I0916 11:05:37.443043   36333 main.go:141] libmachine: Using SSH client type: native
	I0916 11:05:37.443230   36333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0916 11:05:37.443245   36333 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-736061 && echo "multinode-736061" | sudo tee /etc/hostname
	I0916 11:05:37.568321   36333 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-736061
	
	I0916 11:05:37.568348   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHHostname
	I0916 11:05:37.571043   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:37.571306   36333 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:05:37.571337   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:37.571508   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHPort
	I0916 11:05:37.571675   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:05:37.571803   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:05:37.571940   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHUsername
	I0916 11:05:37.572158   36333 main.go:141] libmachine: Using SSH client type: native
	I0916 11:05:37.572336   36333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0916 11:05:37.572359   36333 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-736061' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-736061/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-736061' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 11:05:37.690231   36333 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 11:05:37.690283   36333 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3851/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3851/.minikube}
	I0916 11:05:37.690315   36333 buildroot.go:174] setting up certificates
	I0916 11:05:37.690325   36333 provision.go:84] configureAuth start
	I0916 11:05:37.690334   36333 main.go:141] libmachine: (multinode-736061) Calling .GetMachineName
	I0916 11:05:37.690613   36333 main.go:141] libmachine: (multinode-736061) Calling .GetIP
	I0916 11:05:37.693814   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:37.694221   36333 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:05:37.694249   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:37.694359   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHHostname
	I0916 11:05:37.696453   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:37.696896   36333 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:05:37.696929   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:37.697113   36333 provision.go:143] copyHostCerts
	I0916 11:05:37.697156   36333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem
	I0916 11:05:37.697191   36333 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem, removing ...
	I0916 11:05:37.697214   36333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem
	I0916 11:05:37.697279   36333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem (1123 bytes)
	I0916 11:05:37.697394   36333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem
	I0916 11:05:37.697428   36333 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem, removing ...
	I0916 11:05:37.697437   36333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem
	I0916 11:05:37.697468   36333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem (1679 bytes)
	I0916 11:05:37.697543   36333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem
	I0916 11:05:37.697565   36333 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem, removing ...
	I0916 11:05:37.697574   36333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem
	I0916 11:05:37.697603   36333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem (1082 bytes)
	I0916 11:05:37.697684   36333 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem org=jenkins.multinode-736061 san=[127.0.0.1 192.168.39.32 localhost minikube multinode-736061]
	I0916 11:05:37.755498   36333 provision.go:177] copyRemoteCerts
	I0916 11:05:37.755561   36333 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 11:05:37.755585   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHHostname
	I0916 11:05:37.758016   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:37.758372   36333 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:05:37.758398   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:37.758541   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHPort
	I0916 11:05:37.758722   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:05:37.758852   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHUsername
	I0916 11:05:37.758993   36333 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061/id_rsa Username:docker}
	I0916 11:05:37.844283   36333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 11:05:37.844364   36333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 11:05:37.868824   36333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 11:05:37.868898   36333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0916 11:05:37.893315   36333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 11:05:37.893390   36333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 11:05:37.918259   36333 provision.go:87] duration metric: took 227.922707ms to configureAuth
	I0916 11:05:37.918284   36333 buildroot.go:189] setting minikube options for container-runtime
	I0916 11:05:37.918465   36333 config.go:182] Loaded profile config "multinode-736061": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:05:37.918535   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHHostname
	I0916 11:05:37.921204   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:37.921532   36333 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:05:37.921571   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:37.921782   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHPort
	I0916 11:05:37.921968   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:05:37.922114   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:05:37.922246   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHUsername
	I0916 11:05:37.922383   36333 main.go:141] libmachine: Using SSH client type: native
	I0916 11:05:37.922547   36333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0916 11:05:37.922561   36333 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 11:05:38.158725   36333 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 11:05:38.158757   36333 main.go:141] libmachine: Checking connection to Docker...
	I0916 11:05:38.158768   36333 main.go:141] libmachine: (multinode-736061) Calling .GetURL
	I0916 11:05:38.159927   36333 main.go:141] libmachine: (multinode-736061) DBG | Using libvirt version 6000000
	I0916 11:05:38.162000   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:38.162328   36333 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:05:38.162348   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:38.162524   36333 main.go:141] libmachine: Docker is up and running!
	I0916 11:05:38.162535   36333 main.go:141] libmachine: Reticulating splines...
	I0916 11:05:38.162541   36333 client.go:171] duration metric: took 23.847265768s to LocalClient.Create
	I0916 11:05:38.162563   36333 start.go:167] duration metric: took 23.847331794s to libmachine.API.Create "multinode-736061"
	I0916 11:05:38.162572   36333 start.go:293] postStartSetup for "multinode-736061" (driver="kvm2")
	I0916 11:05:38.162587   36333 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 11:05:38.162609   36333 main.go:141] libmachine: (multinode-736061) Calling .DriverName
	I0916 11:05:38.162811   36333 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 11:05:38.162832   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHHostname
	I0916 11:05:38.165012   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:38.165330   36333 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:05:38.165353   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:38.165518   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHPort
	I0916 11:05:38.165715   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:05:38.165857   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHUsername
	I0916 11:05:38.166003   36333 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061/id_rsa Username:docker}
	I0916 11:05:38.253609   36333 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 11:05:38.257916   36333 command_runner.go:130] > NAME=Buildroot
	I0916 11:05:38.257936   36333 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0916 11:05:38.257941   36333 command_runner.go:130] > ID=buildroot
	I0916 11:05:38.257946   36333 command_runner.go:130] > VERSION_ID=2023.02.9
	I0916 11:05:38.257951   36333 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0916 11:05:38.258214   36333 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 11:05:38.258231   36333 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3851/.minikube/addons for local assets ...
	I0916 11:05:38.258293   36333 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3851/.minikube/files for local assets ...
	I0916 11:05:38.258382   36333 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem -> 112032.pem in /etc/ssl/certs
	I0916 11:05:38.258394   36333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem -> /etc/ssl/certs/112032.pem
	I0916 11:05:38.258480   36333 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 11:05:38.270166   36333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem --> /etc/ssl/certs/112032.pem (1708 bytes)
	I0916 11:05:38.296379   36333 start.go:296] duration metric: took 133.789681ms for postStartSetup
	I0916 11:05:38.296431   36333 main.go:141] libmachine: (multinode-736061) Calling .GetConfigRaw
	I0916 11:05:38.297043   36333 main.go:141] libmachine: (multinode-736061) Calling .GetIP
	I0916 11:05:38.299668   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:38.300016   36333 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:05:38.300042   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:38.300311   36333 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/config.json ...
	I0916 11:05:38.300500   36333 start.go:128] duration metric: took 24.004129957s to createHost
	I0916 11:05:38.300522   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHHostname
	I0916 11:05:38.302695   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:38.302982   36333 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:05:38.303009   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:38.303135   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHPort
	I0916 11:05:38.303315   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:05:38.303448   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:05:38.303555   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHUsername
	I0916 11:05:38.303766   36333 main.go:141] libmachine: Using SSH client type: native
	I0916 11:05:38.303988   36333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0916 11:05:38.304015   36333 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 11:05:38.414103   36333 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726484738.390254572
	
	I0916 11:05:38.414126   36333 fix.go:216] guest clock: 1726484738.390254572
	I0916 11:05:38.414133   36333 fix.go:229] Guest: 2024-09-16 11:05:38.390254572 +0000 UTC Remote: 2024-09-16 11:05:38.300511058 +0000 UTC m=+24.111459581 (delta=89.743514ms)
	I0916 11:05:38.414152   36333 fix.go:200] guest clock delta is within tolerance: 89.743514ms
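The guest-clock check above compares the guest's `date +%s.%N` output against the host wall clock and accepts the skew when the delta is small. A sketch of that comparison using the timestamps from the fix.go lines; the 2-second tolerance is an assumption for illustration, not necessarily the value minikube uses, and float parsing loses sub-microsecond precision:

package main

import (
	"fmt"
	"strconv"
	"time"
)

// guestClockDelta parses the guest's `date +%s.%N` output and returns how far
// ahead of (positive) or behind (negative) the host clock the guest is.
func guestClockDelta(guestOutput string, hostNow time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOutput, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(hostNow), nil
}

func main() {
	// Timestamps taken from the log lines above.
	host := time.Date(2024, 9, 16, 11, 5, 38, 300511058, time.UTC)
	delta, err := guestClockDelta("1726484738.390254572", host)
	if err != nil {
		panic(err)
	}
	tolerance := 2 * time.Second // assumed threshold, for illustration only
	within := delta < tolerance && delta > -tolerance
	fmt.Printf("delta=%s within tolerance=%v\n", delta, within)
}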
	I0916 11:05:38.414156   36333 start.go:83] releasing machines lock for "multinode-736061", held for 24.117861591s
	I0916 11:05:38.414172   36333 main.go:141] libmachine: (multinode-736061) Calling .DriverName
	I0916 11:05:38.414417   36333 main.go:141] libmachine: (multinode-736061) Calling .GetIP
	I0916 11:05:38.416822   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:38.417114   36333 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:05:38.417158   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:38.417310   36333 main.go:141] libmachine: (multinode-736061) Calling .DriverName
	I0916 11:05:38.417820   36333 main.go:141] libmachine: (multinode-736061) Calling .DriverName
	I0916 11:05:38.417984   36333 main.go:141] libmachine: (multinode-736061) Calling .DriverName
	I0916 11:05:38.418077   36333 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 11:05:38.418117   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHHostname
	I0916 11:05:38.418222   36333 ssh_runner.go:195] Run: cat /version.json
	I0916 11:05:38.418262   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHHostname
	I0916 11:05:38.420987   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:38.421076   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:38.421362   36333 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:05:38.421406   36333 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:05:38.421430   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:38.421445   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:38.421558   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHPort
	I0916 11:05:38.421704   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHPort
	I0916 11:05:38.421766   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:05:38.421888   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:05:38.421905   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHUsername
	I0916 11:05:38.422061   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHUsername
	I0916 11:05:38.422072   36333 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061/id_rsa Username:docker}
	I0916 11:05:38.422199   36333 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061/id_rsa Username:docker}
	I0916 11:05:38.498269   36333 command_runner.go:130] > {"iso_version": "v1.34.0-1726415472-19646", "kicbase_version": "v0.0.45-1726358845-19644", "minikube_version": "v1.34.0", "commit": "7dc55c0008a982396eb57879cd4eab23ab96531e"}
	I0916 11:05:38.498534   36333 ssh_runner.go:195] Run: systemctl --version
	I0916 11:05:38.525682   36333 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0916 11:05:38.525778   36333 command_runner.go:130] > systemd 252 (252)
	I0916 11:05:38.525815   36333 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0916 11:05:38.525931   36333 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 11:05:38.683317   36333 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 11:05:38.689797   36333 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0916 11:05:38.690100   36333 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 11:05:38.690164   36333 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 11:05:38.706222   36333 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0916 11:05:38.706278   36333 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0916 11:05:38.706288   36333 start.go:495] detecting cgroup driver to use...
	I0916 11:05:38.706372   36333 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 11:05:38.723218   36333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 11:05:38.737310   36333 docker.go:217] disabling cri-docker service (if available) ...
	I0916 11:05:38.737379   36333 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 11:05:38.751153   36333 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 11:05:38.765082   36333 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 11:05:38.890954   36333 command_runner.go:130] ! Removed "/etc/systemd/system/sockets.target.wants/cri-docker.socket".
	I0916 11:05:38.891373   36333 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 11:05:38.910122   36333 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0916 11:05:39.046826   36333 docker.go:233] disabling docker service ...
	I0916 11:05:39.046929   36333 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 11:05:39.061765   36333 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 11:05:39.074270   36333 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I0916 11:05:39.074777   36333 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 11:05:39.090011   36333 command_runner.go:130] ! Removed "/etc/systemd/system/sockets.target.wants/docker.socket".
	I0916 11:05:39.201766   36333 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 11:05:39.329477   36333 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I0916 11:05:39.329506   36333 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0916 11:05:39.329726   36333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 11:05:39.343852   36333 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 11:05:39.362256   36333 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0916 11:05:39.362530   36333 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 11:05:39.362586   36333 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:05:39.373046   36333 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 11:05:39.373113   36333 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:05:39.383615   36333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:05:39.394098   36333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:05:39.404446   36333 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 11:05:39.415178   36333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:05:39.425488   36333 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:05:39.442620   36333 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:05:39.453139   36333 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 11:05:39.462440   36333 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0916 11:05:39.462485   36333 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0916 11:05:39.462555   36333 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0916 11:05:39.475750   36333 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 11:05:39.485289   36333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:05:39.608605   36333 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 11:05:39.700595   36333 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 11:05:39.700670   36333 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 11:05:39.705387   36333 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0916 11:05:39.705420   36333 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0916 11:05:39.705429   36333 command_runner.go:130] > Device: 0,22	Inode: 693         Links: 1
	I0916 11:05:39.705439   36333 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 11:05:39.705447   36333 command_runner.go:130] > Access: 2024-09-16 11:05:39.669540055 +0000
	I0916 11:05:39.705455   36333 command_runner.go:130] > Modify: 2024-09-16 11:05:39.669540055 +0000
	I0916 11:05:39.705462   36333 command_runner.go:130] > Change: 2024-09-16 11:05:39.669540055 +0000
	I0916 11:05:39.705468   36333 command_runner.go:130] >  Birth: -
	I0916 11:05:39.705523   36333 start.go:563] Will wait 60s for crictl version
	I0916 11:05:39.705595   36333 ssh_runner.go:195] Run: which crictl
	I0916 11:05:39.709396   36333 command_runner.go:130] > /usr/bin/crictl
	I0916 11:05:39.709459   36333 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 11:05:39.755216   36333 command_runner.go:130] > Version:  0.1.0
	I0916 11:05:39.755237   36333 command_runner.go:130] > RuntimeName:  cri-o
	I0916 11:05:39.755241   36333 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0916 11:05:39.755246   36333 command_runner.go:130] > RuntimeApiVersion:  v1
	I0916 11:05:39.755263   36333 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0916 11:05:39.755341   36333 ssh_runner.go:195] Run: crio --version
	I0916 11:05:39.782225   36333 command_runner.go:130] > crio version 1.29.1
	I0916 11:05:39.782248   36333 command_runner.go:130] > Version:        1.29.1
	I0916 11:05:39.782254   36333 command_runner.go:130] > GitCommit:      unknown
	I0916 11:05:39.782258   36333 command_runner.go:130] > GitCommitDate:  unknown
	I0916 11:05:39.782261   36333 command_runner.go:130] > GitTreeState:   clean
	I0916 11:05:39.782267   36333 command_runner.go:130] > BuildDate:      2024-09-15T21:21:56Z
	I0916 11:05:39.782271   36333 command_runner.go:130] > GoVersion:      go1.21.6
	I0916 11:05:39.782275   36333 command_runner.go:130] > Compiler:       gc
	I0916 11:05:39.782281   36333 command_runner.go:130] > Platform:       linux/amd64
	I0916 11:05:39.782287   36333 command_runner.go:130] > Linkmode:       dynamic
	I0916 11:05:39.782301   36333 command_runner.go:130] > BuildTags:      
	I0916 11:05:39.782308   36333 command_runner.go:130] >   containers_image_ostree_stub
	I0916 11:05:39.782315   36333 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0916 11:05:39.782323   36333 command_runner.go:130] >   btrfs_noversion
	I0916 11:05:39.782328   36333 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0916 11:05:39.782336   36333 command_runner.go:130] >   libdm_no_deferred_remove
	I0916 11:05:39.782349   36333 command_runner.go:130] >   seccomp
	I0916 11:05:39.782356   36333 command_runner.go:130] > LDFlags:          unknown
	I0916 11:05:39.782360   36333 command_runner.go:130] > SeccompEnabled:   true
	I0916 11:05:39.782364   36333 command_runner.go:130] > AppArmorEnabled:  false
	I0916 11:05:39.783475   36333 ssh_runner.go:195] Run: crio --version
	I0916 11:05:39.810183   36333 command_runner.go:130] > crio version 1.29.1
	I0916 11:05:39.810214   36333 command_runner.go:130] > Version:        1.29.1
	I0916 11:05:39.810244   36333 command_runner.go:130] > GitCommit:      unknown
	I0916 11:05:39.810252   36333 command_runner.go:130] > GitCommitDate:  unknown
	I0916 11:05:39.810259   36333 command_runner.go:130] > GitTreeState:   clean
	I0916 11:05:39.810274   36333 command_runner.go:130] > BuildDate:      2024-09-15T21:21:56Z
	I0916 11:05:39.810284   36333 command_runner.go:130] > GoVersion:      go1.21.6
	I0916 11:05:39.810291   36333 command_runner.go:130] > Compiler:       gc
	I0916 11:05:39.810300   36333 command_runner.go:130] > Platform:       linux/amd64
	I0916 11:05:39.810310   36333 command_runner.go:130] > Linkmode:       dynamic
	I0916 11:05:39.810320   36333 command_runner.go:130] > BuildTags:      
	I0916 11:05:39.810330   36333 command_runner.go:130] >   containers_image_ostree_stub
	I0916 11:05:39.810338   36333 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0916 11:05:39.810348   36333 command_runner.go:130] >   btrfs_noversion
	I0916 11:05:39.810355   36333 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0916 11:05:39.810366   36333 command_runner.go:130] >   libdm_no_deferred_remove
	I0916 11:05:39.810374   36333 command_runner.go:130] >   seccomp
	I0916 11:05:39.810384   36333 command_runner.go:130] > LDFlags:          unknown
	I0916 11:05:39.810394   36333 command_runner.go:130] > SeccompEnabled:   true
	I0916 11:05:39.810403   36333 command_runner.go:130] > AppArmorEnabled:  false
	I0916 11:05:39.813350   36333 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0916 11:05:39.814716   36333 main.go:141] libmachine: (multinode-736061) Calling .GetIP
	I0916 11:05:39.817197   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:39.817500   36333 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:05:39.817523   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:39.817727   36333 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0916 11:05:39.822032   36333 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 11:05:39.834441   36333 kubeadm.go:883] updating cluster {Name:multinode-736061 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
31.1 ClusterName:multinode-736061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.32 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 11:05:39.834570   36333 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 11:05:39.834625   36333 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:05:39.864349   36333 command_runner.go:130] > {
	I0916 11:05:39.864374   36333 command_runner.go:130] >   "images": [
	I0916 11:05:39.864379   36333 command_runner.go:130] >   ]
	I0916 11:05:39.864396   36333 command_runner.go:130] > }
	I0916 11:05:39.864661   36333 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
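The decision above comes from parsing `sudo crictl images --output json`: an empty "images" array means the expected control-plane images are not present, so minikube falls back to the preload tarball. A sketch of that check against the JSON shape shown later in this log (field coverage is intentionally partial):

package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// crictlImages models just the fields this check needs from
// `crictl images --output json`.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether any listed repo tag contains the wanted image.
func hasImage(raw []byte, want string) (bool, error) {
	var out crictlImages
	if err := json.Unmarshal(raw, &out); err != nil {
		return false, err
	}
	for _, img := range out.Images {
		for _, tag := range img.RepoTags {
			if strings.Contains(tag, want) {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage([]byte(`{"images": []}`), "registry.k8s.io/kube-apiserver:v1.31.1")
	fmt.Println(ok, err) // false <nil> -> fall back to the preload tarball
}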
	I0916 11:05:39.864731   36333 ssh_runner.go:195] Run: which lz4
	I0916 11:05:39.868660   36333 command_runner.go:130] > /usr/bin/lz4
	I0916 11:05:39.868697   36333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0916 11:05:39.868790   36333 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0916 11:05:39.872970   36333 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0916 11:05:39.873016   36333 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0916 11:05:39.873041   36333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0916 11:05:41.190632   36333 crio.go:462] duration metric: took 1.321858637s to copy over tarball
	I0916 11:05:41.190715   36333 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0916 11:05:43.168588   36333 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.977833737s)
	I0916 11:05:43.168613   36333 crio.go:469] duration metric: took 1.977949269s to extract the tarball
	I0916 11:05:43.168621   36333 ssh_runner.go:146] rm: /preloaded.tar.lz4
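The preload step above copies the .tar.lz4 to the guest, unpacks it into /var with lz4 decompression while preserving security xattrs, then removes the tarball. A sketch of those two commands as plain exec calls; in minikube they run over SSH via ssh_runner, and the tar invocation is taken verbatim from the log:

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload unpacks the preloaded-images tarball into /var and then
// deletes it, mirroring the two commands recorded in the log.
func extractPreload(tarball string) error {
	steps := [][]string{
		{"sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", tarball},
		{"sudo", "rm", "-f", tarball},
	}
	for _, s := range steps {
		if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %v\n%s", s, err, out)
		}
	}
	return nil
}

func main() {
	fmt.Println(extractPreload("/preloaded.tar.lz4"))
}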
	I0916 11:05:43.204999   36333 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:05:43.248459   36333 command_runner.go:130] > {
	I0916 11:05:43.248479   36333 command_runner.go:130] >   "images": [
	I0916 11:05:43.248483   36333 command_runner.go:130] >     {
	I0916 11:05:43.248496   36333 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0916 11:05:43.248502   36333 command_runner.go:130] >       "repoTags": [
	I0916 11:05:43.248508   36333 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0916 11:05:43.248511   36333 command_runner.go:130] >       ],
	I0916 11:05:43.248515   36333 command_runner.go:130] >       "repoDigests": [
	I0916 11:05:43.248525   36333 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0916 11:05:43.248534   36333 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0916 11:05:43.248544   36333 command_runner.go:130] >       ],
	I0916 11:05:43.248552   36333 command_runner.go:130] >       "size": "87190579",
	I0916 11:05:43.248556   36333 command_runner.go:130] >       "uid": null,
	I0916 11:05:43.248562   36333 command_runner.go:130] >       "username": "",
	I0916 11:05:43.248570   36333 command_runner.go:130] >       "spec": null,
	I0916 11:05:43.248576   36333 command_runner.go:130] >       "pinned": false
	I0916 11:05:43.248579   36333 command_runner.go:130] >     },
	I0916 11:05:43.248583   36333 command_runner.go:130] >     {
	I0916 11:05:43.248589   36333 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0916 11:05:43.248595   36333 command_runner.go:130] >       "repoTags": [
	I0916 11:05:43.248600   36333 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0916 11:05:43.248603   36333 command_runner.go:130] >       ],
	I0916 11:05:43.248608   36333 command_runner.go:130] >       "repoDigests": [
	I0916 11:05:43.248615   36333 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0916 11:05:43.248624   36333 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0916 11:05:43.248628   36333 command_runner.go:130] >       ],
	I0916 11:05:43.248639   36333 command_runner.go:130] >       "size": "31470524",
	I0916 11:05:43.248645   36333 command_runner.go:130] >       "uid": null,
	I0916 11:05:43.248649   36333 command_runner.go:130] >       "username": "",
	I0916 11:05:43.248655   36333 command_runner.go:130] >       "spec": null,
	I0916 11:05:43.248659   36333 command_runner.go:130] >       "pinned": false
	I0916 11:05:43.248664   36333 command_runner.go:130] >     },
	I0916 11:05:43.248667   36333 command_runner.go:130] >     {
	I0916 11:05:43.248678   36333 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0916 11:05:43.248683   36333 command_runner.go:130] >       "repoTags": [
	I0916 11:05:43.248690   36333 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0916 11:05:43.248694   36333 command_runner.go:130] >       ],
	I0916 11:05:43.248698   36333 command_runner.go:130] >       "repoDigests": [
	I0916 11:05:43.248708   36333 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0916 11:05:43.248715   36333 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0916 11:05:43.248721   36333 command_runner.go:130] >       ],
	I0916 11:05:43.248725   36333 command_runner.go:130] >       "size": "63273227",
	I0916 11:05:43.248729   36333 command_runner.go:130] >       "uid": null,
	I0916 11:05:43.248733   36333 command_runner.go:130] >       "username": "nonroot",
	I0916 11:05:43.248739   36333 command_runner.go:130] >       "spec": null,
	I0916 11:05:43.248743   36333 command_runner.go:130] >       "pinned": false
	I0916 11:05:43.248748   36333 command_runner.go:130] >     },
	I0916 11:05:43.248751   36333 command_runner.go:130] >     {
	I0916 11:05:43.248759   36333 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0916 11:05:43.248764   36333 command_runner.go:130] >       "repoTags": [
	I0916 11:05:43.248770   36333 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0916 11:05:43.248776   36333 command_runner.go:130] >       ],
	I0916 11:05:43.248782   36333 command_runner.go:130] >       "repoDigests": [
	I0916 11:05:43.248795   36333 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0916 11:05:43.248811   36333 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0916 11:05:43.248819   36333 command_runner.go:130] >       ],
	I0916 11:05:43.248826   36333 command_runner.go:130] >       "size": "149009664",
	I0916 11:05:43.248834   36333 command_runner.go:130] >       "uid": {
	I0916 11:05:43.248841   36333 command_runner.go:130] >         "value": "0"
	I0916 11:05:43.248849   36333 command_runner.go:130] >       },
	I0916 11:05:43.248855   36333 command_runner.go:130] >       "username": "",
	I0916 11:05:43.248864   36333 command_runner.go:130] >       "spec": null,
	I0916 11:05:43.248870   36333 command_runner.go:130] >       "pinned": false
	I0916 11:05:43.248877   36333 command_runner.go:130] >     },
	I0916 11:05:43.248883   36333 command_runner.go:130] >     {
	I0916 11:05:43.248894   36333 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0916 11:05:43.248902   36333 command_runner.go:130] >       "repoTags": [
	I0916 11:05:43.248912   36333 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0916 11:05:43.248917   36333 command_runner.go:130] >       ],
	I0916 11:05:43.248921   36333 command_runner.go:130] >       "repoDigests": [
	I0916 11:05:43.248928   36333 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0916 11:05:43.248937   36333 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0916 11:05:43.248941   36333 command_runner.go:130] >       ],
	I0916 11:05:43.248945   36333 command_runner.go:130] >       "size": "95237600",
	I0916 11:05:43.248949   36333 command_runner.go:130] >       "uid": {
	I0916 11:05:43.248953   36333 command_runner.go:130] >         "value": "0"
	I0916 11:05:43.248956   36333 command_runner.go:130] >       },
	I0916 11:05:43.248961   36333 command_runner.go:130] >       "username": "",
	I0916 11:05:43.248965   36333 command_runner.go:130] >       "spec": null,
	I0916 11:05:43.248969   36333 command_runner.go:130] >       "pinned": false
	I0916 11:05:43.248974   36333 command_runner.go:130] >     },
	I0916 11:05:43.248977   36333 command_runner.go:130] >     {
	I0916 11:05:43.248983   36333 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0916 11:05:43.248990   36333 command_runner.go:130] >       "repoTags": [
	I0916 11:05:43.248995   36333 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0916 11:05:43.248998   36333 command_runner.go:130] >       ],
	I0916 11:05:43.249002   36333 command_runner.go:130] >       "repoDigests": [
	I0916 11:05:43.249010   36333 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0916 11:05:43.249019   36333 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0916 11:05:43.249023   36333 command_runner.go:130] >       ],
	I0916 11:05:43.249028   36333 command_runner.go:130] >       "size": "89437508",
	I0916 11:05:43.249032   36333 command_runner.go:130] >       "uid": {
	I0916 11:05:43.249036   36333 command_runner.go:130] >         "value": "0"
	I0916 11:05:43.249041   36333 command_runner.go:130] >       },
	I0916 11:05:43.249048   36333 command_runner.go:130] >       "username": "",
	I0916 11:05:43.249054   36333 command_runner.go:130] >       "spec": null,
	I0916 11:05:43.249064   36333 command_runner.go:130] >       "pinned": false
	I0916 11:05:43.249069   36333 command_runner.go:130] >     },
	I0916 11:05:43.249076   36333 command_runner.go:130] >     {
	I0916 11:05:43.249087   36333 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0916 11:05:43.249096   36333 command_runner.go:130] >       "repoTags": [
	I0916 11:05:43.249104   36333 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0916 11:05:43.249113   36333 command_runner.go:130] >       ],
	I0916 11:05:43.249119   36333 command_runner.go:130] >       "repoDigests": [
	I0916 11:05:43.249151   36333 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0916 11:05:43.249166   36333 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0916 11:05:43.249171   36333 command_runner.go:130] >       ],
	I0916 11:05:43.249176   36333 command_runner.go:130] >       "size": "92733849",
	I0916 11:05:43.249181   36333 command_runner.go:130] >       "uid": null,
	I0916 11:05:43.249188   36333 command_runner.go:130] >       "username": "",
	I0916 11:05:43.249194   36333 command_runner.go:130] >       "spec": null,
	I0916 11:05:43.249202   36333 command_runner.go:130] >       "pinned": false
	I0916 11:05:43.249209   36333 command_runner.go:130] >     },
	I0916 11:05:43.249214   36333 command_runner.go:130] >     {
	I0916 11:05:43.249227   36333 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0916 11:05:43.249233   36333 command_runner.go:130] >       "repoTags": [
	I0916 11:05:43.249243   36333 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0916 11:05:43.249249   36333 command_runner.go:130] >       ],
	I0916 11:05:43.249257   36333 command_runner.go:130] >       "repoDigests": [
	I0916 11:05:43.249277   36333 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0916 11:05:43.249291   36333 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0916 11:05:43.249300   36333 command_runner.go:130] >       ],
	I0916 11:05:43.249310   36333 command_runner.go:130] >       "size": "68420934",
	I0916 11:05:43.249315   36333 command_runner.go:130] >       "uid": {
	I0916 11:05:43.249325   36333 command_runner.go:130] >         "value": "0"
	I0916 11:05:43.249330   36333 command_runner.go:130] >       },
	I0916 11:05:43.249337   36333 command_runner.go:130] >       "username": "",
	I0916 11:05:43.249346   36333 command_runner.go:130] >       "spec": null,
	I0916 11:05:43.249352   36333 command_runner.go:130] >       "pinned": false
	I0916 11:05:43.249359   36333 command_runner.go:130] >     },
	I0916 11:05:43.249364   36333 command_runner.go:130] >     {
	I0916 11:05:43.249377   36333 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0916 11:05:43.249388   36333 command_runner.go:130] >       "repoTags": [
	I0916 11:05:43.249397   36333 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0916 11:05:43.249405   36333 command_runner.go:130] >       ],
	I0916 11:05:43.249411   36333 command_runner.go:130] >       "repoDigests": [
	I0916 11:05:43.249420   36333 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0916 11:05:43.249427   36333 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0916 11:05:43.249433   36333 command_runner.go:130] >       ],
	I0916 11:05:43.249436   36333 command_runner.go:130] >       "size": "742080",
	I0916 11:05:43.249440   36333 command_runner.go:130] >       "uid": {
	I0916 11:05:43.249445   36333 command_runner.go:130] >         "value": "65535"
	I0916 11:05:43.249448   36333 command_runner.go:130] >       },
	I0916 11:05:43.249452   36333 command_runner.go:130] >       "username": "",
	I0916 11:05:43.249456   36333 command_runner.go:130] >       "spec": null,
	I0916 11:05:43.249460   36333 command_runner.go:130] >       "pinned": true
	I0916 11:05:43.249463   36333 command_runner.go:130] >     }
	I0916 11:05:43.249466   36333 command_runner.go:130] >   ]
	I0916 11:05:43.249469   36333 command_runner.go:130] > }
	I0916 11:05:43.249620   36333 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 11:05:43.249638   36333 cache_images.go:84] Images are preloaded, skipping loading
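For reference, the preload check above can be reproduced by hand on the guest. A minimal sketch, assuming jq is available inside the node (it is not shown in the log above); it feeds the same `crictl images --output json` payload through jq and prints only the repoTags:

	sudo crictl images --output json | jq -r '.images[].repoTags[]'

The tags printed should match the repoTags fields in the JSON dump above (kindnetd, storage-provisioner, coredns, etcd, the kube-* components, and pause).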
	I0916 11:05:43.249647   36333 kubeadm.go:934] updating node { 192.168.39.32 8443 v1.31.1 crio true true} ...
	I0916 11:05:43.249752   36333 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-736061 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.32
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-736061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
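The kubelet unit override rendered above can be inspected directly on the node once minikube has written it out. A minimal sketch, assuming a shell inside the guest (for example via `minikube ssh -p multinode-736061`); `systemctl cat` is the standard way to show a unit together with its drop-ins, including the ExecStart line with --hostname-override and --node-ip:

	sudo systemctl cat kubelet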
	I0916 11:05:43.249832   36333 ssh_runner.go:195] Run: crio config
	I0916 11:05:43.282902   36333 command_runner.go:130] ! time="2024-09-16 11:05:43.265188750Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0916 11:05:43.288223   36333 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0916 11:05:43.294413   36333 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0916 11:05:43.294444   36333 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0916 11:05:43.294454   36333 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0916 11:05:43.294460   36333 command_runner.go:130] > #
	I0916 11:05:43.294470   36333 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0916 11:05:43.294481   36333 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0916 11:05:43.294494   36333 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0916 11:05:43.294505   36333 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0916 11:05:43.294509   36333 command_runner.go:130] > # reload'.
	I0916 11:05:43.294515   36333 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0916 11:05:43.294523   36333 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0916 11:05:43.294557   36333 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0916 11:05:43.294566   36333 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0916 11:05:43.294570   36333 command_runner.go:130] > [crio]
	I0916 11:05:43.294576   36333 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0916 11:05:43.294584   36333 command_runner.go:130] > # containers images, in this directory.
	I0916 11:05:43.294589   36333 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0916 11:05:43.294599   36333 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0916 11:05:43.294604   36333 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0916 11:05:43.294615   36333 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0916 11:05:43.294620   36333 command_runner.go:130] > # imagestore = ""
	I0916 11:05:43.294631   36333 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0916 11:05:43.294637   36333 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0916 11:05:43.294643   36333 command_runner.go:130] > storage_driver = "overlay"
	I0916 11:05:43.294649   36333 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0916 11:05:43.294657   36333 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0916 11:05:43.294661   36333 command_runner.go:130] > storage_option = [
	I0916 11:05:43.294667   36333 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0916 11:05:43.294671   36333 command_runner.go:130] > ]
	I0916 11:05:43.294677   36333 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0916 11:05:43.294685   36333 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0916 11:05:43.294690   36333 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0916 11:05:43.294697   36333 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0916 11:05:43.294703   36333 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0916 11:05:43.294709   36333 command_runner.go:130] > # always happen on a node reboot
	I0916 11:05:43.294713   36333 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0916 11:05:43.294724   36333 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0916 11:05:43.294734   36333 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0916 11:05:43.294757   36333 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0916 11:05:43.294771   36333 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0916 11:05:43.294782   36333 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0916 11:05:43.294798   36333 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0916 11:05:43.294807   36333 command_runner.go:130] > # internal_wipe = true
	I0916 11:05:43.294817   36333 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0916 11:05:43.294831   36333 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0916 11:05:43.294838   36333 command_runner.go:130] > # internal_repair = false
	I0916 11:05:43.294844   36333 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0916 11:05:43.294852   36333 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0916 11:05:43.294857   36333 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0916 11:05:43.294868   36333 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0916 11:05:43.294876   36333 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0916 11:05:43.294880   36333 command_runner.go:130] > [crio.api]
	I0916 11:05:43.294885   36333 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0916 11:05:43.294890   36333 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0916 11:05:43.294895   36333 command_runner.go:130] > # IP address on which the stream server will listen.
	I0916 11:05:43.294902   36333 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0916 11:05:43.294908   36333 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0916 11:05:43.294915   36333 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0916 11:05:43.294919   36333 command_runner.go:130] > # stream_port = "0"
	I0916 11:05:43.294927   36333 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0916 11:05:43.294931   36333 command_runner.go:130] > # stream_enable_tls = false
	I0916 11:05:43.294939   36333 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0916 11:05:43.294943   36333 command_runner.go:130] > # stream_idle_timeout = ""
	I0916 11:05:43.294951   36333 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0916 11:05:43.294957   36333 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0916 11:05:43.294963   36333 command_runner.go:130] > # minutes.
	I0916 11:05:43.294966   36333 command_runner.go:130] > # stream_tls_cert = ""
	I0916 11:05:43.294974   36333 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0916 11:05:43.294979   36333 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0916 11:05:43.294988   36333 command_runner.go:130] > # stream_tls_key = ""
	I0916 11:05:43.294994   36333 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0916 11:05:43.294999   36333 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0916 11:05:43.295023   36333 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0916 11:05:43.295029   36333 command_runner.go:130] > # stream_tls_ca = ""
	I0916 11:05:43.295036   36333 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0916 11:05:43.295042   36333 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0916 11:05:43.295049   36333 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0916 11:05:43.295060   36333 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0916 11:05:43.295066   36333 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0916 11:05:43.295074   36333 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0916 11:05:43.295078   36333 command_runner.go:130] > [crio.runtime]
	I0916 11:05:43.295083   36333 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0916 11:05:43.295089   36333 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0916 11:05:43.295093   36333 command_runner.go:130] > # "nofile=1024:2048"
	I0916 11:05:43.295099   36333 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0916 11:05:43.295105   36333 command_runner.go:130] > # default_ulimits = [
	I0916 11:05:43.295108   36333 command_runner.go:130] > # ]
	I0916 11:05:43.295113   36333 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0916 11:05:43.295119   36333 command_runner.go:130] > # no_pivot = false
	I0916 11:05:43.295125   36333 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0916 11:05:43.295131   36333 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0916 11:05:43.295136   36333 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0916 11:05:43.295143   36333 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0916 11:05:43.295148   36333 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0916 11:05:43.295156   36333 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0916 11:05:43.295161   36333 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0916 11:05:43.295165   36333 command_runner.go:130] > # Cgroup setting for conmon
	I0916 11:05:43.295173   36333 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0916 11:05:43.295179   36333 command_runner.go:130] > conmon_cgroup = "pod"
	I0916 11:05:43.295184   36333 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0916 11:05:43.295191   36333 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0916 11:05:43.295197   36333 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0916 11:05:43.295202   36333 command_runner.go:130] > conmon_env = [
	I0916 11:05:43.295207   36333 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0916 11:05:43.295213   36333 command_runner.go:130] > ]
	I0916 11:05:43.295218   36333 command_runner.go:130] > # Additional environment variables to set for all the
	I0916 11:05:43.295224   36333 command_runner.go:130] > # containers. These are overridden if set in the
	I0916 11:05:43.295230   36333 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0916 11:05:43.295235   36333 command_runner.go:130] > # default_env = [
	I0916 11:05:43.295239   36333 command_runner.go:130] > # ]
	I0916 11:05:43.295248   36333 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0916 11:05:43.295257   36333 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0916 11:05:43.295260   36333 command_runner.go:130] > # selinux = false
	I0916 11:05:43.295267   36333 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0916 11:05:43.295274   36333 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0916 11:05:43.295279   36333 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0916 11:05:43.295284   36333 command_runner.go:130] > # seccomp_profile = ""
	I0916 11:05:43.295289   36333 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0916 11:05:43.295297   36333 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0916 11:05:43.295305   36333 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0916 11:05:43.295311   36333 command_runner.go:130] > # which might increase security.
	I0916 11:05:43.295315   36333 command_runner.go:130] > # This option is currently deprecated,
	I0916 11:05:43.295322   36333 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0916 11:05:43.295327   36333 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0916 11:05:43.295333   36333 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0916 11:05:43.295341   36333 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0916 11:05:43.295347   36333 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0916 11:05:43.295354   36333 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0916 11:05:43.295359   36333 command_runner.go:130] > # This option supports live configuration reload.
	I0916 11:05:43.295364   36333 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0916 11:05:43.295369   36333 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0916 11:05:43.295375   36333 command_runner.go:130] > # the cgroup blockio controller.
	I0916 11:05:43.295379   36333 command_runner.go:130] > # blockio_config_file = ""
	I0916 11:05:43.295385   36333 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0916 11:05:43.295390   36333 command_runner.go:130] > # blockio parameters.
	I0916 11:05:43.295394   36333 command_runner.go:130] > # blockio_reload = false
	I0916 11:05:43.295400   36333 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0916 11:05:43.295406   36333 command_runner.go:130] > # irqbalance daemon.
	I0916 11:05:43.295410   36333 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0916 11:05:43.295416   36333 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0916 11:05:43.295423   36333 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0916 11:05:43.295429   36333 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0916 11:05:43.295436   36333 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0916 11:05:43.295445   36333 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0916 11:05:43.295452   36333 command_runner.go:130] > # This option supports live configuration reload.
	I0916 11:05:43.295456   36333 command_runner.go:130] > # rdt_config_file = ""
	I0916 11:05:43.295463   36333 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0916 11:05:43.295467   36333 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0916 11:05:43.295499   36333 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0916 11:05:43.295507   36333 command_runner.go:130] > # separate_pull_cgroup = ""
	I0916 11:05:43.295513   36333 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0916 11:05:43.295518   36333 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0916 11:05:43.295522   36333 command_runner.go:130] > # will be added.
	I0916 11:05:43.295526   36333 command_runner.go:130] > # default_capabilities = [
	I0916 11:05:43.295530   36333 command_runner.go:130] > # 	"CHOWN",
	I0916 11:05:43.295534   36333 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0916 11:05:43.295538   36333 command_runner.go:130] > # 	"FSETID",
	I0916 11:05:43.295544   36333 command_runner.go:130] > # 	"FOWNER",
	I0916 11:05:43.295547   36333 command_runner.go:130] > # 	"SETGID",
	I0916 11:05:43.295550   36333 command_runner.go:130] > # 	"SETUID",
	I0916 11:05:43.295554   36333 command_runner.go:130] > # 	"SETPCAP",
	I0916 11:05:43.295558   36333 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0916 11:05:43.295561   36333 command_runner.go:130] > # 	"KILL",
	I0916 11:05:43.295564   36333 command_runner.go:130] > # ]
	I0916 11:05:43.295573   36333 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0916 11:05:43.295582   36333 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0916 11:05:43.295586   36333 command_runner.go:130] > # add_inheritable_capabilities = false
	I0916 11:05:43.295594   36333 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0916 11:05:43.295600   36333 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0916 11:05:43.295606   36333 command_runner.go:130] > default_sysctls = [
	I0916 11:05:43.295610   36333 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0916 11:05:43.295615   36333 command_runner.go:130] > ]
	I0916 11:05:43.295621   36333 command_runner.go:130] > # List of devices on the host that a
	I0916 11:05:43.295627   36333 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0916 11:05:43.295633   36333 command_runner.go:130] > # allowed_devices = [
	I0916 11:05:43.295637   36333 command_runner.go:130] > # 	"/dev/fuse",
	I0916 11:05:43.295644   36333 command_runner.go:130] > # ]
	I0916 11:05:43.295652   36333 command_runner.go:130] > # List of additional devices. specified as
	I0916 11:05:43.295658   36333 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0916 11:05:43.295666   36333 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0916 11:05:43.295671   36333 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0916 11:05:43.295677   36333 command_runner.go:130] > # additional_devices = [
	I0916 11:05:43.295681   36333 command_runner.go:130] > # ]
	I0916 11:05:43.295685   36333 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0916 11:05:43.295691   36333 command_runner.go:130] > # cdi_spec_dirs = [
	I0916 11:05:43.295694   36333 command_runner.go:130] > # 	"/etc/cdi",
	I0916 11:05:43.295698   36333 command_runner.go:130] > # 	"/var/run/cdi",
	I0916 11:05:43.295701   36333 command_runner.go:130] > # ]
	I0916 11:05:43.295709   36333 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0916 11:05:43.295714   36333 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0916 11:05:43.295720   36333 command_runner.go:130] > # Defaults to false.
	I0916 11:05:43.295724   36333 command_runner.go:130] > # device_ownership_from_security_context = false
	I0916 11:05:43.295732   36333 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0916 11:05:43.295745   36333 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0916 11:05:43.295754   36333 command_runner.go:130] > # hooks_dir = [
	I0916 11:05:43.295762   36333 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0916 11:05:43.295771   36333 command_runner.go:130] > # ]
	I0916 11:05:43.295781   36333 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0916 11:05:43.295793   36333 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0916 11:05:43.295804   36333 command_runner.go:130] > # its default mounts from the following two files:
	I0916 11:05:43.295809   36333 command_runner.go:130] > #
	I0916 11:05:43.295816   36333 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0916 11:05:43.295825   36333 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0916 11:05:43.295830   36333 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0916 11:05:43.295836   36333 command_runner.go:130] > #
	I0916 11:05:43.295841   36333 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0916 11:05:43.295847   36333 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0916 11:05:43.295855   36333 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0916 11:05:43.295864   36333 command_runner.go:130] > #      only add mounts it finds in this file.
	I0916 11:05:43.295874   36333 command_runner.go:130] > #
	I0916 11:05:43.295881   36333 command_runner.go:130] > # default_mounts_file = ""
	I0916 11:05:43.295886   36333 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0916 11:05:43.295893   36333 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0916 11:05:43.295898   36333 command_runner.go:130] > pids_limit = 1024
	I0916 11:05:43.295904   36333 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0916 11:05:43.295912   36333 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0916 11:05:43.295918   36333 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0916 11:05:43.295928   36333 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0916 11:05:43.295932   36333 command_runner.go:130] > # log_size_max = -1
	I0916 11:05:43.295941   36333 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0916 11:05:43.295945   36333 command_runner.go:130] > # log_to_journald = false
	I0916 11:05:43.295953   36333 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0916 11:05:43.295957   36333 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0916 11:05:43.295962   36333 command_runner.go:130] > # Path to directory for container attach sockets.
	I0916 11:05:43.295969   36333 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0916 11:05:43.295975   36333 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0916 11:05:43.295980   36333 command_runner.go:130] > # bind_mount_prefix = ""
	I0916 11:05:43.295985   36333 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0916 11:05:43.295989   36333 command_runner.go:130] > # read_only = false
	I0916 11:05:43.295995   36333 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0916 11:05:43.296001   36333 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0916 11:05:43.296007   36333 command_runner.go:130] > # live configuration reload.
	I0916 11:05:43.296013   36333 command_runner.go:130] > # log_level = "info"
	I0916 11:05:43.296018   36333 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0916 11:05:43.296023   36333 command_runner.go:130] > # This option supports live configuration reload.
	I0916 11:05:43.296026   36333 command_runner.go:130] > # log_filter = ""
	I0916 11:05:43.296032   36333 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0916 11:05:43.296042   36333 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0916 11:05:43.296045   36333 command_runner.go:130] > # separated by comma.
	I0916 11:05:43.296052   36333 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0916 11:05:43.296058   36333 command_runner.go:130] > # uid_mappings = ""
	I0916 11:05:43.296064   36333 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0916 11:05:43.296074   36333 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0916 11:05:43.296080   36333 command_runner.go:130] > # separated by comma.
	I0916 11:05:43.296088   36333 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0916 11:05:43.296094   36333 command_runner.go:130] > # gid_mappings = ""
	I0916 11:05:43.296100   36333 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0916 11:05:43.296108   36333 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0916 11:05:43.296116   36333 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0916 11:05:43.296125   36333 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0916 11:05:43.296129   36333 command_runner.go:130] > # minimum_mappable_uid = -1
	I0916 11:05:43.296134   36333 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0916 11:05:43.296140   36333 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0916 11:05:43.296148   36333 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0916 11:05:43.296156   36333 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0916 11:05:43.296162   36333 command_runner.go:130] > # minimum_mappable_gid = -1
	I0916 11:05:43.296168   36333 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0916 11:05:43.296176   36333 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0916 11:05:43.296181   36333 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0916 11:05:43.296186   36333 command_runner.go:130] > # ctr_stop_timeout = 30
	I0916 11:05:43.296191   36333 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0916 11:05:43.296199   36333 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0916 11:05:43.296203   36333 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0916 11:05:43.296210   36333 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0916 11:05:43.296214   36333 command_runner.go:130] > drop_infra_ctr = false
	I0916 11:05:43.296222   36333 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0916 11:05:43.296227   36333 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0916 11:05:43.296241   36333 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0916 11:05:43.296247   36333 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0916 11:05:43.296254   36333 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0916 11:05:43.296260   36333 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0916 11:05:43.296265   36333 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0916 11:05:43.296274   36333 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0916 11:05:43.296278   36333 command_runner.go:130] > # shared_cpuset = ""
	I0916 11:05:43.296285   36333 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0916 11:05:43.296294   36333 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0916 11:05:43.296300   36333 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0916 11:05:43.296307   36333 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0916 11:05:43.296313   36333 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0916 11:05:43.296318   36333 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0916 11:05:43.296326   36333 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0916 11:05:43.296330   36333 command_runner.go:130] > # enable_criu_support = false
	I0916 11:05:43.296336   36333 command_runner.go:130] > # Enable/disable the generation of the container,
	I0916 11:05:43.296356   36333 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0916 11:05:43.296366   36333 command_runner.go:130] > # enable_pod_events = false
	I0916 11:05:43.296372   36333 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0916 11:05:43.296380   36333 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0916 11:05:43.296386   36333 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0916 11:05:43.296390   36333 command_runner.go:130] > # default_runtime = "runc"
	I0916 11:05:43.296396   36333 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0916 11:05:43.296405   36333 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0916 11:05:43.296416   36333 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0916 11:05:43.296421   36333 command_runner.go:130] > # creation as a file is not desired either.
	I0916 11:05:43.296431   36333 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0916 11:05:43.296436   36333 command_runner.go:130] > # the hostname is being managed dynamically.
	I0916 11:05:43.296441   36333 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0916 11:05:43.296444   36333 command_runner.go:130] > # ]
	I0916 11:05:43.296450   36333 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0916 11:05:43.296458   36333 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0916 11:05:43.296464   36333 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0916 11:05:43.296471   36333 command_runner.go:130] > # Each entry in the table should follow the format:
	I0916 11:05:43.296474   36333 command_runner.go:130] > #
	I0916 11:05:43.296481   36333 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0916 11:05:43.296486   36333 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0916 11:05:43.296508   36333 command_runner.go:130] > # runtime_type = "oci"
	I0916 11:05:43.296513   36333 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0916 11:05:43.296517   36333 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0916 11:05:43.296522   36333 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0916 11:05:43.296526   36333 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0916 11:05:43.296530   36333 command_runner.go:130] > # monitor_env = []
	I0916 11:05:43.296534   36333 command_runner.go:130] > # privileged_without_host_devices = false
	I0916 11:05:43.296538   36333 command_runner.go:130] > # allowed_annotations = []
	I0916 11:05:43.296543   36333 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0916 11:05:43.296546   36333 command_runner.go:130] > # Where:
	I0916 11:05:43.296550   36333 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0916 11:05:43.296556   36333 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0916 11:05:43.296561   36333 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0916 11:05:43.296567   36333 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0916 11:05:43.296570   36333 command_runner.go:130] > #   in $PATH.
	I0916 11:05:43.296576   36333 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0916 11:05:43.296580   36333 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0916 11:05:43.296586   36333 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0916 11:05:43.296589   36333 command_runner.go:130] > #   state.
	I0916 11:05:43.296595   36333 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0916 11:05:43.296600   36333 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0916 11:05:43.296606   36333 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0916 11:05:43.296611   36333 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0916 11:05:43.296618   36333 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0916 11:05:43.296624   36333 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0916 11:05:43.296628   36333 command_runner.go:130] > #   The currently recognized values are:
	I0916 11:05:43.296633   36333 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0916 11:05:43.296640   36333 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0916 11:05:43.296645   36333 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0916 11:05:43.296650   36333 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0916 11:05:43.296657   36333 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0916 11:05:43.296662   36333 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0916 11:05:43.296668   36333 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0916 11:05:43.296673   36333 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0916 11:05:43.296683   36333 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0916 11:05:43.296689   36333 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0916 11:05:43.296696   36333 command_runner.go:130] > #   deprecated option "conmon".
	I0916 11:05:43.296703   36333 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0916 11:05:43.296711   36333 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0916 11:05:43.296717   36333 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0916 11:05:43.296724   36333 command_runner.go:130] > #   should be moved to the container's cgroup
	I0916 11:05:43.296731   36333 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the montior.
	I0916 11:05:43.296741   36333 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0916 11:05:43.296751   36333 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0916 11:05:43.296762   36333 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0916 11:05:43.296770   36333 command_runner.go:130] > #
	I0916 11:05:43.296776   36333 command_runner.go:130] > # Using the seccomp notifier feature:
	I0916 11:05:43.296784   36333 command_runner.go:130] > #
	I0916 11:05:43.296793   36333 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0916 11:05:43.296805   36333 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0916 11:05:43.296812   36333 command_runner.go:130] > #
	I0916 11:05:43.296819   36333 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0916 11:05:43.296827   36333 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0916 11:05:43.296830   36333 command_runner.go:130] > #
	I0916 11:05:43.296836   36333 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0916 11:05:43.296842   36333 command_runner.go:130] > # feature.
	I0916 11:05:43.296846   36333 command_runner.go:130] > #
	I0916 11:05:43.296851   36333 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0916 11:05:43.296860   36333 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0916 11:05:43.296869   36333 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0916 11:05:43.296877   36333 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0916 11:05:43.296883   36333 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0916 11:05:43.296889   36333 command_runner.go:130] > #
	I0916 11:05:43.296894   36333 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0916 11:05:43.296902   36333 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0916 11:05:43.296905   36333 command_runner.go:130] > #
	I0916 11:05:43.296911   36333 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0916 11:05:43.296917   36333 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0916 11:05:43.296920   36333 command_runner.go:130] > #
	I0916 11:05:43.296926   36333 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0916 11:05:43.296934   36333 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0916 11:05:43.296941   36333 command_runner.go:130] > # limitation.
	I0916 11:05:43.296949   36333 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0916 11:05:43.296954   36333 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0916 11:05:43.296959   36333 command_runner.go:130] > runtime_type = "oci"
	I0916 11:05:43.296964   36333 command_runner.go:130] > runtime_root = "/run/runc"
	I0916 11:05:43.296968   36333 command_runner.go:130] > runtime_config_path = ""
	I0916 11:05:43.296972   36333 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0916 11:05:43.296976   36333 command_runner.go:130] > monitor_cgroup = "pod"
	I0916 11:05:43.296980   36333 command_runner.go:130] > monitor_exec_cgroup = ""
	I0916 11:05:43.296983   36333 command_runner.go:130] > monitor_env = [
	I0916 11:05:43.296989   36333 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0916 11:05:43.296994   36333 command_runner.go:130] > ]
	I0916 11:05:43.296999   36333 command_runner.go:130] > privileged_without_host_devices = false
	I0916 11:05:43.297008   36333 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0916 11:05:43.297013   36333 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0916 11:05:43.297020   36333 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0916 11:05:43.297028   36333 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0916 11:05:43.297037   36333 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0916 11:05:43.297043   36333 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0916 11:05:43.297054   36333 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0916 11:05:43.297064   36333 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0916 11:05:43.297069   36333 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0916 11:05:43.297078   36333 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0916 11:05:43.297082   36333 command_runner.go:130] > # Example:
	I0916 11:05:43.297086   36333 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0916 11:05:43.297091   36333 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0916 11:05:43.297099   36333 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0916 11:05:43.297104   36333 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0916 11:05:43.297109   36333 command_runner.go:130] > # cpuset = 0
	I0916 11:05:43.297113   36333 command_runner.go:130] > # cpushares = "0-1"
	I0916 11:05:43.297117   36333 command_runner.go:130] > # Where:
	I0916 11:05:43.297122   36333 command_runner.go:130] > # The workload name is workload-type.
	I0916 11:05:43.297148   36333 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0916 11:05:43.297158   36333 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0916 11:05:43.297163   36333 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0916 11:05:43.297173   36333 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0916 11:05:43.297179   36333 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
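For context, and purely as an illustration (not part of the logged config): the workloads table above is driven entirely by pod annotations. A minimal sketch of a drop-in defining a hypothetical workload, and the annotations a pod would carry to opt in, might look like the following; the workload name, annotation keys and cpuset value are assumptions, not taken from this run.

	# Sketch only -- workload name, annotation keys and values are assumptions, not from this run.
	sudo tee /etc/crio/crio.conf.d/20-workload.conf <<'EOF'
	[crio.runtime.workloads.throttled]
	activation_annotation = "io.crio/throttled"
	annotation_prefix = "io.crio.throttled"
	[crio.runtime.workloads.throttled.resources]
	cpuset = "0-1"
	EOF
	sudo systemctl restart crio
	# A pod opts in with the activation annotation (key only, value ignored) and may override
	# per container following the example format above, e.g.
	#   io.crio/throttled: ""
	#   io.crio.throttled/<container_name>: '{"cpuset": "0"}'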
	I0916 11:05:43.297186   36333 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0916 11:05:43.297195   36333 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0916 11:05:43.297201   36333 command_runner.go:130] > # Default value is set to true
	I0916 11:05:43.297206   36333 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0916 11:05:43.297213   36333 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0916 11:05:43.297218   36333 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0916 11:05:43.297225   36333 command_runner.go:130] > # Default value is set to 'false'
	I0916 11:05:43.297229   36333 command_runner.go:130] > # disable_hostport_mapping = false
	I0916 11:05:43.297235   36333 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0916 11:05:43.297240   36333 command_runner.go:130] > #
	I0916 11:05:43.297246   36333 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0916 11:05:43.297252   36333 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0916 11:05:43.297260   36333 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0916 11:05:43.297266   36333 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0916 11:05:43.297273   36333 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0916 11:05:43.297277   36333 command_runner.go:130] > [crio.image]
	I0916 11:05:43.297285   36333 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0916 11:05:43.297290   36333 command_runner.go:130] > # default_transport = "docker://"
	I0916 11:05:43.297297   36333 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0916 11:05:43.297303   36333 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0916 11:05:43.297309   36333 command_runner.go:130] > # global_auth_file = ""
	I0916 11:05:43.297315   36333 command_runner.go:130] > # The image used to instantiate infra containers.
	I0916 11:05:43.297323   36333 command_runner.go:130] > # This option supports live configuration reload.
	I0916 11:05:43.297328   36333 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0916 11:05:43.297336   36333 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0916 11:05:43.297342   36333 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0916 11:05:43.297349   36333 command_runner.go:130] > # This option supports live configuration reload.
	I0916 11:05:43.297353   36333 command_runner.go:130] > # pause_image_auth_file = ""
	I0916 11:05:43.297361   36333 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0916 11:05:43.297367   36333 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0916 11:05:43.297375   36333 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0916 11:05:43.297381   36333 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0916 11:05:43.297386   36333 command_runner.go:130] > # pause_command = "/pause"
	I0916 11:05:43.297392   36333 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0916 11:05:43.297400   36333 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0916 11:05:43.297405   36333 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0916 11:05:43.297413   36333 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0916 11:05:43.297423   36333 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0916 11:05:43.297451   36333 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0916 11:05:43.297461   36333 command_runner.go:130] > # pinned_images = [
	I0916 11:05:43.297465   36333 command_runner.go:130] > # ]
	I0916 11:05:43.297474   36333 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0916 11:05:43.297480   36333 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0916 11:05:43.297488   36333 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0916 11:05:43.297493   36333 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0916 11:05:43.297499   36333 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0916 11:05:43.297504   36333 command_runner.go:130] > # signature_policy = ""
	I0916 11:05:43.297509   36333 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0916 11:05:43.297518   36333 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0916 11:05:43.297524   36333 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0916 11:05:43.297532   36333 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0916 11:05:43.297537   36333 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0916 11:05:43.297544   36333 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0916 11:05:43.297551   36333 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0916 11:05:43.297559   36333 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0916 11:05:43.297563   36333 command_runner.go:130] > # changing them here.
	I0916 11:05:43.297571   36333 command_runner.go:130] > # insecure_registries = [
	I0916 11:05:43.297574   36333 command_runner.go:130] > # ]
	I0916 11:05:43.297580   36333 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0916 11:05:43.297587   36333 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0916 11:05:43.297591   36333 command_runner.go:130] > # image_volumes = "mkdir"
	I0916 11:05:43.297599   36333 command_runner.go:130] > # Temporary directory to use for storing big files
	I0916 11:05:43.297604   36333 command_runner.go:130] > # big_files_temporary_dir = ""
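As a side note, and only as a sketch (nothing below is executed in this run): [crio.image] settings such as pinned_images or insecure_registries are usually overridden through a drop-in file rather than by editing the generated config, and the resulting values can then be checked with crio's own config printer. The local registry name is an assumption.

	# Sketch only -- the local registry name is an assumption.
	sudo tee /etc/crio/crio.conf.d/10-images.conf <<'EOF'
	[crio.image]
	pinned_images = ["registry.k8s.io/pause:3.10"]
	insecure_registries = ["registry.local:5000"]
	EOF
	sudo systemctl restart crio
	sudo crio config 2>/dev/null | grep -E 'pause_image|pinned_images|insecure_registries'   # should print the merged values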
	I0916 11:05:43.297611   36333 command_runner.go:130] > # The crio.network table containers settings pertaining to the management of
	I0916 11:05:43.297615   36333 command_runner.go:130] > # CNI plugins.
	I0916 11:05:43.297621   36333 command_runner.go:130] > [crio.network]
	I0916 11:05:43.297627   36333 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0916 11:05:43.297634   36333 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0916 11:05:43.297638   36333 command_runner.go:130] > # cni_default_network = ""
	I0916 11:05:43.297646   36333 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0916 11:05:43.297650   36333 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0916 11:05:43.297656   36333 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0916 11:05:43.297660   36333 command_runner.go:130] > # plugin_dirs = [
	I0916 11:05:43.297664   36333 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0916 11:05:43.297669   36333 command_runner.go:130] > # ]
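For reference (not something this run logs): with cni_default_network left empty, CRI-O takes the first configuration it finds in network_dir, so the selection can be checked simply by listing the default directories shown above.

	# Quick check only -- uses the default paths from the config above.
	ls -1 /etc/cni/net.d/    # the first config found here is the one CRI-O will pick up
	ls /opt/cni/bin/         # the plugin binaries that config refers to must exist here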
	I0916 11:05:43.297674   36333 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0916 11:05:43.297681   36333 command_runner.go:130] > [crio.metrics]
	I0916 11:05:43.297688   36333 command_runner.go:130] > # Globally enable or disable metrics support.
	I0916 11:05:43.297692   36333 command_runner.go:130] > enable_metrics = true
	I0916 11:05:43.297698   36333 command_runner.go:130] > # Specify enabled metrics collectors.
	I0916 11:05:43.297703   36333 command_runner.go:130] > # Per default all metrics are enabled.
	I0916 11:05:43.297712   36333 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0916 11:05:43.297718   36333 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0916 11:05:43.297725   36333 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0916 11:05:43.297733   36333 command_runner.go:130] > # metrics_collectors = [
	I0916 11:05:43.297742   36333 command_runner.go:130] > # 	"operations",
	I0916 11:05:43.297750   36333 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0916 11:05:43.297759   36333 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0916 11:05:43.297765   36333 command_runner.go:130] > # 	"operations_errors",
	I0916 11:05:43.297774   36333 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0916 11:05:43.297781   36333 command_runner.go:130] > # 	"image_pulls_by_name",
	I0916 11:05:43.297790   36333 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0916 11:05:43.297797   36333 command_runner.go:130] > # 	"image_pulls_failures",
	I0916 11:05:43.297805   36333 command_runner.go:130] > # 	"image_pulls_successes",
	I0916 11:05:43.297813   36333 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0916 11:05:43.297822   36333 command_runner.go:130] > # 	"image_layer_reuse",
	I0916 11:05:43.297829   36333 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0916 11:05:43.297838   36333 command_runner.go:130] > # 	"containers_oom_total",
	I0916 11:05:43.297842   36333 command_runner.go:130] > # 	"containers_oom",
	I0916 11:05:43.297847   36333 command_runner.go:130] > # 	"processes_defunct",
	I0916 11:05:43.297850   36333 command_runner.go:130] > # 	"operations_total",
	I0916 11:05:43.297855   36333 command_runner.go:130] > # 	"operations_latency_seconds",
	I0916 11:05:43.297859   36333 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0916 11:05:43.297869   36333 command_runner.go:130] > # 	"operations_errors_total",
	I0916 11:05:43.297873   36333 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0916 11:05:43.297881   36333 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0916 11:05:43.297885   36333 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0916 11:05:43.297893   36333 command_runner.go:130] > # 	"image_pulls_success_total",
	I0916 11:05:43.297897   36333 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0916 11:05:43.297904   36333 command_runner.go:130] > # 	"containers_oom_count_total",
	I0916 11:05:43.297909   36333 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0916 11:05:43.297913   36333 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0916 11:05:43.297918   36333 command_runner.go:130] > # ]
	I0916 11:05:43.297923   36333 command_runner.go:130] > # The port on which the metrics server will listen.
	I0916 11:05:43.297929   36333 command_runner.go:130] > # metrics_port = 9090
	I0916 11:05:43.297934   36333 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0916 11:05:43.297939   36333 command_runner.go:130] > # metrics_socket = ""
	I0916 11:05:43.297944   36333 command_runner.go:130] > # The certificate for the secure metrics server.
	I0916 11:05:43.297952   36333 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0916 11:05:43.297959   36333 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0916 11:05:43.297966   36333 command_runner.go:130] > # certificate on any modification event.
	I0916 11:05:43.297973   36333 command_runner.go:130] > # metrics_cert = ""
	I0916 11:05:43.297981   36333 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0916 11:05:43.297986   36333 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0916 11:05:43.297990   36333 command_runner.go:130] > # metrics_key = ""
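Since this config sets enable_metrics = true and leaves metrics_port, metrics_cert and metrics_key at their defaults, the Prometheus endpoint should be plain HTTP on port 9090; a manual scrape, shown only as an illustration, could look like:

	# Illustration only -- assumes the default metrics_port of 9090 and no TLS.
	curl -s http://127.0.0.1:9090/metrics | grep -E '^crio_(operations|image_pulls)' | head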
	I0916 11:05:43.297996   36333 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0916 11:05:43.298002   36333 command_runner.go:130] > [crio.tracing]
	I0916 11:05:43.298008   36333 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0916 11:05:43.298014   36333 command_runner.go:130] > # enable_tracing = false
	I0916 11:05:43.298019   36333 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0916 11:05:43.298025   36333 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0916 11:05:43.298033   36333 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0916 11:05:43.298039   36333 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0916 11:05:43.298044   36333 command_runner.go:130] > # CRI-O NRI configuration.
	I0916 11:05:43.298048   36333 command_runner.go:130] > [crio.nri]
	I0916 11:05:43.298053   36333 command_runner.go:130] > # Globally enable or disable NRI.
	I0916 11:05:43.298056   36333 command_runner.go:130] > # enable_nri = false
	I0916 11:05:43.298062   36333 command_runner.go:130] > # NRI socket to listen on.
	I0916 11:05:43.298066   36333 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0916 11:05:43.298071   36333 command_runner.go:130] > # NRI plugin directory to use.
	I0916 11:05:43.298078   36333 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0916 11:05:43.298083   36333 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0916 11:05:43.298087   36333 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0916 11:05:43.298095   36333 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0916 11:05:43.298099   36333 command_runner.go:130] > # nri_disable_connections = false
	I0916 11:05:43.298104   36333 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0916 11:05:43.298111   36333 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0916 11:05:43.298115   36333 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0916 11:05:43.298122   36333 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0916 11:05:43.298128   36333 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0916 11:05:43.298133   36333 command_runner.go:130] > [crio.stats]
	I0916 11:05:43.298139   36333 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0916 11:05:43.298144   36333 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0916 11:05:43.298150   36333 command_runner.go:130] > # stats_collection_period = 0
	I0916 11:05:43.298215   36333 cni.go:84] Creating CNI manager for ""
	I0916 11:05:43.298228   36333 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0916 11:05:43.298236   36333 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 11:05:43.298254   36333 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.32 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-736061 NodeName:multinode-736061 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.32"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.32 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 11:05:43.298407   36333 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.32
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-736061"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.32
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.32"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 11:05:43.298467   36333 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 11:05:43.308375   36333 command_runner.go:130] > kubeadm
	I0916 11:05:43.308392   36333 command_runner.go:130] > kubectl
	I0916 11:05:43.308396   36333 command_runner.go:130] > kubelet
	I0916 11:05:43.308508   36333 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 11:05:43.308570   36333 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 11:05:43.318000   36333 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0916 11:05:43.334695   36333 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 11:05:43.350760   36333 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
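The rendered kubeadm config shown above is staged as /var/tmp/minikube/kubeadm.yaml.new before being copied into place further down. Outside of this test, a config like that can be sanity-checked by hand, for example (not executed by this run):

	# Illustration only -- not run by the test.
	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml   # schema/version check (kubeadm >= 1.26)
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run    # full dry run against a temporary directory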
	I0916 11:05:43.366756   36333 ssh_runner.go:195] Run: grep 192.168.39.32	control-plane.minikube.internal$ /etc/hosts
	I0916 11:05:43.370611   36333 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.32	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 11:05:43.382490   36333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:05:43.510558   36333 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:05:43.528417   36333 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061 for IP: 192.168.39.32
	I0916 11:05:43.528444   36333 certs.go:194] generating shared ca certs ...
	I0916 11:05:43.528466   36333 certs.go:226] acquiring lock for ca certs: {Name:mkc5eb18b90c7d501da61b378999fd65b785fd5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:05:43.528645   36333 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key
	I0916 11:05:43.528700   36333 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key
	I0916 11:05:43.528713   36333 certs.go:256] generating profile certs ...
	I0916 11:05:43.528800   36333 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/client.key
	I0916 11:05:43.528826   36333 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/client.crt with IP's: []
	I0916 11:05:43.729416   36333 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/client.crt ...
	I0916 11:05:43.729446   36333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/client.crt: {Name:mk8f058dbeacc08c17d1e4d4c54c153a31a8caee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:05:43.729636   36333 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/client.key ...
	I0916 11:05:43.729650   36333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/client.key: {Name:mkc3de41a13f2c6c9c924ff3cb124609a6d349f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:05:43.729767   36333 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/apiserver.key.7afb17c7
	I0916 11:05:43.729783   36333 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/apiserver.crt.7afb17c7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.32]
	I0916 11:05:43.861692   36333 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/apiserver.crt.7afb17c7 ...
	I0916 11:05:43.861719   36333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/apiserver.crt.7afb17c7: {Name:mk3e4089705238a6c72c6f29c7550cbd35936edc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:05:43.861904   36333 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/apiserver.key.7afb17c7 ...
	I0916 11:05:43.861919   36333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/apiserver.key.7afb17c7: {Name:mkad0f3937bad034c0343c60b3da1c1794454e30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:05:43.862010   36333 certs.go:381] copying /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/apiserver.crt.7afb17c7 -> /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/apiserver.crt
	I0916 11:05:43.862103   36333 certs.go:385] copying /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/apiserver.key.7afb17c7 -> /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/apiserver.key
	I0916 11:05:43.862162   36333 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/proxy-client.key
	I0916 11:05:43.862183   36333 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/proxy-client.crt with IP's: []
	I0916 11:05:44.050019   36333 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/proxy-client.crt ...
	I0916 11:05:44.050048   36333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/proxy-client.crt: {Name:mk3b6c74bc98a230d388dd16ad4b67cc884de8d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:05:44.050238   36333 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/proxy-client.key ...
	I0916 11:05:44.050254   36333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/proxy-client.key: {Name:mkee3aab4cdf8bbb9a371865ef6e113e6462af42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
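The apiserver profile certificate generated above is signed for the service VIP, localhost and the node IP (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.32). If one wanted to confirm those SANs by hand (illustrative, not part of the run), openssl can print them:

	# Illustration only -- inspects the cert generated above.
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'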
	I0916 11:05:44.050350   36333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 11:05:44.050371   36333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 11:05:44.050382   36333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 11:05:44.050397   36333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 11:05:44.050417   36333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 11:05:44.050430   36333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 11:05:44.050444   36333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 11:05:44.050456   36333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 11:05:44.050511   36333 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem (1338 bytes)
	W0916 11:05:44.050545   36333 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203_empty.pem, impossibly tiny 0 bytes
	I0916 11:05:44.050554   36333 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 11:05:44.050586   36333 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem (1082 bytes)
	I0916 11:05:44.050609   36333 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem (1123 bytes)
	I0916 11:05:44.050633   36333 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem (1679 bytes)
	I0916 11:05:44.050668   36333 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem (1708 bytes)
	I0916 11:05:44.050697   36333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem -> /usr/share/ca-certificates/11203.pem
	I0916 11:05:44.050710   36333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem -> /usr/share/ca-certificates/112032.pem
	I0916 11:05:44.050722   36333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:05:44.051333   36333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 11:05:44.077633   36333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 11:05:44.101785   36333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 11:05:44.128823   36333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 11:05:44.156392   36333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0916 11:05:44.179929   36333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 11:05:44.203535   36333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 11:05:44.227189   36333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 11:05:44.250918   36333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem --> /usr/share/ca-certificates/11203.pem (1338 bytes)
	I0916 11:05:44.277716   36333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem --> /usr/share/ca-certificates/112032.pem (1708 bytes)
	I0916 11:05:44.329554   36333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 11:05:44.358993   36333 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 11:05:44.377345   36333 ssh_runner.go:195] Run: openssl version
	I0916 11:05:44.383416   36333 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0916 11:05:44.383498   36333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11203.pem && ln -fs /usr/share/ca-certificates/11203.pem /etc/ssl/certs/11203.pem"
	I0916 11:05:44.396088   36333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11203.pem
	I0916 11:05:44.400714   36333 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11203.pem
	I0916 11:05:44.400847   36333 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11203.pem
	I0916 11:05:44.400904   36333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11203.pem
	I0916 11:05:44.407039   36333 command_runner.go:130] > 51391683
	I0916 11:05:44.407109   36333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11203.pem /etc/ssl/certs/51391683.0"
	I0916 11:05:44.419996   36333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112032.pem && ln -fs /usr/share/ca-certificates/112032.pem /etc/ssl/certs/112032.pem"
	I0916 11:05:44.432349   36333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112032.pem
	I0916 11:05:44.436946   36333 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112032.pem
	I0916 11:05:44.437139   36333 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112032.pem
	I0916 11:05:44.437184   36333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112032.pem
	I0916 11:05:44.442874   36333 command_runner.go:130] > 3ec20f2e
	I0916 11:05:44.442970   36333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112032.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 11:05:44.453913   36333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 11:05:44.464659   36333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:05:44.468904   36333 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 16 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:05:44.468988   36333 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:05:44.469033   36333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:05:44.474667   36333 command_runner.go:130] > b5213941
	I0916 11:05:44.474740   36333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
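The three openssl/ln passes above follow the standard OpenSSL hashed-directory layout: each CA placed in /etc/ssl/certs also gets a <subject-hash>.0 symlink so that verification can locate it by hash. The same step done by hand for the minikube CA, as a sketch, is:

	# Sketch only -- mirrors what the log does for minikubeCA.pem.
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 in this run
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
	openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem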
	I0916 11:05:44.485950   36333 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 11:05:44.490371   36333 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 11:05:44.490515   36333 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 11:05:44.490566   36333 kubeadm.go:392] StartCluster: {Name:multinode-736061 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-736061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.32 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:05:44.490640   36333 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 11:05:44.490708   36333 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 11:05:44.535122   36333 cri.go:89] found id: ""
	I0916 11:05:44.535203   36333 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 11:05:44.546090   36333 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0916 11:05:44.546125   36333 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0916 11:05:44.546135   36333 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0916 11:05:44.546204   36333 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 11:05:44.556199   36333 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 11:05:44.565563   36333 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0916 11:05:44.565585   36333 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0916 11:05:44.565595   36333 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0916 11:05:44.565604   36333 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 11:05:44.565755   36333 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 11:05:44.565772   36333 kubeadm.go:157] found existing configuration files:
	
	I0916 11:05:44.565812   36333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 11:05:44.574749   36333 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 11:05:44.575055   36333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 11:05:44.575123   36333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 11:05:44.585225   36333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 11:05:44.594397   36333 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 11:05:44.594433   36333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 11:05:44.594478   36333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 11:05:44.603915   36333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 11:05:44.613089   36333 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 11:05:44.613146   36333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 11:05:44.613191   36333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 11:05:44.622763   36333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 11:05:44.631746   36333 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 11:05:44.631781   36333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 11:05:44.631819   36333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 11:05:44.641202   36333 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
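The kubeadm init above ignores the preflight checks for directories and manifests that minikube manages itself, plus Port-10250, Swap, NumCPU and Mem. If only the preflight phase needed to be re-run for debugging, it can be invoked on its own (illustrative, not part of this run):

	# Illustration only -- runs just the preflight phase with a similar ignore list.
	sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" \
	  kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml \
	  --ignore-preflight-errors=Port-10250,Swap,NumCPU,Mem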
	I0916 11:05:44.747391   36333 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 11:05:44.747417   36333 command_runner.go:130] > [init] Using Kubernetes version: v1.31.1
	I0916 11:05:44.747516   36333 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 11:05:44.747541   36333 command_runner.go:130] > [preflight] Running pre-flight checks
	I0916 11:05:44.862710   36333 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 11:05:44.862744   36333 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 11:05:44.862861   36333 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 11:05:44.862876   36333 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 11:05:44.862983   36333 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 11:05:44.863005   36333 command_runner.go:130] > [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 11:05:44.877710   36333 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 11:05:44.877748   36333 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 11:05:44.907323   36333 out.go:235]   - Generating certificates and keys ...
	I0916 11:05:44.907438   36333 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 11:05:44.907468   36333 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0916 11:05:44.907541   36333 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 11:05:44.907552   36333 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0916 11:05:45.035664   36333 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 11:05:45.035693   36333 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 11:05:45.218565   36333 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 11:05:45.218596   36333 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0916 11:05:45.351291   36333 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 11:05:45.351337   36333 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0916 11:05:45.553568   36333 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 11:05:45.553613   36333 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0916 11:05:45.685418   36333 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 11:05:45.685442   36333 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0916 11:05:45.685614   36333 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-736061] and IPs [192.168.39.32 127.0.0.1 ::1]
	I0916 11:05:45.685627   36333 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-736061] and IPs [192.168.39.32 127.0.0.1 ::1]
	I0916 11:05:45.801840   36333 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 11:05:45.801877   36333 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0916 11:05:45.801985   36333 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-736061] and IPs [192.168.39.32 127.0.0.1 ::1]
	I0916 11:05:45.802012   36333 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-736061] and IPs [192.168.39.32 127.0.0.1 ::1]
	I0916 11:05:46.076784   36333 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 11:05:46.076815   36333 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 11:05:46.134172   36333 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 11:05:46.134194   36333 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 11:05:46.325794   36333 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 11:05:46.325818   36333 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0916 11:05:46.325935   36333 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 11:05:46.325946   36333 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 11:05:46.462234   36333 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 11:05:46.462264   36333 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 11:05:46.727042   36333 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 11:05:46.727083   36333 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 11:05:46.906186   36333 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 11:05:46.906213   36333 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 11:05:47.000241   36333 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 11:05:47.000265   36333 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 11:05:47.248611   36333 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 11:05:47.248639   36333 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 11:05:47.249247   36333 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 11:05:47.249258   36333 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 11:05:47.252675   36333 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 11:05:47.252747   36333 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 11:05:47.254412   36333 out.go:235]   - Booting up control plane ...
	I0916 11:05:47.254521   36333 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 11:05:47.254538   36333 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 11:05:47.254643   36333 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 11:05:47.254643   36333 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 11:05:47.255099   36333 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 11:05:47.255121   36333 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 11:05:47.273458   36333 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 11:05:47.273489   36333 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 11:05:47.279831   36333 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 11:05:47.279864   36333 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 11:05:47.279914   36333 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 11:05:47.279927   36333 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0916 11:05:47.422884   36333 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 11:05:47.422909   36333 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 11:05:47.423022   36333 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 11:05:47.423047   36333 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 11:05:47.923943   36333 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.262524ms
	I0916 11:05:47.923988   36333 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 501.262524ms
	I0916 11:05:47.924094   36333 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 11:05:47.924109   36333 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 11:05:52.922100   36333 kubeadm.go:310] [api-check] The API server is healthy after 5.001231198s
	I0916 11:05:52.922128   36333 command_runner.go:130] > [api-check] The API server is healthy after 5.001231198s
	I0916 11:05:52.933714   36333 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 11:05:52.933741   36333 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 11:05:52.953998   36333 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 11:05:52.954031   36333 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 11:05:52.985743   36333 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 11:05:52.985770   36333 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0916 11:05:52.985983   36333 kubeadm.go:310] [mark-control-plane] Marking the node multinode-736061 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 11:05:52.985998   36333 command_runner.go:130] > [mark-control-plane] Marking the node multinode-736061 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 11:05:52.999952   36333 kubeadm.go:310] [bootstrap-token] Using token: tyssfx.qcouw8my23ympzkv
	I0916 11:05:53.000087   36333 command_runner.go:130] > [bootstrap-token] Using token: tyssfx.qcouw8my23ympzkv
	I0916 11:05:53.001595   36333 out.go:235]   - Configuring RBAC rules ...
	I0916 11:05:53.001758   36333 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 11:05:53.001785   36333 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 11:05:53.012385   36333 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 11:05:53.012416   36333 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 11:05:53.022559   36333 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 11:05:53.022587   36333 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 11:05:53.028226   36333 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 11:05:53.028231   36333 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 11:05:53.035375   36333 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 11:05:53.035407   36333 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 11:05:53.040052   36333 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 11:05:53.040069   36333 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
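The bootstrap token created above (tyssfx.qcouw8my23ympzkv) is what later kubeadm join invocations authenticate with. On the control-plane node it can be listed, or a fresh join command generated, for example (not executed here):

	# Illustration only -- run on the control-plane node after init completes.
	sudo kubeadm token list
	sudo kubeadm token create --print-join-command   # prints a new join command including the CA cert hash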
	I0916 11:05:53.328732   36333 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 11:05:53.328767   36333 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 11:05:53.754122   36333 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 11:05:53.754158   36333 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0916 11:05:54.327404   36333 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 11:05:54.327432   36333 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0916 11:05:54.328424   36333 kubeadm.go:310] 
	I0916 11:05:54.328532   36333 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 11:05:54.328552   36333 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0916 11:05:54.328557   36333 kubeadm.go:310] 
	I0916 11:05:54.328657   36333 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 11:05:54.328664   36333 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0916 11:05:54.328681   36333 kubeadm.go:310] 
	I0916 11:05:54.328719   36333 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 11:05:54.328729   36333 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0916 11:05:54.328780   36333 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 11:05:54.328787   36333 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 11:05:54.328835   36333 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 11:05:54.328860   36333 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 11:05:54.328866   36333 kubeadm.go:310] 
	I0916 11:05:54.328929   36333 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 11:05:54.328937   36333 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0916 11:05:54.328941   36333 kubeadm.go:310] 
	I0916 11:05:54.329001   36333 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 11:05:54.329009   36333 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 11:05:54.329012   36333 kubeadm.go:310] 
	I0916 11:05:54.329055   36333 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 11:05:54.329061   36333 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0916 11:05:54.329136   36333 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 11:05:54.329154   36333 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 11:05:54.329260   36333 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 11:05:54.329274   36333 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 11:05:54.329277   36333 kubeadm.go:310] 
	I0916 11:05:54.329353   36333 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 11:05:54.329361   36333 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0916 11:05:54.329449   36333 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 11:05:54.329457   36333 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0916 11:05:54.329462   36333 kubeadm.go:310] 
	I0916 11:05:54.329568   36333 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token tyssfx.qcouw8my23ympzkv \
	I0916 11:05:54.329580   36333 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token tyssfx.qcouw8my23ympzkv \
	I0916 11:05:54.329726   36333 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:18e2e9ba1c06caae6264fad234e64aee68efc10986a3ab0ed9e448768962daf7 \
	I0916 11:05:54.329736   36333 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:18e2e9ba1c06caae6264fad234e64aee68efc10986a3ab0ed9e448768962daf7 \
	I0916 11:05:54.329758   36333 kubeadm.go:310] 	--control-plane 
	I0916 11:05:54.329764   36333 command_runner.go:130] > 	--control-plane 
	I0916 11:05:54.329767   36333 kubeadm.go:310] 
	I0916 11:05:54.329843   36333 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 11:05:54.329850   36333 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0916 11:05:54.329853   36333 kubeadm.go:310] 
	I0916 11:05:54.329968   36333 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token tyssfx.qcouw8my23ympzkv \
	I0916 11:05:54.329971   36333 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token tyssfx.qcouw8my23ympzkv \
	I0916 11:05:54.330130   36333 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:18e2e9ba1c06caae6264fad234e64aee68efc10986a3ab0ed9e448768962daf7 
	I0916 11:05:54.330143   36333 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:18e2e9ba1c06caae6264fad234e64aee68efc10986a3ab0ed9e448768962daf7 
	I0916 11:05:54.330941   36333 kubeadm.go:310] W0916 11:05:44.723101     832 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 11:05:54.330957   36333 command_runner.go:130] ! W0916 11:05:44.723101     832 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 11:05:54.331226   36333 kubeadm.go:310] W0916 11:05:44.725335     832 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 11:05:54.331228   36333 command_runner.go:130] ! W0916 11:05:44.725335     832 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 11:05:54.331382   36333 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 11:05:54.331396   36333 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
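For reference, the join command kubeadm printed above can be replayed as-is on another machine; a minimal sketch using the token and CA-cert hash from this log (bootstrap tokens expire after 24h by default, so a stale token would need to be recreated first):

    # run as root on the machine that should join; add --control-plane to make it another control-plane node
    kubeadm join control-plane.minikube.internal:8443 \
      --token tyssfx.qcouw8my23ympzkv \
      --discovery-token-ca-cert-hash sha256:18e2e9ba1c06caae6264fad234e64aee68efc10986a3ab0ed9e448768962daf7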
	I0916 11:05:54.331417   36333 cni.go:84] Creating CNI manager for ""
	I0916 11:05:54.331427   36333 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0916 11:05:54.333262   36333 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0916 11:05:54.334526   36333 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 11:05:54.340284   36333 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0916 11:05:54.340308   36333 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0916 11:05:54.340317   36333 command_runner.go:130] > Device: 0,17	Inode: 3500        Links: 1
	I0916 11:05:54.340327   36333 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 11:05:54.340337   36333 command_runner.go:130] > Access: 2024-09-16 11:05:26.103603942 +0000
	I0916 11:05:54.340345   36333 command_runner.go:130] > Modify: 2024-09-15 21:28:20.000000000 +0000
	I0916 11:05:54.340355   36333 command_runner.go:130] > Change: 2024-09-16 11:05:25.044603942 +0000
	I0916 11:05:54.340361   36333 command_runner.go:130] >  Birth: -
	I0916 11:05:54.340599   36333 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0916 11:05:54.340617   36333 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 11:05:54.360934   36333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0916 11:05:54.710752   36333 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0916 11:05:54.716515   36333 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0916 11:05:54.726048   36333 command_runner.go:130] > serviceaccount/kindnet created
	I0916 11:05:54.751499   36333 command_runner.go:130] > daemonset.apps/kindnet created
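With the clusterrole, clusterrolebinding, serviceaccount and daemonset created, the kindnet CNI is in place; an illustrative check (not part of the test run) that the DaemonSet actually rolled out would be:

    kubectl -n kube-system get daemonset kindnet
    kubectl -n kube-system rollout status daemonset/kindnet --timeout=120s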
	I0916 11:05:54.753798   36333 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 11:05:54.753870   36333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:05:54.753926   36333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-736061 minikube.k8s.io/updated_at=2024_09_16T11_05_54_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=multinode-736061 minikube.k8s.io/primary=true
	I0916 11:05:54.939817   36333 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0916 11:05:54.941588   36333 command_runner.go:130] > -16
	I0916 11:05:54.941628   36333 ops.go:34] apiserver oom_adj: -16
	I0916 11:05:54.941660   36333 command_runner.go:130] > node/multinode-736061 labeled
	I0916 11:05:54.941717   36333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:05:55.027489   36333 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0916 11:05:55.442711   36333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:05:55.524147   36333 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0916 11:05:55.942606   36333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:05:56.021989   36333 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0916 11:05:56.442339   36333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:05:56.534060   36333 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0916 11:05:56.942115   36333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:05:57.041821   36333 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0916 11:05:57.442012   36333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:05:57.523969   36333 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0916 11:05:57.942694   36333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:05:58.040196   36333 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0916 11:05:58.442038   36333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:05:58.522384   36333 command_runner.go:130] > NAME      SECRETS   AGE
	I0916 11:05:58.522412   36333 command_runner.go:130] > default   0         0s
	I0916 11:05:58.522441   36333 kubeadm.go:1113] duration metric: took 3.768643152s to wait for elevateKubeSystemPrivileges
	I0916 11:05:58.522464   36333 kubeadm.go:394] duration metric: took 14.031900459s to StartCluster
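The repeated 'serviceaccounts "default" not found' errors above are expected: after creating the minikube-rbac binding that grants cluster-admin to kube-system:default, minikube simply re-runs "kubectl get sa default" until the service-account controller has created the default ServiceAccount. Equivalent manual checks (object names taken from the log) would be:

    kubectl get serviceaccount default
    kubectl get clusterrolebinding minikube-rbac -o wide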
	I0916 11:05:58.522485   36333 settings.go:142] acquiring lock: {Name:mka3093e7066e81538e0a2685a28fdafde8d951b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:05:58.522567   36333 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3851/kubeconfig
	I0916 11:05:58.523262   36333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/kubeconfig: {Name:mkd6460249badead51f07eabf604da45528f6a24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:05:58.523525   36333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 11:05:58.523520   36333 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.32 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 11:05:58.523543   36333 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 11:05:58.523619   36333 addons.go:69] Setting storage-provisioner=true in profile "multinode-736061"
	I0916 11:05:58.523647   36333 addons.go:234] Setting addon storage-provisioner=true in "multinode-736061"
	I0916 11:05:58.523673   36333 host.go:66] Checking if "multinode-736061" exists ...
	I0916 11:05:58.523693   36333 addons.go:69] Setting default-storageclass=true in profile "multinode-736061"
	I0916 11:05:58.523717   36333 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-736061"
	I0916 11:05:58.523734   36333 config.go:182] Loaded profile config "multinode-736061": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:05:58.524218   36333 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 11:05:58.524240   36333 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 11:05:58.524262   36333 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 11:05:58.524281   36333 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 11:05:58.525936   36333 out.go:177] * Verifying Kubernetes components...
	I0916 11:05:58.527179   36333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:05:58.539793   36333 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35271
	I0916 11:05:58.540028   36333 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42941
	I0916 11:05:58.540272   36333 main.go:141] libmachine: () Calling .GetVersion
	I0916 11:05:58.540458   36333 main.go:141] libmachine: () Calling .GetVersion
	I0916 11:05:58.540804   36333 main.go:141] libmachine: Using API Version  1
	I0916 11:05:58.540823   36333 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 11:05:58.540958   36333 main.go:141] libmachine: Using API Version  1
	I0916 11:05:58.540987   36333 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 11:05:58.541195   36333 main.go:141] libmachine: () Calling .GetMachineName
	I0916 11:05:58.541325   36333 main.go:141] libmachine: () Calling .GetMachineName
	I0916 11:05:58.541377   36333 main.go:141] libmachine: (multinode-736061) Calling .GetState
	I0916 11:05:58.541901   36333 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 11:05:58.541951   36333 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 11:05:58.543606   36333 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3851/kubeconfig
	I0916 11:05:58.544015   36333 kapi.go:59] client config for multinode-736061: &rest.Config{Host:"https://192.168.39.32:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 11:05:58.544628   36333 cert_rotation.go:140] Starting client certificate rotation controller
	I0916 11:05:58.544973   36333 addons.go:234] Setting addon default-storageclass=true in "multinode-736061"
	I0916 11:05:58.545031   36333 host.go:66] Checking if "multinode-736061" exists ...
	I0916 11:05:58.545490   36333 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 11:05:58.545542   36333 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 11:05:58.557074   36333 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35939
	I0916 11:05:58.557614   36333 main.go:141] libmachine: () Calling .GetVersion
	I0916 11:05:58.558120   36333 main.go:141] libmachine: Using API Version  1
	I0916 11:05:58.558149   36333 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 11:05:58.558453   36333 main.go:141] libmachine: () Calling .GetMachineName
	I0916 11:05:58.558657   36333 main.go:141] libmachine: (multinode-736061) Calling .GetState
	I0916 11:05:58.560363   36333 main.go:141] libmachine: (multinode-736061) Calling .DriverName
	I0916 11:05:58.560786   36333 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36001
	I0916 11:05:58.561244   36333 main.go:141] libmachine: () Calling .GetVersion
	I0916 11:05:58.561732   36333 main.go:141] libmachine: Using API Version  1
	I0916 11:05:58.561752   36333 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 11:05:58.562052   36333 main.go:141] libmachine: () Calling .GetMachineName
	I0916 11:05:58.562299   36333 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:05:58.562544   36333 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 11:05:58.562584   36333 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 11:05:58.563649   36333 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:05:58.563664   36333 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 11:05:58.563678   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHHostname
	I0916 11:05:58.566428   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:58.566878   36333 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:05:58.566899   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:58.567094   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHPort
	I0916 11:05:58.567244   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:05:58.567416   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHUsername
	I0916 11:05:58.567558   36333 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061/id_rsa Username:docker}
	I0916 11:05:58.578318   36333 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38831
	I0916 11:05:58.578831   36333 main.go:141] libmachine: () Calling .GetVersion
	I0916 11:05:58.579328   36333 main.go:141] libmachine: Using API Version  1
	I0916 11:05:58.579355   36333 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 11:05:58.579724   36333 main.go:141] libmachine: () Calling .GetMachineName
	I0916 11:05:58.579916   36333 main.go:141] libmachine: (multinode-736061) Calling .GetState
	I0916 11:05:58.581511   36333 main.go:141] libmachine: (multinode-736061) Calling .DriverName
	I0916 11:05:58.581688   36333 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 11:05:58.581705   36333 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 11:05:58.581721   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHHostname
	I0916 11:05:58.584387   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:58.584795   36333 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:05:58.584822   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:05:58.585076   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHPort
	I0916 11:05:58.585261   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:05:58.585414   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHUsername
	I0916 11:05:58.585539   36333 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061/id_rsa Username:docker}
	I0916 11:05:58.800148   36333 command_runner.go:130] > apiVersion: v1
	I0916 11:05:58.800166   36333 command_runner.go:130] > data:
	I0916 11:05:58.800171   36333 command_runner.go:130] >   Corefile: |
	I0916 11:05:58.800175   36333 command_runner.go:130] >     .:53 {
	I0916 11:05:58.800179   36333 command_runner.go:130] >         errors
	I0916 11:05:58.800189   36333 command_runner.go:130] >         health {
	I0916 11:05:58.800193   36333 command_runner.go:130] >            lameduck 5s
	I0916 11:05:58.800197   36333 command_runner.go:130] >         }
	I0916 11:05:58.800200   36333 command_runner.go:130] >         ready
	I0916 11:05:58.800206   36333 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0916 11:05:58.800210   36333 command_runner.go:130] >            pods insecure
	I0916 11:05:58.800219   36333 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0916 11:05:58.800225   36333 command_runner.go:130] >            ttl 30
	I0916 11:05:58.800229   36333 command_runner.go:130] >         }
	I0916 11:05:58.800235   36333 command_runner.go:130] >         prometheus :9153
	I0916 11:05:58.800241   36333 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0916 11:05:58.800248   36333 command_runner.go:130] >            max_concurrent 1000
	I0916 11:05:58.800252   36333 command_runner.go:130] >         }
	I0916 11:05:58.800256   36333 command_runner.go:130] >         cache 30
	I0916 11:05:58.800261   36333 command_runner.go:130] >         loop
	I0916 11:05:58.800265   36333 command_runner.go:130] >         reload
	I0916 11:05:58.800277   36333 command_runner.go:130] >         loadbalance
	I0916 11:05:58.800283   36333 command_runner.go:130] >     }
	I0916 11:05:58.800287   36333 command_runner.go:130] > kind: ConfigMap
	I0916 11:05:58.800293   36333 command_runner.go:130] > metadata:
	I0916 11:05:58.800299   36333 command_runner.go:130] >   creationTimestamp: "2024-09-16T11:05:53Z"
	I0916 11:05:58.800305   36333 command_runner.go:130] >   name: coredns
	I0916 11:05:58.800309   36333 command_runner.go:130] >   namespace: kube-system
	I0916 11:05:58.800315   36333 command_runner.go:130] >   resourceVersion: "263"
	I0916 11:05:58.800320   36333 command_runner.go:130] >   uid: 4270379f-2cdb-424c-8d1c-8cef3fbc1be2
	I0916 11:05:58.801884   36333 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:05:58.802043   36333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 11:05:58.815336   36333 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:05:58.891364   36333 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 11:05:59.485098   36333 command_runner.go:130] > configmap/coredns replaced
	I0916 11:05:59.485159   36333 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
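The sed pipeline above rewrites the coredns ConfigMap dumped a few lines earlier: it adds a log directive after errors and inserts a hosts block ahead of the forward stanza, so the resulting Corefile contains roughly:

    hosts {
       192.168.39.1 host.minikube.internal
       fallthrough
    }

This is what lets cluster DNS resolve host.minikube.internal to 192.168.39.1, the host-side address of the VM network.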
	I0916 11:05:59.485436   36333 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3851/kubeconfig
	I0916 11:05:59.485601   36333 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3851/kubeconfig
	I0916 11:05:59.485674   36333 kapi.go:59] client config for multinode-736061: &rest.Config{Host:"https://192.168.39.32:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 11:05:59.486489   36333 node_ready.go:35] waiting up to 6m0s for node "multinode-736061" to be "Ready" ...
	I0916 11:05:59.486632   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:05:59.486649   36333 round_trippers.go:469] Request Headers:
	I0916 11:05:59.486661   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:05:59.486666   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:05:59.486287   36333 kapi.go:59] client config for multinode-736061: &rest.Config{Host:"https://192.168.39.32:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 11:05:59.487369   36333 round_trippers.go:463] GET https://192.168.39.32:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0916 11:05:59.487380   36333 round_trippers.go:469] Request Headers:
	I0916 11:05:59.487389   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:05:59.487394   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:05:59.497532   36333 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0916 11:05:59.497552   36333 round_trippers.go:577] Response Headers:
	I0916 11:05:59.497559   36333 round_trippers.go:580]     Audit-Id: 57da509c-6519-4ee3-847d-028f592687fb
	I0916 11:05:59.497564   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:05:59.497567   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:05:59.497572   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:05:59.497576   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:05:59.497581   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:05:59 GMT
	I0916 11:05:59.497591   36333 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0916 11:05:59.497612   36333 round_trippers.go:577] Response Headers:
	I0916 11:05:59.497622   36333 round_trippers.go:580]     Audit-Id: 8e7d26e1-602e-4054-9b7f-2d6446de0b3f
	I0916 11:05:59.497631   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:05:59.497637   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:05:59.497641   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:05:59.497645   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:05:59.497649   36333 round_trippers.go:580]     Content-Length: 291
	I0916 11:05:59.497653   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:05:59 GMT
	I0916 11:05:59.497678   36333 request.go:1351] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"e448e131-79e9-4a70-9834-6f03d90ad906","resourceVersion":"372","creationTimestamp":"2024-09-16T11:05:53Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0916 11:05:59.497678   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"344","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0916 11:05:59.498175   36333 request.go:1351] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"e448e131-79e9-4a70-9834-6f03d90ad906","resourceVersion":"372","creationTimestamp":"2024-09-16T11:05:53Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0916 11:05:59.498237   36333 round_trippers.go:463] PUT https://192.168.39.32:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0916 11:05:59.498250   36333 round_trippers.go:469] Request Headers:
	I0916 11:05:59.498260   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:05:59.498267   36333 round_trippers.go:473]     Content-Type: application/json
	I0916 11:05:59.498274   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:05:59.514260   36333 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0916 11:05:59.514283   36333 round_trippers.go:577] Response Headers:
	I0916 11:05:59.514291   36333 round_trippers.go:580]     Content-Length: 291
	I0916 11:05:59.514296   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:05:59 GMT
	I0916 11:05:59.514301   36333 round_trippers.go:580]     Audit-Id: f0c4a321-e721-4c80-b252-c799fd24f8a6
	I0916 11:05:59.514305   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:05:59.514312   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:05:59.514316   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:05:59.514324   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:05:59.514348   36333 request.go:1351] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"e448e131-79e9-4a70-9834-6f03d90ad906","resourceVersion":"375","creationTimestamp":"2024-09-16T11:05:53Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0916 11:05:59.719171   36333 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0916 11:05:59.719211   36333 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0916 11:05:59.719223   36333 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0916 11:05:59.719234   36333 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0916 11:05:59.719242   36333 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0916 11:05:59.719250   36333 command_runner.go:130] > pod/storage-provisioner created
	I0916 11:05:59.719332   36333 main.go:141] libmachine: Making call to close driver server
	I0916 11:05:59.719335   36333 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0916 11:05:59.719350   36333 main.go:141] libmachine: (multinode-736061) Calling .Close
	I0916 11:05:59.719408   36333 main.go:141] libmachine: Making call to close driver server
	I0916 11:05:59.719424   36333 main.go:141] libmachine: (multinode-736061) Calling .Close
	I0916 11:05:59.719662   36333 main.go:141] libmachine: Successfully made call to close driver server
	I0916 11:05:59.719680   36333 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 11:05:59.719690   36333 main.go:141] libmachine: Making call to close driver server
	I0916 11:05:59.719696   36333 main.go:141] libmachine: (multinode-736061) Calling .Close
	I0916 11:05:59.719803   36333 main.go:141] libmachine: (multinode-736061) DBG | Closing plugin on server side
	I0916 11:05:59.719931   36333 main.go:141] libmachine: Successfully made call to close driver server
	I0916 11:05:59.719941   36333 main.go:141] libmachine: (multinode-736061) DBG | Closing plugin on server side
	I0916 11:05:59.719931   36333 main.go:141] libmachine: Successfully made call to close driver server
	I0916 11:05:59.719949   36333 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 11:05:59.719959   36333 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 11:05:59.719968   36333 main.go:141] libmachine: Making call to close driver server
	I0916 11:05:59.719980   36333 main.go:141] libmachine: (multinode-736061) Calling .Close
	I0916 11:05:59.720022   36333 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0916 11:05:59.720039   36333 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0916 11:05:59.720121   36333 round_trippers.go:463] GET https://192.168.39.32:8443/apis/storage.k8s.io/v1/storageclasses
	I0916 11:05:59.720133   36333 round_trippers.go:469] Request Headers:
	I0916 11:05:59.720142   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:05:59.720147   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:05:59.720272   36333 main.go:141] libmachine: (multinode-736061) DBG | Closing plugin on server side
	I0916 11:05:59.720303   36333 main.go:141] libmachine: Successfully made call to close driver server
	I0916 11:05:59.720322   36333 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 11:05:59.747501   36333 round_trippers.go:574] Response Status: 200 OK in 27 milliseconds
	I0916 11:05:59.747530   36333 round_trippers.go:577] Response Headers:
	I0916 11:05:59.747541   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:05:59.747550   36333 round_trippers.go:580]     Content-Length: 1273
	I0916 11:05:59.747556   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:05:59 GMT
	I0916 11:05:59.747562   36333 round_trippers.go:580]     Audit-Id: d7c07b43-28c5-4953-a526-e208840d0bf1
	I0916 11:05:59.747570   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:05:59.747575   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:05:59.747580   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:05:59.747646   36333 request.go:1351] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"397"},"items":[{"metadata":{"name":"standard","uid":"2e216119-f9bf-406a-8caf-ccd62e391ad9","resourceVersion":"373","creationTimestamp":"2024-09-16T11:05:59Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-09-16T11:05:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0916 11:05:59.748172   36333 request.go:1351] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"2e216119-f9bf-406a-8caf-ccd62e391ad9","resourceVersion":"373","creationTimestamp":"2024-09-16T11:05:59Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-09-16T11:05:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0916 11:05:59.748237   36333 round_trippers.go:463] PUT https://192.168.39.32:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0916 11:05:59.748251   36333 round_trippers.go:469] Request Headers:
	I0916 11:05:59.748263   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:05:59.748274   36333 round_trippers.go:473]     Content-Type: application/json
	I0916 11:05:59.748277   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:05:59.755909   36333 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0916 11:05:59.755926   36333 round_trippers.go:577] Response Headers:
	I0916 11:05:59.755933   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:05:59 GMT
	I0916 11:05:59.755940   36333 round_trippers.go:580]     Audit-Id: 6fca5072-c17b-4828-b4b1-61318ae38bdd
	I0916 11:05:59.755944   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:05:59.755947   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:05:59.755950   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:05:59.755952   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:05:59.755955   36333 round_trippers.go:580]     Content-Length: 1220
	I0916 11:05:59.756344   36333 request.go:1351] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"2e216119-f9bf-406a-8caf-ccd62e391ad9","resourceVersion":"373","creationTimestamp":"2024-09-16T11:05:59Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-09-16T11:05:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0916 11:05:59.756500   36333 main.go:141] libmachine: Making call to close driver server
	I0916 11:05:59.756513   36333 main.go:141] libmachine: (multinode-736061) Calling .Close
	I0916 11:05:59.756804   36333 main.go:141] libmachine: Successfully made call to close driver server
	I0916 11:05:59.756824   36333 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 11:05:59.759532   36333 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0916 11:05:59.760887   36333 addons.go:510] duration metric: took 1.237340935s for enable addons: enabled=[storage-provisioner default-storageclass]
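Both addon manifests were applied with the node's bundled kubectl rather than through the API client; outside the harness the resulting state could be inspected with commands like these (illustrative checks, profile and object names taken from this run):

    minikube -p multinode-736061 addons list
    kubectl -n kube-system get pod storage-provisioner
    kubectl get storageclass standard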
	I0916 11:05:59.987419   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:05:59.987440   36333 round_trippers.go:469] Request Headers:
	I0916 11:05:59.987448   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:05:59.987451   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:05:59.987451   36333 round_trippers.go:463] GET https://192.168.39.32:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0916 11:05:59.987466   36333 round_trippers.go:469] Request Headers:
	I0916 11:05:59.987473   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:05:59.987478   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:05:59.991451   36333 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 11:05:59.991468   36333 round_trippers.go:577] Response Headers:
	I0916 11:05:59.991474   36333 round_trippers.go:580]     Audit-Id: 72acc2e2-b4e5-4697-bf5d-615bfb8f6957
	I0916 11:05:59.991478   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:05:59.991482   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:05:59.991484   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:05:59.991487   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:05:59.991491   36333 round_trippers.go:580]     Content-Length: 291
	I0916 11:05:59.991494   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:05:59 GMT
	I0916 11:05:59.991526   36333 request.go:1351] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"e448e131-79e9-4a70-9834-6f03d90ad906","resourceVersion":"387","creationTimestamp":"2024-09-16T11:05:53Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0916 11:05:59.991621   36333 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 11:05:59.991635   36333 round_trippers.go:577] Response Headers:
	I0916 11:05:59.991632   36333 kapi.go:214] "coredns" deployment in "kube-system" namespace and "multinode-736061" context rescaled to 1 replicas
	I0916 11:05:59.991641   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:05:59.991649   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:05:59 GMT
	I0916 11:05:59.991655   36333 round_trippers.go:580]     Audit-Id: 8446c329-c81e-482c-bae0-8d3c38d2017c
	I0916 11:05:59.991661   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:05:59.991665   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:05:59.991671   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:05:59.992432   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"344","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0916 11:06:00.487109   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:06:00.487132   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:00.487140   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:00.487144   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:00.489258   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:00.489277   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:00.489284   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:00.489288   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:00.489292   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:00.489295   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:00 GMT
	I0916 11:06:00.489299   36333 round_trippers.go:580]     Audit-Id: dfddf752-0d04-4305-ad98-f8a57cb9a8d8
	I0916 11:06:00.489301   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:00.489488   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"344","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0916 11:06:00.987071   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:06:00.987103   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:00.987115   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:00.987122   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:00.989579   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:00.989606   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:00.989615   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:00.989621   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:00.989624   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:00 GMT
	I0916 11:06:00.989628   36333 round_trippers.go:580]     Audit-Id: ef493024-3937-4c6a-bdca-60ec81f985da
	I0916 11:06:00.989632   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:00.989635   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:00.989904   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"344","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0916 11:06:01.487632   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:06:01.487659   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:01.487667   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:01.487672   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:01.490092   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:01.490114   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:01.490124   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:01.490130   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:01.490133   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:01 GMT
	I0916 11:06:01.490138   36333 round_trippers.go:580]     Audit-Id: 5505198a-c028-4145-b51a-cd97c7cec6c4
	I0916 11:06:01.490141   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:01.490146   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:01.490628   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"344","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0916 11:06:01.490960   36333 node_ready.go:53] node "multinode-736061" has status "Ready":"False"
	I0916 11:06:01.987316   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:06:01.987338   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:01.987345   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:01.987351   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:01.989439   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:01.989459   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:01.989466   36333 round_trippers.go:580]     Audit-Id: 817ae4c2-fcb8-4774-9a9c-a78e4be55e5f
	I0916 11:06:01.989470   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:01.989473   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:01.989476   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:01.989478   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:01.989481   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:01 GMT
	I0916 11:06:01.989765   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"344","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0916 11:06:02.487551   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:06:02.487579   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:02.487589   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:02.487594   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:02.489907   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:02.489927   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:02.489933   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:02.489938   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:02 GMT
	I0916 11:06:02.489942   36333 round_trippers.go:580]     Audit-Id: 9285bd1c-7f94-479f-a712-64acc704f792
	I0916 11:06:02.489945   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:02.489947   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:02.489950   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:02.490113   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"344","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0916 11:06:02.986767   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:06:02.986794   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:02.986803   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:02.986810   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:02.989573   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:02.989595   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:02.989604   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:02 GMT
	I0916 11:06:02.989609   36333 round_trippers.go:580]     Audit-Id: 71e7d7d7-c757-4da4-8f35-28d9a1af9890
	I0916 11:06:02.989616   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:02.989619   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:02.989624   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:02.989633   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:02.989888   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"344","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0916 11:06:03.487642   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:06:03.487673   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:03.487682   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:03.487687   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:03.490180   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:03.490204   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:03.490212   36333 round_trippers.go:580]     Audit-Id: 5a86f0df-e279-4a9c-9f33-781c240a2bac
	I0916 11:06:03.490218   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:03.490224   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:03.490230   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:03.490233   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:03.490239   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:03 GMT
	I0916 11:06:03.490474   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"344","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0916 11:06:03.986796   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:06:03.986825   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:03.986835   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:03.986840   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:03.989029   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:03.989053   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:03.989063   36333 round_trippers.go:580]     Audit-Id: f79cc23e-ebd6-44a1-b6b4-cf5372ec80d3
	I0916 11:06:03.989068   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:03.989071   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:03.989076   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:03.989081   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:03.989088   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:03 GMT
	I0916 11:06:03.989489   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"344","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0916 11:06:03.989811   36333 node_ready.go:53] node "multinode-736061" has status "Ready":"False"
	I0916 11:06:04.486947   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:06:04.486971   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:04.486978   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:04.486982   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:04.489465   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:04.489489   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:04.489497   36333 round_trippers.go:580]     Audit-Id: 8519e7fd-5549-46cb-94a0-32291abba761
	I0916 11:06:04.489505   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:04.489510   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:04.489514   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:04.489518   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:04.489522   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:04 GMT
	I0916 11:06:04.489656   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"344","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0916 11:06:04.987364   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:06:04.987388   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:04.987396   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:04.987406   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:04.991734   36333 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 11:06:04.991753   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:04.991762   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:04.991766   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:04 GMT
	I0916 11:06:04.991771   36333 round_trippers.go:580]     Audit-Id: db5c630f-b6f0-4fba-940f-7996c5ab68cb
	I0916 11:06:04.991774   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:04.991780   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:04.991786   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:04.992082   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"344","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0916 11:06:05.486761   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:06:05.486793   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:05.486805   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:05.486811   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:05.489240   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:05.489265   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:05.489274   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:05.489278   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:05.489282   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:05.489290   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:05.489294   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:05 GMT
	I0916 11:06:05.489302   36333 round_trippers.go:580]     Audit-Id: be5a71c6-457d-461d-9016-5e17f8f04417
	I0916 11:06:05.489636   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"344","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0916 11:06:05.987365   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:06:05.987396   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:05.987406   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:05.987411   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:05.991584   36333 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 11:06:05.991626   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:05.991636   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:05.991642   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:05.991646   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:05.991651   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:05.991661   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:05 GMT
	I0916 11:06:05.991670   36333 round_trippers.go:580]     Audit-Id: 2787614e-7bd3-4207-a64c-26fcb2f30e01
	I0916 11:06:05.992014   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"344","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0916 11:06:05.992326   36333 node_ready.go:53] node "multinode-736061" has status "Ready":"False"
	I0916 11:06:06.486673   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:06:06.486696   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:06.486704   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:06.486708   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:06.489003   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:06.489022   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:06.489028   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:06 GMT
	I0916 11:06:06.489034   36333 round_trippers.go:580]     Audit-Id: 1bab1392-2e50-4b08-9fcd-126827677cf1
	I0916 11:06:06.489038   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:06.489041   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:06.489044   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:06.489048   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:06.489211   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"344","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0916 11:06:06.986897   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:06:06.986925   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:06.986933   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:06.986938   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:06.989348   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:06.989366   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:06.989373   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:06.989377   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:06 GMT
	I0916 11:06:06.989381   36333 round_trippers.go:580]     Audit-Id: ddc217d2-c0a0-4a60-9d85-6682e01f5be1
	I0916 11:06:06.989383   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:06.989386   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:06.989388   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:06.989805   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"344","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0916 11:06:07.487559   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:06:07.487588   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:07.487599   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:07.487614   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:07.489978   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:07.490000   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:07.490005   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:07.490010   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:07.490013   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:07 GMT
	I0916 11:06:07.490015   36333 round_trippers.go:580]     Audit-Id: 2ff4adf4-0ccb-4cbf-a188-e20d8dcecc95
	I0916 11:06:07.490018   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:07.490021   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:07.490210   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"344","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0916 11:06:07.986811   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:06:07.986837   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:07.986845   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:07.986850   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:07.989482   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:07.989506   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:07.989516   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:07.989522   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:07.989532   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:07.989542   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:07.989547   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:07 GMT
	I0916 11:06:07.989553   36333 round_trippers.go:580]     Audit-Id: be2bd1c2-688f-40fb-9e29-7d4baf1d4654
	I0916 11:06:07.990113   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"344","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0916 11:06:08.486779   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:06:08.486806   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:08.486815   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:08.486819   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:08.489051   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:08.489074   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:08.489083   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:08.489089   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:08.489094   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:08.489102   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:08 GMT
	I0916 11:06:08.489106   36333 round_trippers.go:580]     Audit-Id: 8519d36f-a62e-45e3-b8ae-d90629f2435e
	I0916 11:06:08.489112   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:08.489268   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"344","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0916 11:06:08.489650   36333 node_ready.go:53] node "multinode-736061" has status "Ready":"False"
	I0916 11:06:08.987730   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:06:08.987762   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:08.987771   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:08.987777   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:08.989978   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:08.989996   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:08.990002   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:08.990006   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:08.990009   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:08.990012   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:08.990015   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:08 GMT
	I0916 11:06:08.990018   36333 round_trippers.go:580]     Audit-Id: 37ec693c-03a0-4b67-82e2-a82071c8839b
	I0916 11:06:08.990204   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"344","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0916 11:06:09.487593   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:06:09.487619   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:09.487642   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:09.487648   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:09.490067   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:09.490092   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:09.490100   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:09 GMT
	I0916 11:06:09.490104   36333 round_trippers.go:580]     Audit-Id: 4c864097-2ec0-4fbc-9956-643a33be7206
	I0916 11:06:09.490108   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:09.490111   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:09.490114   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:09.490118   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:09.490462   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"344","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0916 11:06:09.987129   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:06:09.987160   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:09.987171   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:09.987178   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:09.989467   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:09.989489   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:09.989498   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:09.989503   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:09.989507   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:09 GMT
	I0916 11:06:09.989514   36333 round_trippers.go:580]     Audit-Id: f65689f7-43c5-4f3f-b7a6-a00b0ad3eb56
	I0916 11:06:09.989521   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:09.989525   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:09.989689   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"344","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0916 11:06:10.487422   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:06:10.487449   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:10.487457   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:10.487461   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:10.489902   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:10.489920   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:10.489928   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:10.489936   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:10 GMT
	I0916 11:06:10.489940   36333 round_trippers.go:580]     Audit-Id: e8ae3f12-9374-47bc-af7a-bb0bac3b25f9
	I0916 11:06:10.489944   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:10.489949   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:10.489953   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:10.490170   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"344","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0916 11:06:10.490454   36333 node_ready.go:53] node "multinode-736061" has status "Ready":"False"
	I0916 11:06:10.986819   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:06:10.986852   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:10.986861   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:10.986865   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:10.989440   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:10.989457   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:10.989464   36333 round_trippers.go:580]     Audit-Id: 45e2366e-a502-415d-b492-6bb591954121
	I0916 11:06:10.989468   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:10.989472   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:10.989476   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:10.989480   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:10.989488   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:10 GMT
	I0916 11:06:10.990172   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"344","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0916 11:06:11.486907   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:06:11.486941   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:11.486955   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:11.486964   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:11.490017   36333 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 11:06:11.490035   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:11.490041   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:11.490046   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:11.490049   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:11 GMT
	I0916 11:06:11.490051   36333 round_trippers.go:580]     Audit-Id: 62ab62ba-1739-48b3-bcf8-48caad8af385
	I0916 11:06:11.490055   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:11.490058   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:11.490755   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"416","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0916 11:06:11.491049   36333 node_ready.go:49] node "multinode-736061" has status "Ready":"True"
	I0916 11:06:11.491063   36333 node_ready.go:38] duration metric: took 12.004548904s for node "multinode-736061" to be "Ready" ...
	I0916 11:06:11.491072   36333 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 11:06:11.491138   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/namespaces/kube-system/pods
	I0916 11:06:11.491147   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:11.491154   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:11.491158   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:11.493315   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:11.493335   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:11.493342   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:11.493346   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:11.493349   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:11.493353   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:11 GMT
	I0916 11:06:11.493357   36333 round_trippers.go:580]     Audit-Id: bd30fb1e-42bd-4dda-accc-b9da7a7ad04b
	I0916 11:06:11.493360   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:11.493979   36333 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"422"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-nlhl2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"6ea84b9d-f364-4e26-8dc8-44c3b4d92417","resourceVersion":"420","creationTimestamp":"2024-09-16T11:05:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"7a8b2d93-d4d7-4d8f-82e4-f4e98c989dd5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:05:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7a8b2d93-d4d7-4d8f-82e4-f4e98c989dd5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 57501 chars]
	I0916 11:06:11.497065   36333 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nlhl2" in "kube-system" namespace to be "Ready" ...
	I0916 11:06:11.497169   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-nlhl2
	I0916 11:06:11.497181   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:11.497191   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:11.497198   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:11.499057   36333 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:06:11.499070   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:11.499078   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:11.499085   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:11.499091   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:11 GMT
	I0916 11:06:11.499098   36333 round_trippers.go:580]     Audit-Id: fbfd6109-2244-44f1-9709-4db413215efa
	I0916 11:06:11.499104   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:11.499111   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:11.499203   36333 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-nlhl2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"6ea84b9d-f364-4e26-8dc8-44c3b4d92417","resourceVersion":"420","creationTimestamp":"2024-09-16T11:05:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"7a8b2d93-d4d7-4d8f-82e4-f4e98c989dd5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:05:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7a8b2d93-d4d7-4d8f-82e4-f4e98c989dd5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6703 chars]
	I0916 11:06:11.499591   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:06:11.499615   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:11.499625   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:11.499629   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:11.501696   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:11.501710   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:11.501715   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:11.501718   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:11.501722   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:11.501724   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:11.501727   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:11 GMT
	I0916 11:06:11.501729   36333 round_trippers.go:580]     Audit-Id: b7f7678f-fb03-4490-a23f-41da8d8fe3fd
	I0916 11:06:11.501962   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"416","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0916 11:06:11.997306   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-nlhl2
	I0916 11:06:11.997333   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:11.997345   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:11.997354   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:12.001267   36333 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 11:06:12.001290   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:12.001299   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:11 GMT
	I0916 11:06:12.001304   36333 round_trippers.go:580]     Audit-Id: 049ef24a-21b9-42dd-9fee-1ec9b1c03c77
	I0916 11:06:12.001309   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:12.001313   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:12.001317   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:12.001327   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:12.001498   36333 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-nlhl2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"6ea84b9d-f364-4e26-8dc8-44c3b4d92417","resourceVersion":"420","creationTimestamp":"2024-09-16T11:05:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"7a8b2d93-d4d7-4d8f-82e4-f4e98c989dd5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:05:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7a8b2d93-d4d7-4d8f-82e4-f4e98c989dd5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6703 chars]
	I0916 11:06:12.002094   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:06:12.002112   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:12.002120   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:12.002124   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:12.007438   36333 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0916 11:06:12.007461   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:12.007468   36333 round_trippers.go:580]     Audit-Id: fb6acf8e-2372-4779-a174-75af486fc8ae
	I0916 11:06:12.007472   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:12.007480   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:12.007485   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:12.007488   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:12.007492   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:12 GMT
	I0916 11:06:12.007582   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"416","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0916 11:06:12.498214   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-nlhl2
	I0916 11:06:12.498244   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:12.498257   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:12.498261   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:12.500943   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:12.500961   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:12.500968   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:12.500971   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:12.500975   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:12.500977   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:12.500980   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:12 GMT
	I0916 11:06:12.500983   36333 round_trippers.go:580]     Audit-Id: 3716abf9-4d97-44ee-b835-716d148db32d
	I0916 11:06:12.501217   36333 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-nlhl2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"6ea84b9d-f364-4e26-8dc8-44c3b4d92417","resourceVersion":"420","creationTimestamp":"2024-09-16T11:05:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"7a8b2d93-d4d7-4d8f-82e4-f4e98c989dd5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:05:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7a8b2d93-d4d7-4d8f-82e4-f4e98c989dd5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6703 chars]
	I0916 11:06:12.501785   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:06:12.501802   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:12.501813   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:12.501818   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:12.503724   36333 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:06:12.503738   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:12.503744   36333 round_trippers.go:580]     Audit-Id: 05cb1d94-303d-4e78-b840-d91db92bdbdb
	I0916 11:06:12.503748   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:12.503750   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:12.503753   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:12.503757   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:12.503760   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:12 GMT
	I0916 11:06:12.504032   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"416","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0916 11:06:12.997667   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-nlhl2
	I0916 11:06:12.997689   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:12.997699   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:12.997702   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:12.999433   36333 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:06:12.999449   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:12.999455   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:12.999459   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:12.999462   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:12.999465   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:12 GMT
	I0916 11:06:12.999469   36333 round_trippers.go:580]     Audit-Id: bae48126-c167-4933-b736-5a674299dc82
	I0916 11:06:12.999471   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:12.999842   36333 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-nlhl2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"6ea84b9d-f364-4e26-8dc8-44c3b4d92417","resourceVersion":"433","creationTimestamp":"2024-09-16T11:05:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"7a8b2d93-d4d7-4d8f-82e4-f4e98c989dd5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:05:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7a8b2d93-d4d7-4d8f-82e4-f4e98c989dd5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6776 chars]
	I0916 11:06:13.000296   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:06:13.000310   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:13.000317   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:13.000321   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:13.002289   36333 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:06:13.002305   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:13.002311   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:13.002316   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:13.002322   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:13.002328   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:12 GMT
	I0916 11:06:13.002333   36333 round_trippers.go:580]     Audit-Id: ec4a0dd6-b0a0-4b65-aa99-ccddecb9886d
	I0916 11:06:13.002337   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:13.002439   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"416","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0916 11:06:13.002722   36333 pod_ready.go:93] pod "coredns-7c65d6cfc9-nlhl2" in "kube-system" namespace has status "Ready":"True"
	I0916 11:06:13.002736   36333 pod_ready.go:82] duration metric: took 1.505648574s for pod "coredns-7c65d6cfc9-nlhl2" in "kube-system" namespace to be "Ready" ...
	I0916 11:06:13.002744   36333 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-736061" in "kube-system" namespace to be "Ready" ...
	I0916 11:06:13.002789   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-736061
	I0916 11:06:13.002797   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:13.002803   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:13.002806   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:13.004330   36333 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:06:13.004344   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:13.004360   36333 round_trippers.go:580]     Audit-Id: 9c977c1a-2111-4f2d-b73b-88dc2584b240
	I0916 11:06:13.004367   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:13.004370   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:13.004374   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:13.004378   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:13.004382   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:12 GMT
	I0916 11:06:13.004780   36333 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-736061","namespace":"kube-system","uid":"f946773c-a82f-4e7e-8148-a81b41b27fa9","resourceVersion":"411","creationTimestamp":"2024-09-16T11:05:53Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.32:2379","kubernetes.io/config.hash":"69d3e8c6e76d0bc1af3482326f7904d1","kubernetes.io/config.mirror":"69d3e8c6e76d0bc1af3482326f7904d1","kubernetes.io/config.seen":"2024-09-16T11:05:53.622995492Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:05:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6418 chars]
	I0916 11:06:13.005178   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:06:13.005191   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:13.005198   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:13.005203   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:13.006652   36333 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:06:13.006661   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:13.006667   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:13.006670   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:13 GMT
	I0916 11:06:13.006673   36333 round_trippers.go:580]     Audit-Id: 5964a6f7-0e23-4a78-ab26-740b9efba3f0
	I0916 11:06:13.006676   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:13.006679   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:13.006681   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:13.007099   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"416","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0916 11:06:13.007382   36333 pod_ready.go:93] pod "etcd-multinode-736061" in "kube-system" namespace has status "Ready":"True"
	I0916 11:06:13.007398   36333 pod_ready.go:82] duration metric: took 4.649318ms for pod "etcd-multinode-736061" in "kube-system" namespace to be "Ready" ...
	I0916 11:06:13.007409   36333 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-736061" in "kube-system" namespace to be "Ready" ...
	I0916 11:06:13.007451   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-736061
	I0916 11:06:13.007458   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:13.007465   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:13.007469   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:13.009054   36333 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:06:13.009069   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:13.009077   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:13.009084   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:13.009087   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:13.009093   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:13 GMT
	I0916 11:06:13.009104   36333 round_trippers.go:580]     Audit-Id: 17f2d678-e228-472c-adfb-1a1d6ff375ff
	I0916 11:06:13.009108   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:13.009327   36333 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-736061","namespace":"kube-system","uid":"bb6b837b-db0a-455d-8055-ec513f470220","resourceVersion":"408","creationTimestamp":"2024-09-16T11:05:53Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.32:8443","kubernetes.io/config.hash":"efede0e1597c8cbe70740f3169f7ec4a","kubernetes.io/config.mirror":"efede0e1597c8cbe70740f3169f7ec4a","kubernetes.io/config.seen":"2024-09-16T11:05:53.622989337Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:05:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7637 chars]
	I0916 11:06:13.009756   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:06:13.009772   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:13.009779   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:13.009782   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:13.011049   36333 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:06:13.011060   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:13.011066   36333 round_trippers.go:580]     Audit-Id: 92ce6c5b-7c7f-4792-be66-2f0cfa85c88d
	I0916 11:06:13.011070   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:13.011073   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:13.011075   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:13.011077   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:13.011080   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:13 GMT
	I0916 11:06:13.011229   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"416","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0916 11:06:13.011534   36333 pod_ready.go:93] pod "kube-apiserver-multinode-736061" in "kube-system" namespace has status "Ready":"True"
	I0916 11:06:13.011547   36333 pod_ready.go:82] duration metric: took 4.132838ms for pod "kube-apiserver-multinode-736061" in "kube-system" namespace to be "Ready" ...
	I0916 11:06:13.011555   36333 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-736061" in "kube-system" namespace to be "Ready" ...
	I0916 11:06:13.011607   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-736061
	I0916 11:06:13.011616   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:13.011622   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:13.011626   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:13.012998   36333 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:06:13.013015   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:13.013024   36333 round_trippers.go:580]     Audit-Id: a4ba53c2-6b2a-4c94-af95-040a6fb841fa
	I0916 11:06:13.013031   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:13.013035   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:13.013039   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:13.013043   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:13.013046   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:13 GMT
	I0916 11:06:13.013346   36333 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-736061","namespace":"kube-system","uid":"53bb4e69-605c-4160-bf0a-f26e83e16ab1","resourceVersion":"412","creationTimestamp":"2024-09-16T11:05:53Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"94d3338940ee73a61a5075650d027904","kubernetes.io/config.mirror":"94d3338940ee73a61a5075650d027904","kubernetes.io/config.seen":"2024-09-16T11:05:53.622993259Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:05:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7198 chars]
	I0916 11:06:13.013794   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:06:13.013810   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:13.013820   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:13.013826   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:13.015589   36333 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:06:13.015604   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:13.015613   36333 round_trippers.go:580]     Audit-Id: 11b44286-7359-4aa9-86a4-95c383baef42
	I0916 11:06:13.015618   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:13.015622   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:13.015634   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:13.015641   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:13.015647   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:13 GMT
	I0916 11:06:13.016085   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"416","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0916 11:06:13.016377   36333 pod_ready.go:93] pod "kube-controller-manager-multinode-736061" in "kube-system" namespace has status "Ready":"True"
	I0916 11:06:13.016393   36333 pod_ready.go:82] duration metric: took 4.831092ms for pod "kube-controller-manager-multinode-736061" in "kube-system" namespace to be "Ready" ...
	I0916 11:06:13.016405   36333 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ftj9p" in "kube-system" namespace to be "Ready" ...
	I0916 11:06:13.016457   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ftj9p
	I0916 11:06:13.016465   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:13.016474   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:13.016482   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:13.017876   36333 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:06:13.017892   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:13.017900   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:13.017904   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:13.017911   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:13.017916   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:13.017923   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:13 GMT
	I0916 11:06:13.017929   36333 round_trippers.go:580]     Audit-Id: 6d42ae98-5d4f-4e69-b809-f90328681ea8
	I0916 11:06:13.018065   36333 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ftj9p","generateName":"kube-proxy-","namespace":"kube-system","uid":"fa72720f-1c4a-46a2-a733-f411ccb6f628","resourceVersion":"398","creationTimestamp":"2024-09-16T11:05:58Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"562d5386-4fc3-48d5-983a-19cdfbbddc77","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:05:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"562d5386-4fc3-48d5-983a-19cdfbbddc77\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6141 chars]
	I0916 11:06:13.087776   36333 request.go:632] Waited for 69.276696ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:06:13.087866   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:06:13.087871   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:13.087878   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:13.087881   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:13.090335   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:13.090354   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:13.090360   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:13 GMT
	I0916 11:06:13.090365   36333 round_trippers.go:580]     Audit-Id: 5cc20204-b636-4a60-9bd6-04d8b3098a2e
	I0916 11:06:13.090369   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:13.090375   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:13.090380   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:13.090386   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:13.090571   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"416","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0916 11:06:13.090955   36333 pod_ready.go:93] pod "kube-proxy-ftj9p" in "kube-system" namespace has status "Ready":"True"
	I0916 11:06:13.090975   36333 pod_ready.go:82] duration metric: took 74.562561ms for pod "kube-proxy-ftj9p" in "kube-system" namespace to be "Ready" ...
	I0916 11:06:13.090984   36333 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-736061" in "kube-system" namespace to be "Ready" ...
	I0916 11:06:13.287430   36333 request.go:632] Waited for 196.359065ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.32:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-736061
	I0916 11:06:13.287488   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-736061
	I0916 11:06:13.287493   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:13.287501   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:13.287505   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:13.289939   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:13.289961   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:13.289971   36333 round_trippers.go:580]     Audit-Id: 66efc13e-c84c-41a4-8eab-cbe270f52f0e
	I0916 11:06:13.289977   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:13.289981   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:13.289985   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:13.289990   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:13.289994   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:13 GMT
	I0916 11:06:13.290318   36333 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-736061","namespace":"kube-system","uid":"25a9a3ee-f264-4bd2-95fc-c8452bedc92b","resourceVersion":"413","creationTimestamp":"2024-09-16T11:05:53Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"de66983060c1e167c6b9498eb8b0a025","kubernetes.io/config.mirror":"de66983060c1e167c6b9498eb8b0a025","kubernetes.io/config.seen":"2024-09-16T11:05:47.723827022Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:05:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4937 chars]
	I0916 11:06:13.486996   36333 request.go:632] Waited for 196.307844ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:06:13.487064   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:06:13.487070   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:13.487092   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:13.487097   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:13.489715   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:13.489738   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:13.489747   36333 round_trippers.go:580]     Audit-Id: 336fca95-58b8-4e6c-b84b-042526fc9fbe
	I0916 11:06:13.489752   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:13.489757   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:13.489764   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:13.489768   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:13.489772   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:13 GMT
	I0916 11:06:13.490442   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"416","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0916 11:06:13.490831   36333 pod_ready.go:93] pod "kube-scheduler-multinode-736061" in "kube-system" namespace has status "Ready":"True"
	I0916 11:06:13.490850   36333 pod_ready.go:82] duration metric: took 399.858732ms for pod "kube-scheduler-multinode-736061" in "kube-system" namespace to be "Ready" ...
	I0916 11:06:13.490860   36333 pod_ready.go:39] duration metric: took 1.999774525s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 11:06:13.490882   36333 api_server.go:52] waiting for apiserver process to appear ...
	I0916 11:06:13.490931   36333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 11:06:13.505992   36333 command_runner.go:130] > 1055
	I0916 11:06:13.506064   36333 api_server.go:72] duration metric: took 14.982447147s to wait for apiserver process to appear ...
	I0916 11:06:13.506079   36333 api_server.go:88] waiting for apiserver healthz status ...
	I0916 11:06:13.506096   36333 api_server.go:253] Checking apiserver healthz at https://192.168.39.32:8443/healthz ...
	I0916 11:06:13.510743   36333 api_server.go:279] https://192.168.39.32:8443/healthz returned 200:
	ok
	I0916 11:06:13.510820   36333 round_trippers.go:463] GET https://192.168.39.32:8443/version
	I0916 11:06:13.510832   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:13.510842   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:13.510846   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:13.511687   36333 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0916 11:06:13.511703   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:13.511710   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:13.511714   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:13.511717   36333 round_trippers.go:580]     Content-Length: 263
	I0916 11:06:13.511721   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:13 GMT
	I0916 11:06:13.511724   36333 round_trippers.go:580]     Audit-Id: dddcaee8-dc5a-43b4-bbb7-4446c3ea6dd4
	I0916 11:06:13.511726   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:13.511729   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:13.511761   36333 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "31",
	  "gitVersion": "v1.31.1",
	  "gitCommit": "948afe5ca072329a73c8e79ed5938717a5cb3d21",
	  "gitTreeState": "clean",
	  "buildDate": "2024-09-11T21:22:08Z",
	  "goVersion": "go1.22.6",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0916 11:06:13.511845   36333 api_server.go:141] control plane version: v1.31.1
	I0916 11:06:13.511863   36333 api_server.go:131] duration metric: took 5.778245ms to wait for apiserver health ...
	I0916 11:06:13.511870   36333 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 11:06:13.687271   36333 request.go:632] Waited for 175.343496ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.32:8443/api/v1/namespaces/kube-system/pods
	I0916 11:06:13.687351   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/namespaces/kube-system/pods
	I0916 11:06:13.687359   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:13.687369   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:13.687378   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:13.691059   36333 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 11:06:13.691080   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:13.691088   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:13.691094   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:13 GMT
	I0916 11:06:13.691099   36333 round_trippers.go:580]     Audit-Id: d5f1331e-ee18-4472-abaf-4ce39ab3590e
	I0916 11:06:13.691104   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:13.691108   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:13.691113   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:13.692310   36333 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"438"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-nlhl2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"6ea84b9d-f364-4e26-8dc8-44c3b4d92417","resourceVersion":"433","creationTimestamp":"2024-09-16T11:05:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"7a8b2d93-d4d7-4d8f-82e4-f4e98c989dd5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:05:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7a8b2d93-d4d7-4d8f-82e4-f4e98c989dd5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 57491 chars]
	I0916 11:06:13.694007   36333 system_pods.go:59] 8 kube-system pods found
	I0916 11:06:13.694031   36333 system_pods.go:61] "coredns-7c65d6cfc9-nlhl2" [6ea84b9d-f364-4e26-8dc8-44c3b4d92417] Running
	I0916 11:06:13.694036   36333 system_pods.go:61] "etcd-multinode-736061" [f946773c-a82f-4e7e-8148-a81b41b27fa9] Running
	I0916 11:06:13.694040   36333 system_pods.go:61] "kindnet-qb4tq" [933f0749-7868-4e96-9b8e-67005545bbc5] Running
	I0916 11:06:13.694043   36333 system_pods.go:61] "kube-apiserver-multinode-736061" [bb6b837b-db0a-455d-8055-ec513f470220] Running
	I0916 11:06:13.694048   36333 system_pods.go:61] "kube-controller-manager-multinode-736061" [53bb4e69-605c-4160-bf0a-f26e83e16ab1] Running
	I0916 11:06:13.694051   36333 system_pods.go:61] "kube-proxy-ftj9p" [fa72720f-1c4a-46a2-a733-f411ccb6f628] Running
	I0916 11:06:13.694054   36333 system_pods.go:61] "kube-scheduler-multinode-736061" [25a9a3ee-f264-4bd2-95fc-c8452bedc92b] Running
	I0916 11:06:13.694057   36333 system_pods.go:61] "storage-provisioner" [5e515ac2-bcf5-4a0f-a3af-b4ee4e03a534] Running
	I0916 11:06:13.694062   36333 system_pods.go:74] duration metric: took 182.187944ms to wait for pod list to return data ...
	I0916 11:06:13.694070   36333 default_sa.go:34] waiting for default service account to be created ...
	I0916 11:06:13.887530   36333 request.go:632] Waited for 193.387272ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.32:8443/api/v1/namespaces/default/serviceaccounts
	I0916 11:06:13.887624   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/namespaces/default/serviceaccounts
	I0916 11:06:13.887631   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:13.887642   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:13.887650   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:13.890587   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:13.890607   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:13.890613   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:13.890617   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:13.890620   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:13.890623   36333 round_trippers.go:580]     Content-Length: 261
	I0916 11:06:13.890626   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:13 GMT
	I0916 11:06:13.890629   36333 round_trippers.go:580]     Audit-Id: dc704cc4-052e-4cd8-9722-add36ef0ebcf
	I0916 11:06:13.890632   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:13.890649   36333 request.go:1351] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"438"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"a7fd93be-2448-40ec-9a95-a7af11f4c24b","resourceVersion":"329","creationTimestamp":"2024-09-16T11:05:58Z"}}]}
	I0916 11:06:13.890920   36333 default_sa.go:45] found service account: "default"
	I0916 11:06:13.890938   36333 default_sa.go:55] duration metric: took 196.864556ms for default service account to be created ...
	I0916 11:06:13.890947   36333 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 11:06:14.087112   36333 request.go:632] Waited for 196.092263ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.32:8443/api/v1/namespaces/kube-system/pods
	I0916 11:06:14.087172   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/namespaces/kube-system/pods
	I0916 11:06:14.087178   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:14.087186   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:14.087190   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:14.090366   36333 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 11:06:14.090395   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:14.090405   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:14 GMT
	I0916 11:06:14.090412   36333 round_trippers.go:580]     Audit-Id: ee138c8d-af1f-4dd1-88d7-5905f846a48a
	I0916 11:06:14.090418   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:14.090423   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:14.090432   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:14.090438   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:14.091044   36333 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"440"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-nlhl2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"6ea84b9d-f364-4e26-8dc8-44c3b4d92417","resourceVersion":"433","creationTimestamp":"2024-09-16T11:05:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"7a8b2d93-d4d7-4d8f-82e4-f4e98c989dd5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:05:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7a8b2d93-d4d7-4d8f-82e4-f4e98c989dd5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 57491 chars]
	I0916 11:06:14.092723   36333 system_pods.go:86] 8 kube-system pods found
	I0916 11:06:14.092742   36333 system_pods.go:89] "coredns-7c65d6cfc9-nlhl2" [6ea84b9d-f364-4e26-8dc8-44c3b4d92417] Running
	I0916 11:06:14.092747   36333 system_pods.go:89] "etcd-multinode-736061" [f946773c-a82f-4e7e-8148-a81b41b27fa9] Running
	I0916 11:06:14.092751   36333 system_pods.go:89] "kindnet-qb4tq" [933f0749-7868-4e96-9b8e-67005545bbc5] Running
	I0916 11:06:14.092754   36333 system_pods.go:89] "kube-apiserver-multinode-736061" [bb6b837b-db0a-455d-8055-ec513f470220] Running
	I0916 11:06:14.092760   36333 system_pods.go:89] "kube-controller-manager-multinode-736061" [53bb4e69-605c-4160-bf0a-f26e83e16ab1] Running
	I0916 11:06:14.092764   36333 system_pods.go:89] "kube-proxy-ftj9p" [fa72720f-1c4a-46a2-a733-f411ccb6f628] Running
	I0916 11:06:14.092772   36333 system_pods.go:89] "kube-scheduler-multinode-736061" [25a9a3ee-f264-4bd2-95fc-c8452bedc92b] Running
	I0916 11:06:14.092776   36333 system_pods.go:89] "storage-provisioner" [5e515ac2-bcf5-4a0f-a3af-b4ee4e03a534] Running
	I0916 11:06:14.092782   36333 system_pods.go:126] duration metric: took 201.830102ms to wait for k8s-apps to be running ...
	I0916 11:06:14.092791   36333 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 11:06:14.092830   36333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 11:06:14.108124   36333 system_svc.go:56] duration metric: took 15.325ms WaitForService to wait for kubelet
	I0916 11:06:14.108152   36333 kubeadm.go:582] duration metric: took 15.5845367s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 11:06:14.108173   36333 node_conditions.go:102] verifying NodePressure condition ...
	I0916 11:06:14.287839   36333 request.go:632] Waited for 179.59535ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.32:8443/api/v1/nodes
	I0916 11:06:14.287910   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes
	I0916 11:06:14.287923   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:14.287931   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:14.287936   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:14.290746   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:14.290764   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:14.290770   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:14.290774   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:14.290778   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:14.290781   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:14.290783   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:14 GMT
	I0916 11:06:14.290785   36333 round_trippers.go:580]     Audit-Id: c51e4b4c-4d6b-4976-bbaa-01dc82a04c9d
	I0916 11:06:14.290954   36333 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"440"},"items":[{"metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"416","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 5951 chars]
	I0916 11:06:14.291328   36333 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0916 11:06:14.291349   36333 node_conditions.go:123] node cpu capacity is 2
	I0916 11:06:14.291361   36333 node_conditions.go:105] duration metric: took 183.182804ms to run NodePressure ...
	I0916 11:06:14.291378   36333 start.go:241] waiting for startup goroutines ...
	I0916 11:06:14.291388   36333 start.go:246] waiting for cluster config update ...
	I0916 11:06:14.291397   36333 start.go:255] writing updated cluster config ...
	I0916 11:06:14.293212   36333 out.go:201] 
	I0916 11:06:14.294449   36333 config.go:182] Loaded profile config "multinode-736061": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:06:14.294514   36333 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/config.json ...
	I0916 11:06:14.295858   36333 out.go:177] * Starting "multinode-736061-m02" worker node in "multinode-736061" cluster
	I0916 11:06:14.296957   36333 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 11:06:14.296978   36333 cache.go:56] Caching tarball of preloaded images
	I0916 11:06:14.297080   36333 preload.go:172] Found /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 11:06:14.297090   36333 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 11:06:14.297169   36333 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/config.json ...
	I0916 11:06:14.297332   36333 start.go:360] acquireMachinesLock for multinode-736061-m02: {Name:mk413037138532b69f77e412c3960796c09316f1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 11:06:14.297376   36333 start.go:364] duration metric: took 26.88µs to acquireMachinesLock for "multinode-736061-m02"
	I0916 11:06:14.297392   36333 start.go:93] Provisioning new machine with config: &{Name:multinode-736061 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-736061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.32 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0916 11:06:14.297453   36333 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0916 11:06:14.299005   36333 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0916 11:06:14.299098   36333 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 11:06:14.299139   36333 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 11:06:14.313697   36333 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45673
	I0916 11:06:14.314138   36333 main.go:141] libmachine: () Calling .GetVersion
	I0916 11:06:14.314631   36333 main.go:141] libmachine: Using API Version  1
	I0916 11:06:14.314652   36333 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 11:06:14.314925   36333 main.go:141] libmachine: () Calling .GetMachineName
	I0916 11:06:14.315113   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetMachineName
	I0916 11:06:14.315246   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .DriverName
	I0916 11:06:14.315398   36333 start.go:159] libmachine.API.Create for "multinode-736061" (driver="kvm2")
	I0916 11:06:14.315428   36333 client.go:168] LocalClient.Create starting
	I0916 11:06:14.315458   36333 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem
	I0916 11:06:14.315488   36333 main.go:141] libmachine: Decoding PEM data...
	I0916 11:06:14.315501   36333 main.go:141] libmachine: Parsing certificate...
	I0916 11:06:14.315551   36333 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem
	I0916 11:06:14.315571   36333 main.go:141] libmachine: Decoding PEM data...
	I0916 11:06:14.315581   36333 main.go:141] libmachine: Parsing certificate...
	I0916 11:06:14.315594   36333 main.go:141] libmachine: Running pre-create checks...
	I0916 11:06:14.315601   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .PreCreateCheck
	I0916 11:06:14.315736   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetConfigRaw
	I0916 11:06:14.316069   36333 main.go:141] libmachine: Creating machine...
	I0916 11:06:14.316081   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .Create
	I0916 11:06:14.316204   36333 main.go:141] libmachine: (multinode-736061-m02) Creating KVM machine...
	I0916 11:06:14.317493   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | found existing default KVM network
	I0916 11:06:14.317650   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | found existing private KVM network mk-multinode-736061
	I0916 11:06:14.317799   36333 main.go:141] libmachine: (multinode-736061-m02) Setting up store path in /home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061-m02 ...
	I0916 11:06:14.317817   36333 main.go:141] libmachine: (multinode-736061-m02) Building disk image from file:///home/jenkins/minikube-integration/19651-3851/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso
	I0916 11:06:14.317887   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | I0916 11:06:14.317799   36743 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 11:06:14.317991   36333 main.go:141] libmachine: (multinode-736061-m02) Downloading /home/jenkins/minikube-integration/19651-3851/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19651-3851/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso...
	I0916 11:06:14.549863   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | I0916 11:06:14.549740   36743 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061-m02/id_rsa...
	I0916 11:06:14.787226   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | I0916 11:06:14.787096   36743 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061-m02/multinode-736061-m02.rawdisk...
	I0916 11:06:14.787254   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | Writing magic tar header
	I0916 11:06:14.787268   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | Writing SSH key tar header
	I0916 11:06:14.787278   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | I0916 11:06:14.787200   36743 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061-m02 ...
	I0916 11:06:14.787317   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061-m02
	I0916 11:06:14.787336   36333 main.go:141] libmachine: (multinode-736061-m02) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061-m02 (perms=drwx------)
	I0916 11:06:14.787363   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851/.minikube/machines
	I0916 11:06:14.787378   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 11:06:14.787390   36333 main.go:141] libmachine: (multinode-736061-m02) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851/.minikube/machines (perms=drwxr-xr-x)
	I0916 11:06:14.787401   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851
	I0916 11:06:14.787414   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0916 11:06:14.787431   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | Checking permissions on dir: /home/jenkins
	I0916 11:06:14.787452   36333 main.go:141] libmachine: (multinode-736061-m02) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851/.minikube (perms=drwxr-xr-x)
	I0916 11:06:14.787470   36333 main.go:141] libmachine: (multinode-736061-m02) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851 (perms=drwxrwxr-x)
	I0916 11:06:14.787482   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | Checking permissions on dir: /home
	I0916 11:06:14.787489   36333 main.go:141] libmachine: (multinode-736061-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0916 11:06:14.787495   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | Skipping /home - not owner
	I0916 11:06:14.787501   36333 main.go:141] libmachine: (multinode-736061-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0916 11:06:14.787509   36333 main.go:141] libmachine: (multinode-736061-m02) Creating domain...
	I0916 11:06:14.788405   36333 main.go:141] libmachine: (multinode-736061-m02) define libvirt domain using xml: 
	I0916 11:06:14.788426   36333 main.go:141] libmachine: (multinode-736061-m02) <domain type='kvm'>
	I0916 11:06:14.788434   36333 main.go:141] libmachine: (multinode-736061-m02)   <name>multinode-736061-m02</name>
	I0916 11:06:14.788439   36333 main.go:141] libmachine: (multinode-736061-m02)   <memory unit='MiB'>2200</memory>
	I0916 11:06:14.788449   36333 main.go:141] libmachine: (multinode-736061-m02)   <vcpu>2</vcpu>
	I0916 11:06:14.788454   36333 main.go:141] libmachine: (multinode-736061-m02)   <features>
	I0916 11:06:14.788459   36333 main.go:141] libmachine: (multinode-736061-m02)     <acpi/>
	I0916 11:06:14.788463   36333 main.go:141] libmachine: (multinode-736061-m02)     <apic/>
	I0916 11:06:14.788468   36333 main.go:141] libmachine: (multinode-736061-m02)     <pae/>
	I0916 11:06:14.788476   36333 main.go:141] libmachine: (multinode-736061-m02)     
	I0916 11:06:14.788481   36333 main.go:141] libmachine: (multinode-736061-m02)   </features>
	I0916 11:06:14.788488   36333 main.go:141] libmachine: (multinode-736061-m02)   <cpu mode='host-passthrough'>
	I0916 11:06:14.788492   36333 main.go:141] libmachine: (multinode-736061-m02)   
	I0916 11:06:14.788496   36333 main.go:141] libmachine: (multinode-736061-m02)   </cpu>
	I0916 11:06:14.788501   36333 main.go:141] libmachine: (multinode-736061-m02)   <os>
	I0916 11:06:14.788507   36333 main.go:141] libmachine: (multinode-736061-m02)     <type>hvm</type>
	I0916 11:06:14.788513   36333 main.go:141] libmachine: (multinode-736061-m02)     <boot dev='cdrom'/>
	I0916 11:06:14.788523   36333 main.go:141] libmachine: (multinode-736061-m02)     <boot dev='hd'/>
	I0916 11:06:14.788529   36333 main.go:141] libmachine: (multinode-736061-m02)     <bootmenu enable='no'/>
	I0916 11:06:14.788533   36333 main.go:141] libmachine: (multinode-736061-m02)   </os>
	I0916 11:06:14.788538   36333 main.go:141] libmachine: (multinode-736061-m02)   <devices>
	I0916 11:06:14.788542   36333 main.go:141] libmachine: (multinode-736061-m02)     <disk type='file' device='cdrom'>
	I0916 11:06:14.788550   36333 main.go:141] libmachine: (multinode-736061-m02)       <source file='/home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061-m02/boot2docker.iso'/>
	I0916 11:06:14.788556   36333 main.go:141] libmachine: (multinode-736061-m02)       <target dev='hdc' bus='scsi'/>
	I0916 11:06:14.788561   36333 main.go:141] libmachine: (multinode-736061-m02)       <readonly/>
	I0916 11:06:14.788566   36333 main.go:141] libmachine: (multinode-736061-m02)     </disk>
	I0916 11:06:14.788573   36333 main.go:141] libmachine: (multinode-736061-m02)     <disk type='file' device='disk'>
	I0916 11:06:14.788583   36333 main.go:141] libmachine: (multinode-736061-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0916 11:06:14.788596   36333 main.go:141] libmachine: (multinode-736061-m02)       <source file='/home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061-m02/multinode-736061-m02.rawdisk'/>
	I0916 11:06:14.788607   36333 main.go:141] libmachine: (multinode-736061-m02)       <target dev='hda' bus='virtio'/>
	I0916 11:06:14.788613   36333 main.go:141] libmachine: (multinode-736061-m02)     </disk>
	I0916 11:06:14.788618   36333 main.go:141] libmachine: (multinode-736061-m02)     <interface type='network'>
	I0916 11:06:14.788624   36333 main.go:141] libmachine: (multinode-736061-m02)       <source network='mk-multinode-736061'/>
	I0916 11:06:14.788629   36333 main.go:141] libmachine: (multinode-736061-m02)       <model type='virtio'/>
	I0916 11:06:14.788634   36333 main.go:141] libmachine: (multinode-736061-m02)     </interface>
	I0916 11:06:14.788638   36333 main.go:141] libmachine: (multinode-736061-m02)     <interface type='network'>
	I0916 11:06:14.788644   36333 main.go:141] libmachine: (multinode-736061-m02)       <source network='default'/>
	I0916 11:06:14.788659   36333 main.go:141] libmachine: (multinode-736061-m02)       <model type='virtio'/>
	I0916 11:06:14.788666   36333 main.go:141] libmachine: (multinode-736061-m02)     </interface>
	I0916 11:06:14.788671   36333 main.go:141] libmachine: (multinode-736061-m02)     <serial type='pty'>
	I0916 11:06:14.788678   36333 main.go:141] libmachine: (multinode-736061-m02)       <target port='0'/>
	I0916 11:06:14.788685   36333 main.go:141] libmachine: (multinode-736061-m02)     </serial>
	I0916 11:06:14.788701   36333 main.go:141] libmachine: (multinode-736061-m02)     <console type='pty'>
	I0916 11:06:14.788717   36333 main.go:141] libmachine: (multinode-736061-m02)       <target type='serial' port='0'/>
	I0916 11:06:14.788756   36333 main.go:141] libmachine: (multinode-736061-m02)     </console>
	I0916 11:06:14.788776   36333 main.go:141] libmachine: (multinode-736061-m02)     <rng model='virtio'>
	I0916 11:06:14.788788   36333 main.go:141] libmachine: (multinode-736061-m02)       <backend model='random'>/dev/random</backend>
	I0916 11:06:14.788799   36333 main.go:141] libmachine: (multinode-736061-m02)     </rng>
	I0916 11:06:14.788810   36333 main.go:141] libmachine: (multinode-736061-m02)     
	I0916 11:06:14.788819   36333 main.go:141] libmachine: (multinode-736061-m02)     
	I0916 11:06:14.788829   36333 main.go:141] libmachine: (multinode-736061-m02)   </devices>
	I0916 11:06:14.788839   36333 main.go:141] libmachine: (multinode-736061-m02) </domain>
	I0916 11:06:14.788858   36333 main.go:141] libmachine: (multinode-736061-m02) 
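
The XML logged above is the libvirt domain definition for the new worker VM: 2 vCPUs, 2200 MiB of memory, the boot2docker ISO attached as a cdrom, the raw disk image, and two virtio NICs (one on the "default" network, one on "mk-multinode-736061"). A minimal sketch, assuming the standard virsh client is available on the Jenkins host and using the qemu:///system URI from the machine config above, of how the defined domain could be inspected:

    # Dump the stored domain XML; it should match the definition logged above
    virsh --connect qemu:///system dumpxml multinode-736061-m02

    # List the domain's interfaces with their networks and MAC addresses
    # (the log below reports 52:54:00:f7:d3:a0 on "default" and 52:54:00:ab:7f:3f on "mk-multinode-736061")
    virsh --connect qemu:///system domiflist multinode-736061-m02
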
	I0916 11:06:14.795470   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined MAC address 52:54:00:f7:d3:a0 in network default
	I0916 11:06:14.796000   36333 main.go:141] libmachine: (multinode-736061-m02) Ensuring networks are active...
	I0916 11:06:14.796022   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:14.796683   36333 main.go:141] libmachine: (multinode-736061-m02) Ensuring network default is active
	I0916 11:06:14.796930   36333 main.go:141] libmachine: (multinode-736061-m02) Ensuring network mk-multinode-736061 is active
	I0916 11:06:14.797372   36333 main.go:141] libmachine: (multinode-736061-m02) Getting domain xml...
	I0916 11:06:14.798084   36333 main.go:141] libmachine: (multinode-736061-m02) Creating domain...
	I0916 11:06:15.994264   36333 main.go:141] libmachine: (multinode-736061-m02) Waiting to get IP...
	I0916 11:06:15.995084   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:15.995470   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | unable to find current IP address of domain multinode-736061-m02 in network mk-multinode-736061
	I0916 11:06:15.995503   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | I0916 11:06:15.995466   36743 retry.go:31] will retry after 256.165137ms: waiting for machine to come up
	I0916 11:06:16.252819   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:16.253216   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | unable to find current IP address of domain multinode-736061-m02 in network mk-multinode-736061
	I0916 11:06:16.253247   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | I0916 11:06:16.253174   36743 retry.go:31] will retry after 256.581641ms: waiting for machine to come up
	I0916 11:06:16.511597   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:16.512046   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | unable to find current IP address of domain multinode-736061-m02 in network mk-multinode-736061
	I0916 11:06:16.512078   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | I0916 11:06:16.511989   36743 retry.go:31] will retry after 470.100013ms: waiting for machine to come up
	I0916 11:06:16.983320   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:16.983794   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | unable to find current IP address of domain multinode-736061-m02 in network mk-multinode-736061
	I0916 11:06:16.983822   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | I0916 11:06:16.983738   36743 retry.go:31] will retry after 481.533252ms: waiting for machine to come up
	I0916 11:06:17.466315   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:17.466714   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | unable to find current IP address of domain multinode-736061-m02 in network mk-multinode-736061
	I0916 11:06:17.466739   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | I0916 11:06:17.466674   36743 retry.go:31] will retry after 526.97274ms: waiting for machine to come up
	I0916 11:06:17.995390   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:17.995770   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | unable to find current IP address of domain multinode-736061-m02 in network mk-multinode-736061
	I0916 11:06:17.995797   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | I0916 11:06:17.995725   36743 retry.go:31] will retry after 715.156872ms: waiting for machine to come up
	I0916 11:06:18.712619   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:18.712975   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | unable to find current IP address of domain multinode-736061-m02 in network mk-multinode-736061
	I0916 11:06:18.713005   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | I0916 11:06:18.712955   36743 retry.go:31] will retry after 1.04953302s: waiting for machine to come up
	I0916 11:06:19.764242   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:19.764720   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | unable to find current IP address of domain multinode-736061-m02 in network mk-multinode-736061
	I0916 11:06:19.764746   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | I0916 11:06:19.764678   36743 retry.go:31] will retry after 1.464498529s: waiting for machine to come up
	I0916 11:06:21.231491   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:21.231895   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | unable to find current IP address of domain multinode-736061-m02 in network mk-multinode-736061
	I0916 11:06:21.231924   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | I0916 11:06:21.231846   36743 retry.go:31] will retry after 1.276932559s: waiting for machine to come up
	I0916 11:06:22.510085   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:22.510462   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | unable to find current IP address of domain multinode-736061-m02 in network mk-multinode-736061
	I0916 11:06:22.510492   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | I0916 11:06:22.510406   36743 retry.go:31] will retry after 2.116322467s: waiting for machine to come up
	I0916 11:06:24.628072   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:24.628517   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | unable to find current IP address of domain multinode-736061-m02 in network mk-multinode-736061
	I0916 11:06:24.628565   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | I0916 11:06:24.628488   36743 retry.go:31] will retry after 1.82576742s: waiting for machine to come up
	I0916 11:06:26.456449   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:26.456879   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | unable to find current IP address of domain multinode-736061-m02 in network mk-multinode-736061
	I0916 11:06:26.456902   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | I0916 11:06:26.456832   36743 retry.go:31] will retry after 3.525211369s: waiting for machine to come up
	I0916 11:06:29.983080   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:29.983452   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | unable to find current IP address of domain multinode-736061-m02 in network mk-multinode-736061
	I0916 11:06:29.983481   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | I0916 11:06:29.983403   36743 retry.go:31] will retry after 4.1489865s: waiting for machine to come up
	I0916 11:06:34.136632   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:34.137015   36333 main.go:141] libmachine: (multinode-736061-m02) Found IP for machine: 192.168.39.215
	I0916 11:06:34.137038   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has current primary IP address 192.168.39.215 and MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:34.137046   36333 main.go:141] libmachine: (multinode-736061-m02) Reserving static IP address...
	I0916 11:06:34.137377   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | unable to find host DHCP lease matching {name: "multinode-736061-m02", mac: "52:54:00:ab:7f:3f", ip: "192.168.39.215"} in network mk-multinode-736061
	I0916 11:06:34.212620   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | Getting to WaitForSSH function...
	I0916 11:06:34.212664   36333 main.go:141] libmachine: (multinode-736061-m02) Reserved static IP address: 192.168.39.215
	I0916 11:06:34.212684   36333 main.go:141] libmachine: (multinode-736061-m02) Waiting for SSH to be available...
	I0916 11:06:34.215237   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:34.215601   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:ab:7f:3f", ip: ""} in network mk-multinode-736061
	I0916 11:06:34.215624   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | unable to find defined IP address of network mk-multinode-736061 interface with MAC address 52:54:00:ab:7f:3f
	I0916 11:06:34.215724   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | Using SSH client type: external
	I0916 11:06:34.215747   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061-m02/id_rsa (-rw-------)
	I0916 11:06:34.215785   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0916 11:06:34.215799   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | About to run SSH command:
	I0916 11:06:34.215813   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | exit 0
	I0916 11:06:34.219441   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | SSH cmd err, output: exit status 255: 
	I0916 11:06:34.219459   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0916 11:06:34.219479   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | command : exit 0
	I0916 11:06:34.219486   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | err     : exit status 255
	I0916 11:06:34.219500   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | output  : 
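
This first SSH probe runs before the DHCP lease for the new VM is visible, so the assembled command has an empty host (note the bare "docker@" in the argument list above) and fails with exit status 255; the retry a few seconds later, once 192.168.39.215 is known, succeeds. Flattened into the external ssh invocation the logged argument list corresponds to (a reconstruction, assuming the arguments are passed to /usr/bin/ssh in the order shown with "exit 0" appended as the remote command):

    /usr/bin/ssh -F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 \
      -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet \
      -o PasswordAuthentication=no -o ServerAliveInterval=60 \
      -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
      docker@192.168.39.215 -o IdentitiesOnly=yes \
      -i /home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061-m02/id_rsa \
      -p 22 "exit 0"
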
	I0916 11:06:37.221305   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | Getting to WaitForSSH function...
	I0916 11:06:37.223785   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:37.224218   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:7f:3f", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:06:28 +0000 UTC Type:0 Mac:52:54:00:ab:7f:3f Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:multinode-736061-m02 Clientid:01:52:54:00:ab:7f:3f}
	I0916 11:06:37.224249   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:37.224389   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | Using SSH client type: external
	I0916 11:06:37.224425   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061-m02/id_rsa (-rw-------)
	I0916 11:06:37.224453   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.215 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0916 11:06:37.224466   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | About to run SSH command:
	I0916 11:06:37.224478   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | exit 0
	I0916 11:06:37.353335   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | SSH cmd err, output: <nil>: 
	I0916 11:06:37.353612   36333 main.go:141] libmachine: (multinode-736061-m02) KVM machine creation complete!
	I0916 11:06:37.353916   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetConfigRaw
	I0916 11:06:37.354454   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .DriverName
	I0916 11:06:37.354670   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .DriverName
	I0916 11:06:37.354813   36333 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0916 11:06:37.354837   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetState
	I0916 11:06:37.356155   36333 main.go:141] libmachine: Detecting operating system of created instance...
	I0916 11:06:37.356168   36333 main.go:141] libmachine: Waiting for SSH to be available...
	I0916 11:06:37.356173   36333 main.go:141] libmachine: Getting to WaitForSSH function...
	I0916 11:06:37.356178   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHHostname
	I0916 11:06:37.358470   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:37.358821   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:7f:3f", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:06:28 +0000 UTC Type:0 Mac:52:54:00:ab:7f:3f Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:multinode-736061-m02 Clientid:01:52:54:00:ab:7f:3f}
	I0916 11:06:37.358851   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:37.359033   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHPort
	I0916 11:06:37.359202   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHKeyPath
	I0916 11:06:37.359379   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHKeyPath
	I0916 11:06:37.359518   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHUsername
	I0916 11:06:37.359712   36333 main.go:141] libmachine: Using SSH client type: native
	I0916 11:06:37.359921   36333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0916 11:06:37.359934   36333 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0916 11:06:37.472504   36333 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 11:06:37.472530   36333 main.go:141] libmachine: Detecting the provisioner...
	I0916 11:06:37.472541   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHHostname
	I0916 11:06:37.475233   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:37.475607   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:7f:3f", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:06:28 +0000 UTC Type:0 Mac:52:54:00:ab:7f:3f Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:multinode-736061-m02 Clientid:01:52:54:00:ab:7f:3f}
	I0916 11:06:37.475636   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:37.475857   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHPort
	I0916 11:06:37.476043   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHKeyPath
	I0916 11:06:37.476177   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHKeyPath
	I0916 11:06:37.476273   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHUsername
	I0916 11:06:37.476421   36333 main.go:141] libmachine: Using SSH client type: native
	I0916 11:06:37.476603   36333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0916 11:06:37.476615   36333 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0916 11:06:37.589999   36333 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0916 11:06:37.590067   36333 main.go:141] libmachine: found compatible host: buildroot
	I0916 11:06:37.590080   36333 main.go:141] libmachine: Provisioning with buildroot...
	I0916 11:06:37.590090   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetMachineName
	I0916 11:06:37.590330   36333 buildroot.go:166] provisioning hostname "multinode-736061-m02"
	I0916 11:06:37.590353   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetMachineName
	I0916 11:06:37.590535   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHHostname
	I0916 11:06:37.593099   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:37.593511   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:7f:3f", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:06:28 +0000 UTC Type:0 Mac:52:54:00:ab:7f:3f Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:multinode-736061-m02 Clientid:01:52:54:00:ab:7f:3f}
	I0916 11:06:37.593545   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:37.593707   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHPort
	I0916 11:06:37.593913   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHKeyPath
	I0916 11:06:37.594073   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHKeyPath
	I0916 11:06:37.594252   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHUsername
	I0916 11:06:37.594426   36333 main.go:141] libmachine: Using SSH client type: native
	I0916 11:06:37.594610   36333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0916 11:06:37.594626   36333 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-736061-m02 && echo "multinode-736061-m02" | sudo tee /etc/hostname
	I0916 11:06:37.725054   36333 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-736061-m02
	
	I0916 11:06:37.725083   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHHostname
	I0916 11:06:37.727908   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:37.728266   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:7f:3f", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:06:28 +0000 UTC Type:0 Mac:52:54:00:ab:7f:3f Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:multinode-736061-m02 Clientid:01:52:54:00:ab:7f:3f}
	I0916 11:06:37.728290   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:37.728459   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHPort
	I0916 11:06:37.728603   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHKeyPath
	I0916 11:06:37.728791   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHKeyPath
	I0916 11:06:37.728929   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHUsername
	I0916 11:06:37.729108   36333 main.go:141] libmachine: Using SSH client type: native
	I0916 11:06:37.729301   36333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0916 11:06:37.729318   36333 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-736061-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-736061-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-736061-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 11:06:37.850812   36333 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 11:06:37.850838   36333 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3851/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3851/.minikube}
	I0916 11:06:37.850861   36333 buildroot.go:174] setting up certificates
	I0916 11:06:37.850873   36333 provision.go:84] configureAuth start
	I0916 11:06:37.850887   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetMachineName
	I0916 11:06:37.851152   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetIP
	I0916 11:06:37.853960   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:37.854316   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:7f:3f", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:06:28 +0000 UTC Type:0 Mac:52:54:00:ab:7f:3f Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:multinode-736061-m02 Clientid:01:52:54:00:ab:7f:3f}
	I0916 11:06:37.854352   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:37.854551   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHHostname
	I0916 11:06:37.857790   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:37.858201   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:7f:3f", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:06:28 +0000 UTC Type:0 Mac:52:54:00:ab:7f:3f Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:multinode-736061-m02 Clientid:01:52:54:00:ab:7f:3f}
	I0916 11:06:37.858229   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:37.858390   36333 provision.go:143] copyHostCerts
	I0916 11:06:37.858422   36333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem
	I0916 11:06:37.858461   36333 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem, removing ...
	I0916 11:06:37.858470   36333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem
	I0916 11:06:37.858532   36333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem (1679 bytes)
	I0916 11:06:37.858604   36333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem
	I0916 11:06:37.858621   36333 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem, removing ...
	I0916 11:06:37.858634   36333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem
	I0916 11:06:37.858659   36333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem (1082 bytes)
	I0916 11:06:37.858701   36333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem
	I0916 11:06:37.858718   36333 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem, removing ...
	I0916 11:06:37.858724   36333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem
	I0916 11:06:37.858743   36333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem (1123 bytes)
	I0916 11:06:37.858790   36333 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem org=jenkins.multinode-736061-m02 san=[127.0.0.1 192.168.39.215 localhost minikube multinode-736061-m02]
	I0916 11:06:37.923156   36333 provision.go:177] copyRemoteCerts
	I0916 11:06:37.923208   36333 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 11:06:37.923231   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHHostname
	I0916 11:06:37.925836   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:37.926258   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:7f:3f", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:06:28 +0000 UTC Type:0 Mac:52:54:00:ab:7f:3f Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:multinode-736061-m02 Clientid:01:52:54:00:ab:7f:3f}
	I0916 11:06:37.926290   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:37.926437   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHPort
	I0916 11:06:37.926626   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHKeyPath
	I0916 11:06:37.926793   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHUsername
	I0916 11:06:37.926926   36333 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061-m02/id_rsa Username:docker}
	I0916 11:06:38.012129   36333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 11:06:38.012207   36333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 11:06:38.037100   36333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 11:06:38.037189   36333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0916 11:06:38.061563   36333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 11:06:38.061639   36333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
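
The three copies above place the CA certificate, the freshly generated server certificate (SANs 127.0.0.1, 192.168.39.215, localhost, minikube and multinode-736061-m02, per the provision line above) and its key under /etc/docker on the new node. A small sketch, assuming SSH access with the same key used by the provisioner, of how the copied server certificate and its SANs could be checked from the host:

    ssh -o StrictHostKeyChecking=no \
        -i /home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061-m02/id_rsa \
        docker@192.168.39.215 \
        "sudo openssl x509 -in /etc/docker/server.pem -noout -text" | grep -A1 "Subject Alternative Name"
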
	I0916 11:06:38.086240   36333 provision.go:87] duration metric: took 235.355849ms to configureAuth
	I0916 11:06:38.086275   36333 buildroot.go:189] setting minikube options for container-runtime
	I0916 11:06:38.086480   36333 config.go:182] Loaded profile config "multinode-736061": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:06:38.086569   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHHostname
	I0916 11:06:38.089063   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:38.089497   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:7f:3f", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:06:28 +0000 UTC Type:0 Mac:52:54:00:ab:7f:3f Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:multinode-736061-m02 Clientid:01:52:54:00:ab:7f:3f}
	I0916 11:06:38.089523   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:38.089726   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHPort
	I0916 11:06:38.089949   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHKeyPath
	I0916 11:06:38.090094   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHKeyPath
	I0916 11:06:38.090233   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHUsername
	I0916 11:06:38.090377   36333 main.go:141] libmachine: Using SSH client type: native
	I0916 11:06:38.090580   36333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0916 11:06:38.090606   36333 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 11:06:38.321227   36333 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 11:06:38.321256   36333 main.go:141] libmachine: Checking connection to Docker...
	I0916 11:06:38.321267   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetURL
	I0916 11:06:38.322472   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | Using libvirt version 6000000
	I0916 11:06:38.324838   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:38.325188   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:7f:3f", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:06:28 +0000 UTC Type:0 Mac:52:54:00:ab:7f:3f Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:multinode-736061-m02 Clientid:01:52:54:00:ab:7f:3f}
	I0916 11:06:38.325217   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:38.325403   36333 main.go:141] libmachine: Docker is up and running!
	I0916 11:06:38.325423   36333 main.go:141] libmachine: Reticulating splines...
	I0916 11:06:38.325430   36333 client.go:171] duration metric: took 24.009992581s to LocalClient.Create
	I0916 11:06:38.325453   36333 start.go:167] duration metric: took 24.010057312s to libmachine.API.Create "multinode-736061"
	I0916 11:06:38.325463   36333 start.go:293] postStartSetup for "multinode-736061-m02" (driver="kvm2")
	I0916 11:06:38.325472   36333 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 11:06:38.325488   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .DriverName
	I0916 11:06:38.325735   36333 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 11:06:38.325761   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHHostname
	I0916 11:06:38.327885   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:38.328246   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:7f:3f", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:06:28 +0000 UTC Type:0 Mac:52:54:00:ab:7f:3f Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:multinode-736061-m02 Clientid:01:52:54:00:ab:7f:3f}
	I0916 11:06:38.328274   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:38.328401   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHPort
	I0916 11:06:38.328576   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHKeyPath
	I0916 11:06:38.328755   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHUsername
	I0916 11:06:38.328893   36333 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061-m02/id_rsa Username:docker}
	I0916 11:06:38.417551   36333 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 11:06:38.422302   36333 command_runner.go:130] > NAME=Buildroot
	I0916 11:06:38.422325   36333 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0916 11:06:38.422330   36333 command_runner.go:130] > ID=buildroot
	I0916 11:06:38.422338   36333 command_runner.go:130] > VERSION_ID=2023.02.9
	I0916 11:06:38.422344   36333 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0916 11:06:38.422380   36333 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 11:06:38.422396   36333 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3851/.minikube/addons for local assets ...
	I0916 11:06:38.422482   36333 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3851/.minikube/files for local assets ...
	I0916 11:06:38.422578   36333 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem -> 112032.pem in /etc/ssl/certs
	I0916 11:06:38.422590   36333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem -> /etc/ssl/certs/112032.pem
	I0916 11:06:38.422721   36333 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 11:06:38.432790   36333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem --> /etc/ssl/certs/112032.pem (1708 bytes)
	I0916 11:06:38.457473   36333 start.go:296] duration metric: took 131.99444ms for postStartSetup
	I0916 11:06:38.457527   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetConfigRaw
	I0916 11:06:38.458085   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetIP
	I0916 11:06:38.460620   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:38.461040   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:7f:3f", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:06:28 +0000 UTC Type:0 Mac:52:54:00:ab:7f:3f Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:multinode-736061-m02 Clientid:01:52:54:00:ab:7f:3f}
	I0916 11:06:38.461064   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:38.461314   36333 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/config.json ...
	I0916 11:06:38.461550   36333 start.go:128] duration metric: took 24.164086939s to createHost
	I0916 11:06:38.461575   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHHostname
	I0916 11:06:38.463833   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:38.464136   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:7f:3f", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:06:28 +0000 UTC Type:0 Mac:52:54:00:ab:7f:3f Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:multinode-736061-m02 Clientid:01:52:54:00:ab:7f:3f}
	I0916 11:06:38.464164   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:38.464287   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHPort
	I0916 11:06:38.464459   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHKeyPath
	I0916 11:06:38.464618   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHKeyPath
	I0916 11:06:38.464770   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHUsername
	I0916 11:06:38.464924   36333 main.go:141] libmachine: Using SSH client type: native
	I0916 11:06:38.465074   36333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0916 11:06:38.465083   36333 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 11:06:38.578075   36333 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726484798.554835726
	
	I0916 11:06:38.578110   36333 fix.go:216] guest clock: 1726484798.554835726
	I0916 11:06:38.578122   36333 fix.go:229] Guest: 2024-09-16 11:06:38.554835726 +0000 UTC Remote: 2024-09-16 11:06:38.461564512 +0000 UTC m=+84.272513037 (delta=93.271214ms)
	I0916 11:06:38.578147   36333 fix.go:200] guest clock delta is within tolerance: 93.271214ms
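
A quick arithmetic check of the logged delta: the guest reported 1726484798.554835726 (from date +%s.%N above) and the host-side reference time was 2024-09-16 11:06:38.461564512 UTC, i.e. 1726484798.461564512 as an epoch value, so

    1726484798.554835726 - 1726484798.461564512 = 0.093271214 s = 93.271214 ms

which matches the logged delta and sits inside the tolerance, so no guest clock adjustment is made.
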
	I0916 11:06:38.578155   36333 start.go:83] releasing machines lock for "multinode-736061-m02", held for 24.28076935s
	I0916 11:06:38.578186   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .DriverName
	I0916 11:06:38.578431   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetIP
	I0916 11:06:38.580628   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:38.580912   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:7f:3f", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:06:28 +0000 UTC Type:0 Mac:52:54:00:ab:7f:3f Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:multinode-736061-m02 Clientid:01:52:54:00:ab:7f:3f}
	I0916 11:06:38.580938   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:38.583166   36333 out.go:177] * Found network options:
	I0916 11:06:38.584510   36333 out.go:177]   - NO_PROXY=192.168.39.32
	W0916 11:06:38.585730   36333 proxy.go:119] fail to check proxy env: Error ip not in block
	I0916 11:06:38.585774   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .DriverName
	I0916 11:06:38.586207   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .DriverName
	I0916 11:06:38.586373   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .DriverName
	I0916 11:06:38.586488   36333 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 11:06:38.586529   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHHostname
	W0916 11:06:38.586555   36333 proxy.go:119] fail to check proxy env: Error ip not in block
	I0916 11:06:38.586627   36333 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 11:06:38.586659   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHHostname
	I0916 11:06:38.589111   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:38.589441   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:38.589464   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:7f:3f", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:06:28 +0000 UTC Type:0 Mac:52:54:00:ab:7f:3f Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:multinode-736061-m02 Clientid:01:52:54:00:ab:7f:3f}
	I0916 11:06:38.589478   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:38.589653   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHPort
	I0916 11:06:38.589828   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHKeyPath
	I0916 11:06:38.589920   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:7f:3f", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:06:28 +0000 UTC Type:0 Mac:52:54:00:ab:7f:3f Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:multinode-736061-m02 Clientid:01:52:54:00:ab:7f:3f}
	I0916 11:06:38.589945   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:38.589969   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHUsername
	I0916 11:06:38.590114   36333 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061-m02/id_rsa Username:docker}
	I0916 11:06:38.590137   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHPort
	I0916 11:06:38.590297   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHKeyPath
	I0916 11:06:38.590453   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHUsername
	I0916 11:06:38.590573   36333 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061-m02/id_rsa Username:docker}
	I0916 11:06:38.833457   36333 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 11:06:38.833460   36333 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0916 11:06:38.840018   36333 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0916 11:06:38.840068   36333 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 11:06:38.840119   36333 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 11:06:38.857271   36333 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0916 11:06:38.857340   36333 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0916 11:06:38.857352   36333 start.go:495] detecting cgroup driver to use...
	I0916 11:06:38.857422   36333 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 11:06:38.874145   36333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 11:06:38.889311   36333 docker.go:217] disabling cri-docker service (if available) ...
	I0916 11:06:38.889384   36333 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 11:06:38.904072   36333 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 11:06:38.918465   36333 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 11:06:38.939615   36333 command_runner.go:130] ! Removed "/etc/systemd/system/sockets.target.wants/cri-docker.socket".
	I0916 11:06:39.039841   36333 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 11:06:39.055232   36333 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0916 11:06:39.204329   36333 docker.go:233] disabling docker service ...
	I0916 11:06:39.204407   36333 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 11:06:39.219106   36333 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 11:06:39.231775   36333 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I0916 11:06:39.232015   36333 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 11:06:39.246736   36333 command_runner.go:130] ! Removed "/etc/systemd/system/sockets.target.wants/docker.socket".
	I0916 11:06:39.352695   36333 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 11:06:39.366724   36333 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I0916 11:06:39.367009   36333 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0916 11:06:39.477374   36333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 11:06:39.491313   36333 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 11:06:39.509431   36333 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0916 11:06:39.509664   36333 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 11:06:39.509720   36333 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:06:39.519949   36333 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 11:06:39.520006   36333 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:06:39.530312   36333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:06:39.540682   36333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:06:39.551053   36333 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 11:06:39.561350   36333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:06:39.571523   36333 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:06:39.588521   36333 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:06:39.598451   36333 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 11:06:39.607608   36333 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0916 11:06:39.607821   36333 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0916 11:06:39.607895   36333 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0916 11:06:39.620469   36333 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 11:06:39.630421   36333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:06:39.757829   36333 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 11:06:39.848762   36333 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 11:06:39.848837   36333 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 11:06:39.853344   36333 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0916 11:06:39.853378   36333 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0916 11:06:39.853387   36333 command_runner.go:130] > Device: 0,22	Inode: 692         Links: 1
	I0916 11:06:39.853397   36333 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 11:06:39.853406   36333 command_runner.go:130] > Access: 2024-09-16 11:06:39.819235404 +0000
	I0916 11:06:39.853417   36333 command_runner.go:130] > Modify: 2024-09-16 11:06:39.819235404 +0000
	I0916 11:06:39.853425   36333 command_runner.go:130] > Change: 2024-09-16 11:06:39.819235404 +0000
	I0916 11:06:39.853435   36333 command_runner.go:130] >  Birth: -
	I0916 11:06:39.853468   36333 start.go:563] Will wait 60s for crictl version
	I0916 11:06:39.853509   36333 ssh_runner.go:195] Run: which crictl
	I0916 11:06:39.857444   36333 command_runner.go:130] > /usr/bin/crictl
	I0916 11:06:39.857673   36333 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 11:06:39.894954   36333 command_runner.go:130] > Version:  0.1.0
	I0916 11:06:39.894981   36333 command_runner.go:130] > RuntimeName:  cri-o
	I0916 11:06:39.894988   36333 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0916 11:06:39.894995   36333 command_runner.go:130] > RuntimeApiVersion:  v1
	I0916 11:06:39.895019   36333 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0916 11:06:39.895097   36333 ssh_runner.go:195] Run: crio --version
	I0916 11:06:39.923765   36333 command_runner.go:130] > crio version 1.29.1
	I0916 11:06:39.923790   36333 command_runner.go:130] > Version:        1.29.1
	I0916 11:06:39.923800   36333 command_runner.go:130] > GitCommit:      unknown
	I0916 11:06:39.923806   36333 command_runner.go:130] > GitCommitDate:  unknown
	I0916 11:06:39.923814   36333 command_runner.go:130] > GitTreeState:   clean
	I0916 11:06:39.923824   36333 command_runner.go:130] > BuildDate:      2024-09-15T21:21:56Z
	I0916 11:06:39.923830   36333 command_runner.go:130] > GoVersion:      go1.21.6
	I0916 11:06:39.923837   36333 command_runner.go:130] > Compiler:       gc
	I0916 11:06:39.923846   36333 command_runner.go:130] > Platform:       linux/amd64
	I0916 11:06:39.923853   36333 command_runner.go:130] > Linkmode:       dynamic
	I0916 11:06:39.923861   36333 command_runner.go:130] > BuildTags:      
	I0916 11:06:39.923871   36333 command_runner.go:130] >   containers_image_ostree_stub
	I0916 11:06:39.923885   36333 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0916 11:06:39.923920   36333 command_runner.go:130] >   btrfs_noversion
	I0916 11:06:39.923931   36333 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0916 11:06:39.923939   36333 command_runner.go:130] >   libdm_no_deferred_remove
	I0916 11:06:39.923945   36333 command_runner.go:130] >   seccomp
	I0916 11:06:39.923955   36333 command_runner.go:130] > LDFlags:          unknown
	I0916 11:06:39.923962   36333 command_runner.go:130] > SeccompEnabled:   true
	I0916 11:06:39.923969   36333 command_runner.go:130] > AppArmorEnabled:  false
	I0916 11:06:39.924983   36333 ssh_runner.go:195] Run: crio --version
	I0916 11:06:39.952254   36333 command_runner.go:130] > crio version 1.29.1
	I0916 11:06:39.952273   36333 command_runner.go:130] > Version:        1.29.1
	I0916 11:06:39.952278   36333 command_runner.go:130] > GitCommit:      unknown
	I0916 11:06:39.952282   36333 command_runner.go:130] > GitCommitDate:  unknown
	I0916 11:06:39.952286   36333 command_runner.go:130] > GitTreeState:   clean
	I0916 11:06:39.952292   36333 command_runner.go:130] > BuildDate:      2024-09-15T21:21:56Z
	I0916 11:06:39.952296   36333 command_runner.go:130] > GoVersion:      go1.21.6
	I0916 11:06:39.952299   36333 command_runner.go:130] > Compiler:       gc
	I0916 11:06:39.952303   36333 command_runner.go:130] > Platform:       linux/amd64
	I0916 11:06:39.952307   36333 command_runner.go:130] > Linkmode:       dynamic
	I0916 11:06:39.952312   36333 command_runner.go:130] > BuildTags:      
	I0916 11:06:39.952316   36333 command_runner.go:130] >   containers_image_ostree_stub
	I0916 11:06:39.952320   36333 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0916 11:06:39.952323   36333 command_runner.go:130] >   btrfs_noversion
	I0916 11:06:39.952328   36333 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0916 11:06:39.952332   36333 command_runner.go:130] >   libdm_no_deferred_remove
	I0916 11:06:39.952335   36333 command_runner.go:130] >   seccomp
	I0916 11:06:39.952340   36333 command_runner.go:130] > LDFlags:          unknown
	I0916 11:06:39.952347   36333 command_runner.go:130] > SeccompEnabled:   true
	I0916 11:06:39.952351   36333 command_runner.go:130] > AppArmorEnabled:  false
	I0916 11:06:39.954973   36333 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0916 11:06:39.956239   36333 out.go:177]   - env NO_PROXY=192.168.39.32
	I0916 11:06:39.957336   36333 main.go:141] libmachine: (multinode-736061-m02) Calling .GetIP
	I0916 11:06:39.959778   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:39.960172   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:7f:3f", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:06:28 +0000 UTC Type:0 Mac:52:54:00:ab:7f:3f Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:multinode-736061-m02 Clientid:01:52:54:00:ab:7f:3f}
	I0916 11:06:39.960201   36333 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:06:39.960447   36333 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0916 11:06:39.964564   36333 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 11:06:39.976775   36333 mustload.go:65] Loading cluster: multinode-736061
	I0916 11:06:39.976995   36333 config.go:182] Loaded profile config "multinode-736061": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:06:39.977326   36333 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 11:06:39.977370   36333 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 11:06:39.991897   36333 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42817
	I0916 11:06:39.992285   36333 main.go:141] libmachine: () Calling .GetVersion
	I0916 11:06:39.992706   36333 main.go:141] libmachine: Using API Version  1
	I0916 11:06:39.992727   36333 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 11:06:39.993009   36333 main.go:141] libmachine: () Calling .GetMachineName
	I0916 11:06:39.993201   36333 main.go:141] libmachine: (multinode-736061) Calling .GetState
	I0916 11:06:39.994739   36333 host.go:66] Checking if "multinode-736061" exists ...
	I0916 11:06:39.995067   36333 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 11:06:39.995107   36333 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 11:06:40.009297   36333 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44765
	I0916 11:06:40.009718   36333 main.go:141] libmachine: () Calling .GetVersion
	I0916 11:06:40.010162   36333 main.go:141] libmachine: Using API Version  1
	I0916 11:06:40.010181   36333 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 11:06:40.010475   36333 main.go:141] libmachine: () Calling .GetMachineName
	I0916 11:06:40.010666   36333 main.go:141] libmachine: (multinode-736061) Calling .DriverName
	I0916 11:06:40.010796   36333 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061 for IP: 192.168.39.215
	I0916 11:06:40.010808   36333 certs.go:194] generating shared ca certs ...
	I0916 11:06:40.010827   36333 certs.go:226] acquiring lock for ca certs: {Name:mkc5eb18b90c7d501da61b378999fd65b785fd5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:06:40.010960   36333 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key
	I0916 11:06:40.011012   36333 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key
	I0916 11:06:40.011029   36333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 11:06:40.011051   36333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 11:06:40.011069   36333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 11:06:40.011088   36333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 11:06:40.011150   36333 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem (1338 bytes)
	W0916 11:06:40.011188   36333 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203_empty.pem, impossibly tiny 0 bytes
	I0916 11:06:40.011201   36333 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 11:06:40.011234   36333 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem (1082 bytes)
	I0916 11:06:40.011266   36333 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem (1123 bytes)
	I0916 11:06:40.011300   36333 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem (1679 bytes)
	I0916 11:06:40.011355   36333 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem (1708 bytes)
	I0916 11:06:40.011395   36333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem -> /usr/share/ca-certificates/11203.pem
	I0916 11:06:40.011414   36333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem -> /usr/share/ca-certificates/112032.pem
	I0916 11:06:40.011433   36333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:06:40.011460   36333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 11:06:40.036948   36333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 11:06:40.064224   36333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 11:06:40.087718   36333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 11:06:40.112736   36333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem --> /usr/share/ca-certificates/11203.pem (1338 bytes)
	I0916 11:06:40.136429   36333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem --> /usr/share/ca-certificates/112032.pem (1708 bytes)
	I0916 11:06:40.160538   36333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 11:06:40.184215   36333 ssh_runner.go:195] Run: openssl version
	I0916 11:06:40.190212   36333 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0916 11:06:40.190294   36333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 11:06:40.201031   36333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:06:40.205421   36333 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 16 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:06:40.205541   36333 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:06:40.205595   36333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:06:40.211146   36333 command_runner.go:130] > b5213941
	I0916 11:06:40.211346   36333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 11:06:40.222442   36333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11203.pem && ln -fs /usr/share/ca-certificates/11203.pem /etc/ssl/certs/11203.pem"
	I0916 11:06:40.233468   36333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11203.pem
	I0916 11:06:40.237653   36333 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11203.pem
	I0916 11:06:40.237872   36333 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11203.pem
	I0916 11:06:40.237943   36333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11203.pem
	I0916 11:06:40.243562   36333 command_runner.go:130] > 51391683
	I0916 11:06:40.243642   36333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11203.pem /etc/ssl/certs/51391683.0"
	I0916 11:06:40.254028   36333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112032.pem && ln -fs /usr/share/ca-certificates/112032.pem /etc/ssl/certs/112032.pem"
	I0916 11:06:40.264085   36333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112032.pem
	I0916 11:06:40.268310   36333 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112032.pem
	I0916 11:06:40.268436   36333 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112032.pem
	I0916 11:06:40.268485   36333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112032.pem
	I0916 11:06:40.274044   36333 command_runner.go:130] > 3ec20f2e
	I0916 11:06:40.274103   36333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112032.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 11:06:40.284368   36333 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 11:06:40.288308   36333 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 11:06:40.288452   36333 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 11:06:40.288496   36333 kubeadm.go:934] updating node {m02 192.168.39.215 8443 v1.31.1 crio false true} ...
	I0916 11:06:40.288609   36333 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-736061-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.215
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-736061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 11:06:40.288669   36333 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 11:06:40.297456   36333 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	I0916 11:06:40.297575   36333 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0916 11:06:40.297646   36333 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0916 11:06:40.307147   36333 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0916 11:06:40.307166   36333 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0916 11:06:40.307178   36333 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0916 11:06:40.307183   36333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0916 11:06:40.307195   36333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0916 11:06:40.307196   36333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 11:06:40.307242   36333 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0916 11:06:40.307255   36333 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0916 11:06:40.323894   36333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0916 11:06:40.323930   36333 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0916 11:06:40.323989   36333 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0916 11:06:40.324000   36333 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0916 11:06:40.324017   36333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0916 11:06:40.324025   36333 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0916 11:06:40.324077   36333 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0916 11:06:40.324099   36333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0916 11:06:40.351979   36333 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0916 11:06:40.359991   36333 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0916 11:06:40.360045   36333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
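	[Annotation] The download URLs logged above carry a "?checksum=file:…sha256" query, i.e. each kubeadm/kubectl/kubelet binary is checked against its published SHA-256 digest before being copied to /var/lib/minikube/binaries. A minimal Go sketch of that kind of check, for illustration only — the helper name, file paths, and the assumption that the .sha256 file's first token is the hex digest are mine, not minikube's actual implementation:

	package main

	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"io"
		"os"
		"strings"
	)

	// verifyChecksum hashes a local binary and compares it with the digest
	// published in the accompanying .sha256 file (illustrative sketch).
	func verifyChecksum(binaryPath, shaFile string) error {
		f, err := os.Open(binaryPath)
		if err != nil {
			return err
		}
		defer f.Close()
		h := sha256.New()
		if _, err := io.Copy(h, f); err != nil {
			return err
		}
		got := hex.EncodeToString(h.Sum(nil))

		raw, err := os.ReadFile(shaFile)
		if err != nil {
			return err
		}
		// Assume the digest is the first whitespace-separated token in the file.
		want := strings.Fields(string(raw))[0]
		if got != want {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
		}
		return nil
	}

	func main() {
		if err := verifyChecksum("/var/lib/minikube/binaries/v1.31.1/kubelet", "kubelet.sha256"); err != nil {
			panic(err)
		}
		fmt.Println("checksum OK")
	}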
	I0916 11:06:41.140271   36333 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0916 11:06:41.150182   36333 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I0916 11:06:41.166961   36333 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 11:06:41.185279   36333 ssh_runner.go:195] Run: grep 192.168.39.32	control-plane.minikube.internal$ /etc/hosts
	I0916 11:06:41.189266   36333 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.32	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 11:06:41.202395   36333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:06:41.334758   36333 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:06:41.353018   36333 host.go:66] Checking if "multinode-736061" exists ...
	I0916 11:06:41.353407   36333 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 11:06:41.353465   36333 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 11:06:41.368533   36333 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37577
	I0916 11:06:41.368969   36333 main.go:141] libmachine: () Calling .GetVersion
	I0916 11:06:41.369438   36333 main.go:141] libmachine: Using API Version  1
	I0916 11:06:41.369463   36333 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 11:06:41.369762   36333 main.go:141] libmachine: () Calling .GetMachineName
	I0916 11:06:41.369969   36333 main.go:141] libmachine: (multinode-736061) Calling .DriverName
	I0916 11:06:41.370125   36333 start.go:317] joinCluster: &{Name:multinode-736061 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1
ClusterName:multinode-736061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.32 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.215 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiratio
n:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:06:41.370241   36333 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0916 11:06:41.370266   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHHostname
	I0916 11:06:41.373080   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:06:41.373539   36333 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:06:41.373562   36333 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:06:41.373699   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHPort
	I0916 11:06:41.373850   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:06:41.373982   36333 main.go:141] libmachine: (multinode-736061) Calling .GetSSHUsername
	I0916 11:06:41.374133   36333 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061/id_rsa Username:docker}
	I0916 11:06:41.524071   36333 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token ktop33.r4upqd8kmtc2z9di --discovery-token-ca-cert-hash sha256:18e2e9ba1c06caae6264fad234e64aee68efc10986a3ab0ed9e448768962daf7 
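	[Annotation] The --discovery-token-ca-cert-hash printed in the join command above is the SHA-256 digest of the DER-encoded Subject Public Key Info of the cluster CA certificate. A minimal Go sketch of how such a hash can be recomputed for verification — illustrative only; the CA path is the standard kubeadm location on the control-plane node, not a path taken from this log:

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/hex"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		// Standard kubeadm CA location on the control-plane node (assumed path).
		pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			panic("no PEM block found in ca.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA certificate.
		sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
		fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
	}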
	I0916 11:06:41.524259   36333 start.go:343] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.215 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0916 11:06:41.524306   36333 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ktop33.r4upqd8kmtc2z9di --discovery-token-ca-cert-hash sha256:18e2e9ba1c06caae6264fad234e64aee68efc10986a3ab0ed9e448768962daf7 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=multinode-736061-m02"
	I0916 11:06:41.573303   36333 command_runner.go:130] > [preflight] Running pre-flight checks
	I0916 11:06:41.675528   36333 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0916 11:06:41.675557   36333 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0916 11:06:41.719707   36333 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 11:06:41.719740   36333 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 11:06:41.719746   36333 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0916 11:06:41.857233   36333 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 11:06:42.358605   36333 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 501.857577ms
	I0916 11:06:42.358632   36333 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0916 11:06:42.873485   36333 command_runner.go:130] > This node has joined the cluster:
	I0916 11:06:42.873512   36333 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0916 11:06:42.873522   36333 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0916 11:06:42.873530   36333 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0916 11:06:42.875319   36333 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 11:06:42.875357   36333 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ktop33.r4upqd8kmtc2z9di --discovery-token-ca-cert-hash sha256:18e2e9ba1c06caae6264fad234e64aee68efc10986a3ab0ed9e448768962daf7 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=multinode-736061-m02": (1.351026287s)
	I0916 11:06:42.875382   36333 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0916 11:06:43.009989   36333 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0916 11:06:43.134073   36333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-736061-m02 minikube.k8s.io/updated_at=2024_09_16T11_06_43_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=multinode-736061 minikube.k8s.io/primary=false
	I0916 11:06:43.231128   36333 command_runner.go:130] > node/multinode-736061-m02 labeled
	I0916 11:06:43.233155   36333 start.go:319] duration metric: took 1.863029493s to joinCluster
	I0916 11:06:43.233210   36333 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.215 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0916 11:06:43.233480   36333 config.go:182] Loaded profile config "multinode-736061": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:06:43.235299   36333 out.go:177] * Verifying Kubernetes components...
	I0916 11:06:43.236419   36333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:06:43.364788   36333 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:06:43.380967   36333 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3851/kubeconfig
	I0916 11:06:43.381302   36333 kapi.go:59] client config for multinode-736061: &rest.Config{Host:"https://192.168.39.32:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 11:06:43.381632   36333 node_ready.go:35] waiting up to 6m0s for node "multinode-736061-m02" to be "Ready" ...
	I0916 11:06:43.381707   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:06:43.381718   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:43.381728   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:43.381734   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:43.383721   36333 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:06:43.383743   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:43.383750   36333 round_trippers.go:580]     Content-Length: 4076
	I0916 11:06:43.383754   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:43 GMT
	I0916 11:06:43.383757   36333 round_trippers.go:580]     Audit-Id: a10c208c-b7a1-4fde-8f1f-80e81dbc5bd7
	I0916 11:06:43.383762   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:43.383767   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:43.383773   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:43.383781   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:43.383862   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"492","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3052 chars]
	I0916 11:06:43.881816   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:06:43.881848   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:43.881859   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:43.881864   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:43.884298   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:43.884315   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:43.884321   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:43.884325   36333 round_trippers.go:580]     Content-Length: 4076
	I0916 11:06:43.884329   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:43 GMT
	I0916 11:06:43.884333   36333 round_trippers.go:580]     Audit-Id: 1a59a83b-12f1-49a2-b3ee-6f00e880e1a9
	I0916 11:06:43.884336   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:43.884338   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:43.884341   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:43.884470   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"492","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3052 chars]
	I0916 11:06:44.382535   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:06:44.382558   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:44.382566   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:44.382571   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:44.385111   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:44.385148   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:44.385158   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:44.385164   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:44.385169   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:44.385174   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:44.385178   36333 round_trippers.go:580]     Content-Length: 4076
	I0916 11:06:44.385183   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:44 GMT
	I0916 11:06:44.385189   36333 round_trippers.go:580]     Audit-Id: 094d8e5e-fdfe-4a4d-95ee-7f0b3e416a1f
	I0916 11:06:44.385282   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"492","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3052 chars]
	I0916 11:06:44.882432   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:06:44.882463   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:44.882474   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:44.882492   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:44.885416   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:44.885446   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:44.885457   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:44 GMT
	I0916 11:06:44.885463   36333 round_trippers.go:580]     Audit-Id: 9f093c50-aaf7-40b0-867f-b2994fa44369
	I0916 11:06:44.885467   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:44.885472   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:44.885476   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:44.885484   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:44.885489   36333 round_trippers.go:580]     Content-Length: 4076
	I0916 11:06:44.885588   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"492","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3052 chars]
	I0916 11:06:45.382050   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:06:45.382074   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:45.382083   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:45.382088   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:45.384871   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:45.384897   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:45.384903   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:45 GMT
	I0916 11:06:45.384907   36333 round_trippers.go:580]     Audit-Id: 86e154b2-0210-4ad5-a407-bd78a7bc86cd
	I0916 11:06:45.384910   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:45.384912   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:45.384915   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:45.384918   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:45.384922   36333 round_trippers.go:580]     Content-Length: 4076
	I0916 11:06:45.385050   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"492","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3052 chars]
	I0916 11:06:45.385320   36333 node_ready.go:53] node "multinode-736061-m02" has status "Ready":"False"
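	[Annotation] The repeated GET requests to /api/v1/nodes/multinode-736061-m02 above are minikube's node_ready wait loop: it keeps re-fetching the node object until its Ready condition turns True (or the 6m0s budget expires). A minimal client-go sketch of the same wait, for illustration — the function name is mine and the kubeconfig path is the one shown earlier in this log:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForNodeReady polls the API server until the named node reports a
	// Ready=True condition, roughly what the node_ready check above is doing.
	func waitForNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("node %q not Ready after %s", name, timeout)
	}

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19651-3851/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		if err := waitForNodeReady(context.Background(), cs, "multinode-736061-m02", 6*time.Minute); err != nil {
			panic(err)
		}
	}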
	I0916 11:06:45.882672   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:06:45.882696   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:45.882703   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:45.882708   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:45.884907   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:45.884933   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:45.884942   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:45.884949   36333 round_trippers.go:580]     Content-Length: 4076
	I0916 11:06:45.884953   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:45 GMT
	I0916 11:06:45.884960   36333 round_trippers.go:580]     Audit-Id: d156b24c-8300-48ed-8965-e174644374ed
	I0916 11:06:45.884964   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:45.884969   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:45.884974   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:45.885065   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"492","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3052 chars]
	I0916 11:06:46.381849   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:06:46.381878   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:46.381890   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:46.381899   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:46.384768   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:46.384797   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:46.384808   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:46.384816   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:46.384824   36333 round_trippers.go:580]     Content-Length: 4076
	I0916 11:06:46.384830   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:46 GMT
	I0916 11:06:46.384836   36333 round_trippers.go:580]     Audit-Id: 7a9e3625-e165-4256-8512-218a106f5e3a
	I0916 11:06:46.384845   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:46.384851   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:46.384899   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"492","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3052 chars]
	I0916 11:06:46.882413   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:06:46.882440   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:46.882451   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:46.882456   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:46.885312   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:46.885333   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:46.885343   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:46.885349   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:46.885354   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:46.885359   36333 round_trippers.go:580]     Content-Length: 4076
	I0916 11:06:46.885363   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:46 GMT
	I0916 11:06:46.885367   36333 round_trippers.go:580]     Audit-Id: 28f7596c-dfd2-4619-899e-d678c084e485
	I0916 11:06:46.885372   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:46.885459   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"492","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3052 chars]
	I0916 11:06:47.381854   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:06:47.381879   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:47.381895   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:47.381904   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:47.384790   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:47.384817   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:47.384826   36333 round_trippers.go:580]     Audit-Id: c345c256-16eb-407d-9254-63e517bdedce
	I0916 11:06:47.384832   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:47.384836   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:47.384840   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:47.384845   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:47.384849   36333 round_trippers.go:580]     Content-Length: 4076
	I0916 11:06:47.384854   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:47 GMT
	I0916 11:06:47.384956   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"492","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3052 chars]
	I0916 11:06:47.882462   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:06:47.882485   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:47.882502   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:47.882509   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:47.885713   36333 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 11:06:47.885741   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:47.885751   36333 round_trippers.go:580]     Audit-Id: f510e61e-040d-4f8f-b503-56627e582690
	I0916 11:06:47.885758   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:47.885764   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:47.885775   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:47.885782   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:47.885786   36333 round_trippers.go:580]     Content-Length: 4076
	I0916 11:06:47.885791   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:47 GMT
	I0916 11:06:47.885886   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"492","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3052 chars]
	I0916 11:06:47.886227   36333 node_ready.go:53] node "multinode-736061-m02" has status "Ready":"False"
	I0916 11:06:48.382422   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:06:48.382444   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:48.382452   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:48.382457   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:48.384713   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:48.384732   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:48.384740   36333 round_trippers.go:580]     Audit-Id: 2afd3533-ebe5-4c1a-b3cb-2ea790e62521
	I0916 11:06:48.384746   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:48.384752   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:48.384757   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:48.384760   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:48.384765   36333 round_trippers.go:580]     Content-Length: 4076
	I0916 11:06:48.384770   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:48 GMT
	I0916 11:06:48.384881   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"492","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3052 chars]
	I0916 11:06:48.882354   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:06:48.882373   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:48.882381   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:48.882386   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:48.884914   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:48.884946   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:48.884957   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:48.884962   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:48.884969   36333 round_trippers.go:580]     Content-Length: 4076
	I0916 11:06:48.884974   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:48 GMT
	I0916 11:06:48.884979   36333 round_trippers.go:580]     Audit-Id: c43f9f40-ee7e-42ca-a8ae-8022970ad57c
	I0916 11:06:48.884986   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:48.884990   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:48.885089   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"492","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3052 chars]
	I0916 11:06:49.382516   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:06:49.382541   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:49.382550   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:49.382554   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:49.385228   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:49.385247   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:49.385252   36333 round_trippers.go:580]     Audit-Id: 0aa15424-b102-44a3-8b56-d340a4fb6238
	I0916 11:06:49.385256   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:49.385261   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:49.385265   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:49.385269   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:49.385274   36333 round_trippers.go:580]     Content-Length: 4076
	I0916 11:06:49.385278   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:49 GMT
	I0916 11:06:49.385364   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"492","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3052 chars]
	I0916 11:06:49.881958   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:06:49.881983   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:49.881991   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:49.881994   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:49.884381   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:49.884404   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:49.884413   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:49.884417   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:49.884421   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:49.884426   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:49.884430   36333 round_trippers.go:580]     Content-Length: 4076
	I0916 11:06:49.884434   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:49 GMT
	I0916 11:06:49.884442   36333 round_trippers.go:580]     Audit-Id: 7939f888-97da-4ba4-a037-cfb04412c20c
	I0916 11:06:49.884479   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"492","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3052 chars]
	I0916 11:06:50.382331   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:06:50.382356   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:50.382366   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:50.382370   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:50.384609   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:50.384635   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:50.384642   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:50 GMT
	I0916 11:06:50.384645   36333 round_trippers.go:580]     Audit-Id: 712bef41-6dff-4407-8056-477afe713b8c
	I0916 11:06:50.384648   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:50.384650   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:50.384653   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:50.384657   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:50.384661   36333 round_trippers.go:580]     Content-Length: 4076
	I0916 11:06:50.384753   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"492","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3052 chars]
	I0916 11:06:50.385064   36333 node_ready.go:53] node "multinode-736061-m02" has status "Ready":"False"
	I0916 11:06:50.882316   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:06:50.882340   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:50.882350   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:50.882357   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:50.885443   36333 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 11:06:50.885462   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:50.885471   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:50.885477   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:50.885481   36333 round_trippers.go:580]     Content-Length: 4076
	I0916 11:06:50.885485   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:50 GMT
	I0916 11:06:50.885489   36333 round_trippers.go:580]     Audit-Id: 8fd12666-d052-4829-8511-a6204426d5a4
	I0916 11:06:50.885494   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:50.885498   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:50.885582   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"492","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3052 chars]
	I0916 11:06:51.382753   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:06:51.382778   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:51.382788   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:51.382800   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:51.385381   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:51.385409   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:51.385419   36333 round_trippers.go:580]     Content-Length: 4076
	I0916 11:06:51.385424   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:51 GMT
	I0916 11:06:51.385428   36333 round_trippers.go:580]     Audit-Id: 70599105-80ef-4c68-8819-cfb396182ddc
	I0916 11:06:51.385433   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:51.385437   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:51.385441   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:51.385446   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:51.385529   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"492","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3052 chars]
	I0916 11:06:51.882064   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:06:51.882089   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:51.882097   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:51.882102   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:51.884362   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:51.884381   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:51.884390   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:51.884401   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:51.884407   36333 round_trippers.go:580]     Content-Length: 4076
	I0916 11:06:51.884412   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:51 GMT
	I0916 11:06:51.884422   36333 round_trippers.go:580]     Audit-Id: cb600bbb-d6ef-4d33-8d75-2e065937d899
	I0916 11:06:51.884427   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:51.884432   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:51.884502   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"492","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3052 chars]
	I0916 11:06:52.382075   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:06:52.382100   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:52.382109   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:52.382115   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:52.384382   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:52.384398   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:52.384405   36333 round_trippers.go:580]     Audit-Id: 020d013d-bcf2-4075-bc7d-696fbc115986
	I0916 11:06:52.384409   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:52.384411   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:52.384414   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:52.384417   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:52.384420   36333 round_trippers.go:580]     Content-Length: 4076
	I0916 11:06:52.384423   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:52 GMT
	I0916 11:06:52.384493   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"492","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3052 chars]
	I0916 11:06:52.882074   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:06:52.882100   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:52.882107   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:52.882111   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:52.884365   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:52.884382   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:52.884389   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:52.884393   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:52.884396   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:52 GMT
	I0916 11:06:52.884399   36333 round_trippers.go:580]     Audit-Id: 445f526b-a180-4591-8a67-3dd73e0ade74
	I0916 11:06:52.884402   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:52.884405   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:52.885040   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"516","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3444 chars]
	I0916 11:06:52.885302   36333 node_ready.go:53] node "multinode-736061-m02" has status "Ready":"False"
	I0916 11:06:53.382865   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:06:53.382895   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:53.382903   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:53.382908   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:53.385467   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:53.385485   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:53.385491   36333 round_trippers.go:580]     Audit-Id: 159c7c33-38df-42fc-b405-77ea15053fbd
	I0916 11:06:53.385496   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:53.385499   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:53.385502   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:53.385506   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:53.385512   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:53 GMT
	I0916 11:06:53.385913   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"516","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3444 chars]
	I0916 11:06:53.882376   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:06:53.882402   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:53.882410   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:53.882414   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:53.885057   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:53.885079   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:53.885088   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:53.885093   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:53.885099   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:53 GMT
	I0916 11:06:53.885103   36333 round_trippers.go:580]     Audit-Id: 9cc4d135-b9cf-40e3-873d-3a59e3dfb0b4
	I0916 11:06:53.885106   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:53.885110   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:53.885220   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"516","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3444 chars]
	I0916 11:06:54.382165   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:06:54.382187   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:54.382195   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:54.382199   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:54.384622   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:54.384643   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:54.384652   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:54.384656   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:54.384660   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:54.384663   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:54 GMT
	I0916 11:06:54.384667   36333 round_trippers.go:580]     Audit-Id: e6ca69c6-61bc-403a-976f-ab39a0472feb
	I0916 11:06:54.384671   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:54.385072   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"516","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3444 chars]
	I0916 11:06:54.882800   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:06:54.882826   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:54.882833   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:54.882837   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:54.885722   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:54.885743   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:54.885750   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:54.885755   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:54.885758   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:54.885763   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:54 GMT
	I0916 11:06:54.885766   36333 round_trippers.go:580]     Audit-Id: ae5070ec-526e-4a83-8a43-764e2c562a48
	I0916 11:06:54.885769   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:54.886443   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"516","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3444 chars]
	I0916 11:06:54.886687   36333 node_ready.go:53] node "multinode-736061-m02" has status "Ready":"False"
	I0916 11:06:55.382026   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:06:55.382048   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:55.382060   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:55.382066   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:55.384356   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:55.384373   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:55.384380   36333 round_trippers.go:580]     Audit-Id: cf2b89b2-9267-4b01-ac09-56320e98bc39
	I0916 11:06:55.384382   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:55.384385   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:55.384389   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:55.384392   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:55.384395   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:55 GMT
	I0916 11:06:55.384820   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"516","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3444 chars]
	I0916 11:06:55.882567   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:06:55.882598   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:55.882609   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:55.882614   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:55.884990   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:55.885008   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:55.885014   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:55.885020   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:55.885024   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:55 GMT
	I0916 11:06:55.885027   36333 round_trippers.go:580]     Audit-Id: e3c306d0-9a3a-41df-9b9f-5860cf843392
	I0916 11:06:55.885030   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:55.885033   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:55.885494   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"516","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3444 chars]
	I0916 11:06:56.382660   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:06:56.382688   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:56.382699   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:56.382704   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:56.385051   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:56.385068   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:56.385073   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:56.385077   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:56.385080   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:56 GMT
	I0916 11:06:56.385083   36333 round_trippers.go:580]     Audit-Id: 9021ec18-5d40-4f1b-b395-a964b7aea360
	I0916 11:06:56.385085   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:56.385088   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:56.385274   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"516","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3444 chars]
	I0916 11:06:56.881866   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:06:56.881901   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:56.881909   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:56.881913   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:56.884689   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:56.884711   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:56.884720   36333 round_trippers.go:580]     Audit-Id: da8cdbfb-d204-4df9-9862-d23b33825201
	I0916 11:06:56.884728   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:56.884734   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:56.884741   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:56.884743   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:56.884746   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:56 GMT
	I0916 11:06:56.885025   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"516","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3444 chars]
	I0916 11:06:57.382757   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:06:57.382786   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:57.382795   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:57.382800   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:57.385312   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:57.385331   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:57.385338   36333 round_trippers.go:580]     Audit-Id: 66498600-b33f-4331-ad10-139d2901440e
	I0916 11:06:57.385342   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:57.385346   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:57.385348   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:57.385351   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:57.385356   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:57 GMT
	I0916 11:06:57.385882   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"516","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3444 chars]
	I0916 11:06:57.386135   36333 node_ready.go:53] node "multinode-736061-m02" has status "Ready":"False"
	I0916 11:06:57.882613   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:06:57.882635   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:57.882643   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:57.882648   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:57.885194   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:57.885220   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:57.885230   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:57.885236   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:57 GMT
	I0916 11:06:57.885241   36333 round_trippers.go:580]     Audit-Id: 4972ebdb-dc0a-4ff9-aa5a-0a15423a4700
	I0916 11:06:57.885245   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:57.885249   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:57.885253   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:57.885499   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"516","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3444 chars]
	I0916 11:06:58.381907   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:06:58.381935   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:58.381945   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:58.381950   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:58.384907   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:58.384928   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:58.384934   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:58 GMT
	I0916 11:06:58.384937   36333 round_trippers.go:580]     Audit-Id: 19f54c6d-f0a0-4989-8399-2a2325100b86
	I0916 11:06:58.384941   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:58.384944   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:58.384948   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:58.384950   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:58.385339   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"516","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3444 chars]
	I0916 11:06:58.882365   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:06:58.882386   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:58.882395   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:58.882401   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:58.884915   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:58.884931   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:58.884936   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:58.884942   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:58.884947   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:58.884961   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:58 GMT
	I0916 11:06:58.884965   36333 round_trippers.go:580]     Audit-Id: 8bcb2b1f-f2f6-40e3-9089-9868e3c135c5
	I0916 11:06:58.884969   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:58.885219   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"516","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3444 chars]
	I0916 11:06:59.382036   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:06:59.382058   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:59.382066   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:59.382069   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:59.384564   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:06:59.384583   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:59.384589   36333 round_trippers.go:580]     Audit-Id: d2abf775-506e-4891-bbef-b131671b3ef7
	I0916 11:06:59.384594   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:59.384598   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:59.384600   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:59.384606   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:59.384609   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:59 GMT
	I0916 11:06:59.384765   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"516","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3444 chars]
	I0916 11:06:59.882487   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:06:59.882512   36333 round_trippers.go:469] Request Headers:
	I0916 11:06:59.882520   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:06:59.882525   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:06:59.885557   36333 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 11:06:59.885578   36333 round_trippers.go:577] Response Headers:
	I0916 11:06:59.885584   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:06:59 GMT
	I0916 11:06:59.885589   36333 round_trippers.go:580]     Audit-Id: b99755b0-880d-44c4-8e71-b9b3f7058ee8
	I0916 11:06:59.885592   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:06:59.885594   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:06:59.885598   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:06:59.885602   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:06:59.885896   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"516","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3444 chars]
	I0916 11:06:59.886155   36333 node_ready.go:53] node "multinode-736061-m02" has status "Ready":"False"
	I0916 11:07:00.382398   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:07:00.382422   36333 round_trippers.go:469] Request Headers:
	I0916 11:07:00.382434   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:07:00.382439   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:07:00.384903   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:07:00.384920   36333 round_trippers.go:577] Response Headers:
	I0916 11:07:00.384927   36333 round_trippers.go:580]     Audit-Id: 9c49b042-c266-45ce-82de-a79d303a2328
	I0916 11:07:00.384931   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:07:00.384934   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:07:00.384937   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:07:00.384940   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:07:00.384943   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:07:00 GMT
	I0916 11:07:00.385256   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"516","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3444 chars]
	I0916 11:07:00.881904   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:07:00.881932   36333 round_trippers.go:469] Request Headers:
	I0916 11:07:00.881942   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:07:00.881953   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:07:00.884941   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:07:00.884963   36333 round_trippers.go:577] Response Headers:
	I0916 11:07:00.884970   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:07:00.884973   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:07:00.884976   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:07:00 GMT
	I0916 11:07:00.884979   36333 round_trippers.go:580]     Audit-Id: 48df80dc-5927-47e2-bc47-b1c911c89063
	I0916 11:07:00.884983   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:07:00.884985   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:07:00.885438   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"516","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3444 chars]
	I0916 11:07:01.382265   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:07:01.382288   36333 round_trippers.go:469] Request Headers:
	I0916 11:07:01.382296   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:07:01.382299   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:07:01.384782   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:07:01.384801   36333 round_trippers.go:577] Response Headers:
	I0916 11:07:01.384808   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:07:01.384812   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:07:01.384815   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:07:01.384817   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:07:01 GMT
	I0916 11:07:01.384820   36333 round_trippers.go:580]     Audit-Id: 5f99dcd6-ebc5-4cb0-b401-fc505686a655
	I0916 11:07:01.384822   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:07:01.385004   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"516","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3444 chars]
	I0916 11:07:01.882369   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:07:01.882394   36333 round_trippers.go:469] Request Headers:
	I0916 11:07:01.882402   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:07:01.882406   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:07:01.884939   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:07:01.884962   36333 round_trippers.go:577] Response Headers:
	I0916 11:07:01.884970   36333 round_trippers.go:580]     Audit-Id: 1cfa49ba-8e0f-4597-b973-d2575e71d839
	I0916 11:07:01.884977   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:07:01.884984   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:07:01.884988   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:07:01.884992   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:07:01.885001   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:07:01 GMT
	I0916 11:07:01.885169   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"516","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3444 chars]
	I0916 11:07:02.381818   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:07:02.381845   36333 round_trippers.go:469] Request Headers:
	I0916 11:07:02.381853   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:07:02.381858   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:07:02.384139   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:07:02.384157   36333 round_trippers.go:577] Response Headers:
	I0916 11:07:02.384163   36333 round_trippers.go:580]     Audit-Id: 11912aa8-84f7-4ab4-b0ea-de423df6f5ed
	I0916 11:07:02.384166   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:07:02.384169   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:07:02.384172   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:07:02.384174   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:07:02.384177   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:07:02 GMT
	I0916 11:07:02.384438   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"525","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3261 chars]
	I0916 11:07:02.384724   36333 node_ready.go:49] node "multinode-736061-m02" has status "Ready":"True"
	I0916 11:07:02.384745   36333 node_ready.go:38] duration metric: took 19.003097722s for node "multinode-736061-m02" to be "Ready" ...
	I0916 11:07:02.384757   36333 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 11:07:02.384835   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/namespaces/kube-system/pods
	I0916 11:07:02.384847   36333 round_trippers.go:469] Request Headers:
	I0916 11:07:02.384857   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:07:02.384860   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:07:02.387695   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:07:02.387714   36333 round_trippers.go:577] Response Headers:
	I0916 11:07:02.387723   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:07:02.387728   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:07:02 GMT
	I0916 11:07:02.387732   36333 round_trippers.go:580]     Audit-Id: db6fdfe5-ace7-4820-9b1f-a954aa0b1dfd
	I0916 11:07:02.387736   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:07:02.387740   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:07:02.387748   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:07:02.388801   36333 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"526"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-nlhl2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"6ea84b9d-f364-4e26-8dc8-44c3b4d92417","resourceVersion":"433","creationTimestamp":"2024-09-16T11:05:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"7a8b2d93-d4d7-4d8f-82e4-f4e98c989dd5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:05:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7a8b2d93-d4d7-4d8f-82e4-f4e98c989dd5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 72116 chars]
	I0916 11:07:02.390902   36333 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nlhl2" in "kube-system" namespace to be "Ready" ...
	I0916 11:07:02.390994   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-nlhl2
	I0916 11:07:02.391003   36333 round_trippers.go:469] Request Headers:
	I0916 11:07:02.391010   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:07:02.391013   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:07:02.393147   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:07:02.393167   36333 round_trippers.go:577] Response Headers:
	I0916 11:07:02.393173   36333 round_trippers.go:580]     Audit-Id: 4649f039-f37f-4522-98b6-8b05a4e38fc3
	I0916 11:07:02.393177   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:07:02.393182   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:07:02.393185   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:07:02.393188   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:07:02.393190   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:07:02 GMT
	I0916 11:07:02.393347   36333 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-nlhl2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"6ea84b9d-f364-4e26-8dc8-44c3b4d92417","resourceVersion":"433","creationTimestamp":"2024-09-16T11:05:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"7a8b2d93-d4d7-4d8f-82e4-f4e98c989dd5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:05:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7a8b2d93-d4d7-4d8f-82e4-f4e98c989dd5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6776 chars]
	I0916 11:07:02.393769   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:07:02.393780   36333 round_trippers.go:469] Request Headers:
	I0916 11:07:02.393787   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:07:02.393791   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:07:02.395573   36333 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:07:02.395587   36333 round_trippers.go:577] Response Headers:
	I0916 11:07:02.395591   36333 round_trippers.go:580]     Audit-Id: fe5c1b9e-9bbb-4466-b9ab-09b67895ebcb
	I0916 11:07:02.395594   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:07:02.395599   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:07:02.395602   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:07:02.395605   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:07:02.395608   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:07:02 GMT
	I0916 11:07:02.395834   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"416","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0916 11:07:02.396086   36333 pod_ready.go:93] pod "coredns-7c65d6cfc9-nlhl2" in "kube-system" namespace has status "Ready":"True"
	I0916 11:07:02.396098   36333 pod_ready.go:82] duration metric: took 5.175476ms for pod "coredns-7c65d6cfc9-nlhl2" in "kube-system" namespace to be "Ready" ...
	I0916 11:07:02.396106   36333 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-736061" in "kube-system" namespace to be "Ready" ...
	I0916 11:07:02.396152   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-736061
	I0916 11:07:02.396159   36333 round_trippers.go:469] Request Headers:
	I0916 11:07:02.396165   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:07:02.396170   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:07:02.397858   36333 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:07:02.397870   36333 round_trippers.go:577] Response Headers:
	I0916 11:07:02.397881   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:07:02 GMT
	I0916 11:07:02.397885   36333 round_trippers.go:580]     Audit-Id: 88d67fa8-09d3-4a9e-bb85-f562c62249ad
	I0916 11:07:02.397889   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:07:02.397891   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:07:02.397894   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:07:02.397900   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:07:02.398018   36333 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-736061","namespace":"kube-system","uid":"f946773c-a82f-4e7e-8148-a81b41b27fa9","resourceVersion":"411","creationTimestamp":"2024-09-16T11:05:53Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.32:2379","kubernetes.io/config.hash":"69d3e8c6e76d0bc1af3482326f7904d1","kubernetes.io/config.mirror":"69d3e8c6e76d0bc1af3482326f7904d1","kubernetes.io/config.seen":"2024-09-16T11:05:53.622995492Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:05:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6418 chars]
	I0916 11:07:02.398340   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:07:02.398350   36333 round_trippers.go:469] Request Headers:
	I0916 11:07:02.398357   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:07:02.398361   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:07:02.399807   36333 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:07:02.399820   36333 round_trippers.go:577] Response Headers:
	I0916 11:07:02.399826   36333 round_trippers.go:580]     Audit-Id: 8d1e2ffe-08a7-405f-bbba-6b02f10eff4e
	I0916 11:07:02.399829   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:07:02.399832   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:07:02.399834   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:07:02.399837   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:07:02.399840   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:07:02 GMT
	I0916 11:07:02.399968   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"416","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0916 11:07:02.400224   36333 pod_ready.go:93] pod "etcd-multinode-736061" in "kube-system" namespace has status "Ready":"True"
	I0916 11:07:02.400236   36333 pod_ready.go:82] duration metric: took 4.124067ms for pod "etcd-multinode-736061" in "kube-system" namespace to be "Ready" ...
	I0916 11:07:02.400248   36333 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-736061" in "kube-system" namespace to be "Ready" ...
	I0916 11:07:02.400292   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-736061
	I0916 11:07:02.400300   36333 round_trippers.go:469] Request Headers:
	I0916 11:07:02.400307   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:07:02.400310   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:07:02.402003   36333 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:07:02.402016   36333 round_trippers.go:577] Response Headers:
	I0916 11:07:02.402029   36333 round_trippers.go:580]     Audit-Id: 4ce8d656-7178-4a29-8e50-faeac8936832
	I0916 11:07:02.402033   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:07:02.402039   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:07:02.402043   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:07:02.402047   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:07:02.402056   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:07:02 GMT
	I0916 11:07:02.402380   36333 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-736061","namespace":"kube-system","uid":"bb6b837b-db0a-455d-8055-ec513f470220","resourceVersion":"408","creationTimestamp":"2024-09-16T11:05:53Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.32:8443","kubernetes.io/config.hash":"efede0e1597c8cbe70740f3169f7ec4a","kubernetes.io/config.mirror":"efede0e1597c8cbe70740f3169f7ec4a","kubernetes.io/config.seen":"2024-09-16T11:05:53.622989337Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:05:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7637 chars]
	I0916 11:07:02.402722   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:07:02.402731   36333 round_trippers.go:469] Request Headers:
	I0916 11:07:02.402738   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:07:02.402742   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:07:02.404373   36333 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:07:02.404388   36333 round_trippers.go:577] Response Headers:
	I0916 11:07:02.404393   36333 round_trippers.go:580]     Audit-Id: b4e955cd-6e9e-4b71-bdbd-d1481361c6d3
	I0916 11:07:02.404397   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:07:02.404400   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:07:02.404403   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:07:02.404408   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:07:02.404412   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:07:02 GMT
	I0916 11:07:02.404511   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"416","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0916 11:07:02.404753   36333 pod_ready.go:93] pod "kube-apiserver-multinode-736061" in "kube-system" namespace has status "Ready":"True"
	I0916 11:07:02.404765   36333 pod_ready.go:82] duration metric: took 4.50843ms for pod "kube-apiserver-multinode-736061" in "kube-system" namespace to be "Ready" ...
	I0916 11:07:02.404772   36333 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-736061" in "kube-system" namespace to be "Ready" ...
	I0916 11:07:02.404811   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-736061
	I0916 11:07:02.404818   36333 round_trippers.go:469] Request Headers:
	I0916 11:07:02.404825   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:07:02.404827   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:07:02.406438   36333 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:07:02.406453   36333 round_trippers.go:577] Response Headers:
	I0916 11:07:02.406458   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:07:02.406462   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:07:02.406464   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:07:02 GMT
	I0916 11:07:02.406468   36333 round_trippers.go:580]     Audit-Id: 2e229ef5-d2a5-45dc-b54b-8141e563aadf
	I0916 11:07:02.406472   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:07:02.406475   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:07:02.406943   36333 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-736061","namespace":"kube-system","uid":"53bb4e69-605c-4160-bf0a-f26e83e16ab1","resourceVersion":"412","creationTimestamp":"2024-09-16T11:05:53Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"94d3338940ee73a61a5075650d027904","kubernetes.io/config.mirror":"94d3338940ee73a61a5075650d027904","kubernetes.io/config.seen":"2024-09-16T11:05:53.622993259Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:05:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7198 chars]
	I0916 11:07:02.407323   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:07:02.407337   36333 round_trippers.go:469] Request Headers:
	I0916 11:07:02.407344   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:07:02.407347   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:07:02.408891   36333 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:07:02.408903   36333 round_trippers.go:577] Response Headers:
	I0916 11:07:02.408908   36333 round_trippers.go:580]     Audit-Id: 3b5a9222-dd01-4ed6-8b24-beacc2f78a04
	I0916 11:07:02.408911   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:07:02.408914   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:07:02.408916   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:07:02.408919   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:07:02.408923   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:07:02 GMT
	I0916 11:07:02.409185   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"416","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0916 11:07:02.409431   36333 pod_ready.go:93] pod "kube-controller-manager-multinode-736061" in "kube-system" namespace has status "Ready":"True"
	I0916 11:07:02.409444   36333 pod_ready.go:82] duration metric: took 4.666097ms for pod "kube-controller-manager-multinode-736061" in "kube-system" namespace to be "Ready" ...
	I0916 11:07:02.409453   36333 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8h6jp" in "kube-system" namespace to be "Ready" ...
	I0916 11:07:02.582842   36333 request.go:632] Waited for 173.330215ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.32:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8h6jp
	I0916 11:07:02.582930   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8h6jp
	I0916 11:07:02.582936   36333 round_trippers.go:469] Request Headers:
	I0916 11:07:02.582944   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:07:02.582953   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:07:02.585507   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:07:02.585526   36333 round_trippers.go:577] Response Headers:
	I0916 11:07:02.585533   36333 round_trippers.go:580]     Audit-Id: 39bc33b9-b8f3-4c73-8e06-a64def0ea4b9
	I0916 11:07:02.585540   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:07:02.585549   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:07:02.585553   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:07:02.585558   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:07:02.585563   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:07:02 GMT
	I0916 11:07:02.586152   36333 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8h6jp","generateName":"kube-proxy-","namespace":"kube-system","uid":"79ea467a-f17a-49de-8cbb-0f9952e21864","resourceVersion":"505","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"562d5386-4fc3-48d5-983a-19cdfbbddc77","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"562d5386-4fc3-48d5-983a-19cdfbbddc77\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6154 chars]
	I0916 11:07:02.781886   36333 request.go:632] Waited for 195.300699ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:07:02.781948   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061-m02
	I0916 11:07:02.781954   36333 round_trippers.go:469] Request Headers:
	I0916 11:07:02.781961   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:07:02.781965   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:07:02.784388   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:07:02.784406   36333 round_trippers.go:577] Response Headers:
	I0916 11:07:02.784412   36333 round_trippers.go:580]     Audit-Id: 76125429-8c9e-453b-9d68-cfed320ca02a
	I0916 11:07:02.784416   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:07:02.784419   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:07:02.784422   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:07:02.784424   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:07:02.784427   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:07:02 GMT
	I0916 11:07:02.784728   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061-m02","uid":"a80acdf2-87a2-4144-bb65-c3105420e4b2","resourceVersion":"525","creationTimestamp":"2024-09-16T11:06:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T11_06_43_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:06:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:
annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manag [truncated 3261 chars]
	I0916 11:07:02.784978   36333 pod_ready.go:93] pod "kube-proxy-8h6jp" in "kube-system" namespace has status "Ready":"True"
	I0916 11:07:02.784993   36333 pod_ready.go:82] duration metric: took 375.534709ms for pod "kube-proxy-8h6jp" in "kube-system" namespace to be "Ready" ...
	I0916 11:07:02.785002   36333 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ftj9p" in "kube-system" namespace to be "Ready" ...
	I0916 11:07:02.982155   36333 request.go:632] Waited for 197.065012ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.32:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ftj9p
	I0916 11:07:02.982215   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ftj9p
	I0916 11:07:02.982221   36333 round_trippers.go:469] Request Headers:
	I0916 11:07:02.982229   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:07:02.982234   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:07:02.984638   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:07:02.984658   36333 round_trippers.go:577] Response Headers:
	I0916 11:07:02.984666   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:07:02.984671   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:07:02.984675   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:07:02.984679   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:07:02 GMT
	I0916 11:07:02.984691   36333 round_trippers.go:580]     Audit-Id: 24f355dc-64d1-4ce9-8a30-3620b98005e0
	I0916 11:07:02.984696   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:07:02.984963   36333 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ftj9p","generateName":"kube-proxy-","namespace":"kube-system","uid":"fa72720f-1c4a-46a2-a733-f411ccb6f628","resourceVersion":"398","creationTimestamp":"2024-09-16T11:05:58Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"562d5386-4fc3-48d5-983a-19cdfbbddc77","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:05:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"562d5386-4fc3-48d5-983a-19cdfbbddc77\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6141 chars]
	I0916 11:07:03.182768   36333 request.go:632] Waited for 197.351742ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:07:03.182860   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:07:03.182868   36333 round_trippers.go:469] Request Headers:
	I0916 11:07:03.182878   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:07:03.182883   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:07:03.185485   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:07:03.185505   36333 round_trippers.go:577] Response Headers:
	I0916 11:07:03.185512   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:07:03.185515   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:07:03.185518   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:07:03.185520   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:07:03.185523   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:07:03 GMT
	I0916 11:07:03.185525   36333 round_trippers.go:580]     Audit-Id: 4e664e60-eb62-4c62-9da6-17f315eecc83
	I0916 11:07:03.185788   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"416","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0916 11:07:03.186122   36333 pod_ready.go:93] pod "kube-proxy-ftj9p" in "kube-system" namespace has status "Ready":"True"
	I0916 11:07:03.186137   36333 pod_ready.go:82] duration metric: took 401.129059ms for pod "kube-proxy-ftj9p" in "kube-system" namespace to be "Ready" ...
	I0916 11:07:03.186145   36333 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-736061" in "kube-system" namespace to be "Ready" ...
	I0916 11:07:03.382173   36333 request.go:632] Waited for 195.968801ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.32:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-736061
	I0916 11:07:03.382232   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-736061
	I0916 11:07:03.382237   36333 round_trippers.go:469] Request Headers:
	I0916 11:07:03.382244   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:07:03.382247   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:07:03.384711   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:07:03.384728   36333 round_trippers.go:577] Response Headers:
	I0916 11:07:03.384734   36333 round_trippers.go:580]     Audit-Id: 13eda5db-ca65-450b-8fdb-50f6b0c376c8
	I0916 11:07:03.384737   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:07:03.384740   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:07:03.384742   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:07:03.384745   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:07:03.384749   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:07:03 GMT
	I0916 11:07:03.385194   36333 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-736061","namespace":"kube-system","uid":"25a9a3ee-f264-4bd2-95fc-c8452bedc92b","resourceVersion":"413","creationTimestamp":"2024-09-16T11:05:53Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"de66983060c1e167c6b9498eb8b0a025","kubernetes.io/config.mirror":"de66983060c1e167c6b9498eb8b0a025","kubernetes.io/config.seen":"2024-09-16T11:05:47.723827022Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T11:05:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4937 chars]
	I0916 11:07:03.581823   36333 request.go:632] Waited for 196.278902ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:07:03.581908   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes/multinode-736061
	I0916 11:07:03.581916   36333 round_trippers.go:469] Request Headers:
	I0916 11:07:03.581926   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:07:03.581932   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:07:03.584158   36333 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:07:03.584181   36333 round_trippers.go:577] Response Headers:
	I0916 11:07:03.584190   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:07:03.584197   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:07:03.584201   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:07:03.584204   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:07:03 GMT
	I0916 11:07:03.584208   36333 round_trippers.go:580]     Audit-Id: d744394e-aee6-473a-b007-feba6b569bd1
	I0916 11:07:03.584212   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:07:03.584486   36333 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"416","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-09-16T11:05:51Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0916 11:07:03.584811   36333 pod_ready.go:93] pod "kube-scheduler-multinode-736061" in "kube-system" namespace has status "Ready":"True"
	I0916 11:07:03.584828   36333 pod_ready.go:82] duration metric: took 398.676655ms for pod "kube-scheduler-multinode-736061" in "kube-system" namespace to be "Ready" ...
	I0916 11:07:03.584837   36333 pod_ready.go:39] duration metric: took 1.200068546s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 11:07:03.584853   36333 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 11:07:03.584914   36333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 11:07:03.601367   36333 system_svc.go:56] duration metric: took 16.505305ms WaitForService to wait for kubelet
	I0916 11:07:03.601396   36333 kubeadm.go:582] duration metric: took 20.368159557s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 11:07:03.601414   36333 node_conditions.go:102] verifying NodePressure condition ...
	I0916 11:07:03.782880   36333 request.go:632] Waited for 181.382248ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.32:8443/api/v1/nodes
	I0916 11:07:03.782956   36333 round_trippers.go:463] GET https://192.168.39.32:8443/api/v1/nodes
	I0916 11:07:03.782965   36333 round_trippers.go:469] Request Headers:
	I0916 11:07:03.782975   36333 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:07:03.782987   36333 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:07:03.786179   36333 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 11:07:03.786202   36333 round_trippers.go:577] Response Headers:
	I0916 11:07:03.786210   36333 round_trippers.go:580]     Audit-Id: 567e7d78-c8dc-4af6-9bc6-93ac3ed4acdf
	I0916 11:07:03.786214   36333 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:07:03.786218   36333 round_trippers.go:580]     Content-Type: application/json
	I0916 11:07:03.786225   36333 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4873a14c-8b1c-416d-a551-e8823fbf2705
	I0916 11:07:03.786231   36333 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 31f35ed0-44e7-48e5-832e-a93715d49e3c
	I0916 11:07:03.786235   36333 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:07:03 GMT
	I0916 11:07:03.786493   36333 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"530"},"items":[{"metadata":{"name":"multinode-736061","uid":"b3b28959-9007-4fad-ada7-744a1647b70f","resourceVersion":"416","creationTimestamp":"2024-09-16T11:05:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-736061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-736061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T11_05_54_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 10084 chars]
	I0916 11:07:03.786923   36333 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0916 11:07:03.786941   36333 node_conditions.go:123] node cpu capacity is 2
	I0916 11:07:03.786952   36333 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0916 11:07:03.786957   36333 node_conditions.go:123] node cpu capacity is 2
	I0916 11:07:03.786963   36333 node_conditions.go:105] duration metric: took 185.543392ms to run NodePressure ...
	I0916 11:07:03.786977   36333 start.go:241] waiting for startup goroutines ...
	I0916 11:07:03.787012   36333 start.go:255] writing updated cluster config ...
	I0916 11:07:03.787293   36333 ssh_runner.go:195] Run: rm -f paused
	I0916 11:07:03.796481   36333 out.go:177] * Done! kubectl is now configured to use "multinode-736061" cluster and "default" namespace by default
	E0916 11:07:03.797997   36333 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
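	[Note, not part of the captured logs] The "fork/exec /usr/local/bin/kubectl: exec format error" recorded on the line above is the kernel refusing to run a binary built for a different CPU architecture than the test host; Go's os/exec surfaces that refusal as this exact error string. The sketch below is illustrative only: it assumes a Go toolchain on the runner and reuses the kubectl path from the log line, and shows where the error would appear if the same check were reproduced by hand.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Attempt to run the same kubectl binary the test harness uses.
		// If its ELF architecture does not match the host (e.g. an arm64
		// build on an amd64 runner), fork/exec fails before kubectl starts.
		out, err := exec.Command("/usr/local/bin/kubectl", "version", "--client").CombinedOutput()
		if err != nil {
			// Typical output: fork/exec /usr/local/bin/kubectl: exec format error
			fmt.Println("kubectl check failed:", err)
			return
		}
		fmt.Println(string(out))
	}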
	
	
	==> CRI-O <==
	Sep 16 11:08:50 multinode-736061 crio[665]: time="2024-09-16 11:08:50.447404446Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484930447379131,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=feb36a4f-1cd3-4b79-a4fa-4210758a3341 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 11:08:50 multinode-736061 crio[665]: time="2024-09-16 11:08:50.447864004Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e568c95c-0655-4350-afc4-c64e42d9fca7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:08:50 multinode-736061 crio[665]: time="2024-09-16 11:08:50.447922580Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e568c95c-0655-4350-afc4-c64e42d9fca7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:08:50 multinode-736061 crio[665]: time="2024-09-16 11:08:50.448111906Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:84517e6af45b4aaa0074acc2c9f529a29e3e476ef98b8a9b414655341b967c3b,PodSandboxId:779060032a611374116cc1e94df9c7e4a6c1443abec99f7abdef9464025a6169,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726484826321922608,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-g9fqk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0dd08783-fcfd-441f-8bda-c82c0c15173e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:840a587a0926e82c37a05beb15b8a2038534eaa736ce1cfed1c0eecaa13a5cdd,PodSandboxId:19286465f900afb5c6da745e2d4672e76d0bd87c33d561cb8eff8479eb72b5c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726484771766190138,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nlhl2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ea84b9d-f364-4e26-8dc8-44c3b4d92417,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02223ab1824986d0b8986be277df48e11a9e023715892ea637bf405f10af7198,PodSandboxId:01381d4d113d1f5aa2f6b3834d0194f5b4909599f84a8b29647fabd8f0fa5f7c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726484771695842020,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: 5e515ac2-bcf5-4a0f-a3af-b4ee4e03a534,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a89ff755837adc838b2681df4fdb3f0cd2733af338156c540403f460fc59bc0,PodSandboxId:bd141ffff1a91e8b17d63ca2b8898889ab336bfde15ecdc79964aafaa1123465,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726484759714659550,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qb4tq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 933f074
9-7868-4e96-9b8e-67005545bbc5,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8c55edbe2173ff25f684104100b9d6d5b8d80563dadb06b4c24bb06b2bf68ee,PodSandboxId:cc5264d1c4b520dfff86dfc83596919ff23f9336b58339ea2a10f4492ba02b12,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726484759520358533,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ftj9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa72720f-1c4a-46a2-a733-f411ccb6f628,},A
nnotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b76d5d4ad419a79a8aa910e3cea3029c12e6a90c7750317c5fbe1c40b43aa762,PodSandboxId:f771edf6fcef2eb32fd9b47472910c255ef2ee1f910f415da1ca1b8287fece1e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726484748620274924,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de66983060c1e167c6b9498eb8b0a025,},Annotations:map
[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:769a75ad1934a20389b16c528015b4fd90be4bf92209ba2509e2cdf2e57e2a24,PodSandboxId:6237db42cfa9d23e63f16ff0ffa961fcbc1f9f25138e7c7fc11442d28aeecaff,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726484748618788280,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69d3e8c6e76d0bc1af3482326f7904d1,},Annotations:map[string]string{io.kubernetes.container.hash:
cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d53f9aec7bc35845a9064135b7cf6154a190d2b15edab056cfd8296575fb25ba,PodSandboxId:c1754b1d745471f13f360eb1605866caa0a1b248d76224212a537736d73a9f20,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726484748609822622,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94d3338940ee73a61a5075650d027904,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed73e9089f633c6b3f26009f325ee6cfdd62ef60de41cbb639ec4aa8b02b84a7,PodSandboxId:06f23871be821ecc20f321f0e711ab8e47f7a0308e547d26ab3365eaa22b0b27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726484748471452056,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efede0e1597c8cbe70740f3169f7ec4a,},Annotations:map[string]string{io.kubernetes.container.hash:
7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e568c95c-0655-4350-afc4-c64e42d9fca7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:08:50 multinode-736061 crio[665]: time="2024-09-16 11:08:50.486112665Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3f46c0cf-8c52-4876-816f-561f7b58f0dd name=/runtime.v1.RuntimeService/Version
	Sep 16 11:08:50 multinode-736061 crio[665]: time="2024-09-16 11:08:50.486325374Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3f46c0cf-8c52-4876-816f-561f7b58f0dd name=/runtime.v1.RuntimeService/Version
	Sep 16 11:08:50 multinode-736061 crio[665]: time="2024-09-16 11:08:50.487537646Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cb2ad643-0abb-456c-80ff-e27588f7d5b6 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 11:08:50 multinode-736061 crio[665]: time="2024-09-16 11:08:50.487927468Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484930487907166,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cb2ad643-0abb-456c-80ff-e27588f7d5b6 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 11:08:50 multinode-736061 crio[665]: time="2024-09-16 11:08:50.488572030Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=87dc9c9e-a7ac-4f34-8740-a23a1c4ff8a8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:08:50 multinode-736061 crio[665]: time="2024-09-16 11:08:50.488641300Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=87dc9c9e-a7ac-4f34-8740-a23a1c4ff8a8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:08:50 multinode-736061 crio[665]: time="2024-09-16 11:08:50.488836119Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:84517e6af45b4aaa0074acc2c9f529a29e3e476ef98b8a9b414655341b967c3b,PodSandboxId:779060032a611374116cc1e94df9c7e4a6c1443abec99f7abdef9464025a6169,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726484826321922608,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-g9fqk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0dd08783-fcfd-441f-8bda-c82c0c15173e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:840a587a0926e82c37a05beb15b8a2038534eaa736ce1cfed1c0eecaa13a5cdd,PodSandboxId:19286465f900afb5c6da745e2d4672e76d0bd87c33d561cb8eff8479eb72b5c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726484771766190138,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nlhl2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ea84b9d-f364-4e26-8dc8-44c3b4d92417,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02223ab1824986d0b8986be277df48e11a9e023715892ea637bf405f10af7198,PodSandboxId:01381d4d113d1f5aa2f6b3834d0194f5b4909599f84a8b29647fabd8f0fa5f7c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726484771695842020,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: 5e515ac2-bcf5-4a0f-a3af-b4ee4e03a534,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a89ff755837adc838b2681df4fdb3f0cd2733af338156c540403f460fc59bc0,PodSandboxId:bd141ffff1a91e8b17d63ca2b8898889ab336bfde15ecdc79964aafaa1123465,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726484759714659550,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qb4tq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 933f074
9-7868-4e96-9b8e-67005545bbc5,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8c55edbe2173ff25f684104100b9d6d5b8d80563dadb06b4c24bb06b2bf68ee,PodSandboxId:cc5264d1c4b520dfff86dfc83596919ff23f9336b58339ea2a10f4492ba02b12,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726484759520358533,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ftj9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa72720f-1c4a-46a2-a733-f411ccb6f628,},A
nnotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b76d5d4ad419a79a8aa910e3cea3029c12e6a90c7750317c5fbe1c40b43aa762,PodSandboxId:f771edf6fcef2eb32fd9b47472910c255ef2ee1f910f415da1ca1b8287fece1e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726484748620274924,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de66983060c1e167c6b9498eb8b0a025,},Annotations:map
[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:769a75ad1934a20389b16c528015b4fd90be4bf92209ba2509e2cdf2e57e2a24,PodSandboxId:6237db42cfa9d23e63f16ff0ffa961fcbc1f9f25138e7c7fc11442d28aeecaff,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726484748618788280,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69d3e8c6e76d0bc1af3482326f7904d1,},Annotations:map[string]string{io.kubernetes.container.hash:
cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d53f9aec7bc35845a9064135b7cf6154a190d2b15edab056cfd8296575fb25ba,PodSandboxId:c1754b1d745471f13f360eb1605866caa0a1b248d76224212a537736d73a9f20,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726484748609822622,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94d3338940ee73a61a5075650d027904,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed73e9089f633c6b3f26009f325ee6cfdd62ef60de41cbb639ec4aa8b02b84a7,PodSandboxId:06f23871be821ecc20f321f0e711ab8e47f7a0308e547d26ab3365eaa22b0b27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726484748471452056,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efede0e1597c8cbe70740f3169f7ec4a,},Annotations:map[string]string{io.kubernetes.container.hash:
7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=87dc9c9e-a7ac-4f34-8740-a23a1c4ff8a8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:08:50 multinode-736061 crio[665]: time="2024-09-16 11:08:50.526563417Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d699eab5-4c05-46b5-b631-b7668605f391 name=/runtime.v1.RuntimeService/Version
	Sep 16 11:08:50 multinode-736061 crio[665]: time="2024-09-16 11:08:50.526652938Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d699eab5-4c05-46b5-b631-b7668605f391 name=/runtime.v1.RuntimeService/Version
	Sep 16 11:08:50 multinode-736061 crio[665]: time="2024-09-16 11:08:50.527566513Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b57e6428-52b8-4c6a-b955-1b80040e0b0e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 11:08:50 multinode-736061 crio[665]: time="2024-09-16 11:08:50.527972553Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484930527949493,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b57e6428-52b8-4c6a-b955-1b80040e0b0e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 11:08:50 multinode-736061 crio[665]: time="2024-09-16 11:08:50.528458905Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=022f6e2d-a68b-4d22-bc0b-8427370e5146 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:08:50 multinode-736061 crio[665]: time="2024-09-16 11:08:50.528531642Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=022f6e2d-a68b-4d22-bc0b-8427370e5146 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:08:50 multinode-736061 crio[665]: time="2024-09-16 11:08:50.528764568Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:84517e6af45b4aaa0074acc2c9f529a29e3e476ef98b8a9b414655341b967c3b,PodSandboxId:779060032a611374116cc1e94df9c7e4a6c1443abec99f7abdef9464025a6169,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726484826321922608,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-g9fqk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0dd08783-fcfd-441f-8bda-c82c0c15173e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:840a587a0926e82c37a05beb15b8a2038534eaa736ce1cfed1c0eecaa13a5cdd,PodSandboxId:19286465f900afb5c6da745e2d4672e76d0bd87c33d561cb8eff8479eb72b5c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726484771766190138,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nlhl2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ea84b9d-f364-4e26-8dc8-44c3b4d92417,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02223ab1824986d0b8986be277df48e11a9e023715892ea637bf405f10af7198,PodSandboxId:01381d4d113d1f5aa2f6b3834d0194f5b4909599f84a8b29647fabd8f0fa5f7c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726484771695842020,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: 5e515ac2-bcf5-4a0f-a3af-b4ee4e03a534,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a89ff755837adc838b2681df4fdb3f0cd2733af338156c540403f460fc59bc0,PodSandboxId:bd141ffff1a91e8b17d63ca2b8898889ab336bfde15ecdc79964aafaa1123465,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726484759714659550,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qb4tq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 933f074
9-7868-4e96-9b8e-67005545bbc5,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8c55edbe2173ff25f684104100b9d6d5b8d80563dadb06b4c24bb06b2bf68ee,PodSandboxId:cc5264d1c4b520dfff86dfc83596919ff23f9336b58339ea2a10f4492ba02b12,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726484759520358533,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ftj9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa72720f-1c4a-46a2-a733-f411ccb6f628,},A
nnotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b76d5d4ad419a79a8aa910e3cea3029c12e6a90c7750317c5fbe1c40b43aa762,PodSandboxId:f771edf6fcef2eb32fd9b47472910c255ef2ee1f910f415da1ca1b8287fece1e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726484748620274924,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de66983060c1e167c6b9498eb8b0a025,},Annotations:map
[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:769a75ad1934a20389b16c528015b4fd90be4bf92209ba2509e2cdf2e57e2a24,PodSandboxId:6237db42cfa9d23e63f16ff0ffa961fcbc1f9f25138e7c7fc11442d28aeecaff,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726484748618788280,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69d3e8c6e76d0bc1af3482326f7904d1,},Annotations:map[string]string{io.kubernetes.container.hash:
cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d53f9aec7bc35845a9064135b7cf6154a190d2b15edab056cfd8296575fb25ba,PodSandboxId:c1754b1d745471f13f360eb1605866caa0a1b248d76224212a537736d73a9f20,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726484748609822622,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94d3338940ee73a61a5075650d027904,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed73e9089f633c6b3f26009f325ee6cfdd62ef60de41cbb639ec4aa8b02b84a7,PodSandboxId:06f23871be821ecc20f321f0e711ab8e47f7a0308e547d26ab3365eaa22b0b27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726484748471452056,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efede0e1597c8cbe70740f3169f7ec4a,},Annotations:map[string]string{io.kubernetes.container.hash:
7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=022f6e2d-a68b-4d22-bc0b-8427370e5146 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:08:50 multinode-736061 crio[665]: time="2024-09-16 11:08:50.567266998Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b984c1d8-82e0-4ed3-b2ba-0376cc019486 name=/runtime.v1.RuntimeService/Version
	Sep 16 11:08:50 multinode-736061 crio[665]: time="2024-09-16 11:08:50.567384574Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b984c1d8-82e0-4ed3-b2ba-0376cc019486 name=/runtime.v1.RuntimeService/Version
	Sep 16 11:08:50 multinode-736061 crio[665]: time="2024-09-16 11:08:50.568454117Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6e1c02aa-6d59-44cc-935f-a5ff11b4279f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 11:08:50 multinode-736061 crio[665]: time="2024-09-16 11:08:50.568858277Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484930568834840,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6e1c02aa-6d59-44cc-935f-a5ff11b4279f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 11:08:50 multinode-736061 crio[665]: time="2024-09-16 11:08:50.569360751Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5a548064-ea4a-4d00-8712-e08bf2cbcb28 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:08:50 multinode-736061 crio[665]: time="2024-09-16 11:08:50.569416935Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5a548064-ea4a-4d00-8712-e08bf2cbcb28 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:08:50 multinode-736061 crio[665]: time="2024-09-16 11:08:50.569644962Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:84517e6af45b4aaa0074acc2c9f529a29e3e476ef98b8a9b414655341b967c3b,PodSandboxId:779060032a611374116cc1e94df9c7e4a6c1443abec99f7abdef9464025a6169,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726484826321922608,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-g9fqk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0dd08783-fcfd-441f-8bda-c82c0c15173e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:840a587a0926e82c37a05beb15b8a2038534eaa736ce1cfed1c0eecaa13a5cdd,PodSandboxId:19286465f900afb5c6da745e2d4672e76d0bd87c33d561cb8eff8479eb72b5c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726484771766190138,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nlhl2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ea84b9d-f364-4e26-8dc8-44c3b4d92417,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02223ab1824986d0b8986be277df48e11a9e023715892ea637bf405f10af7198,PodSandboxId:01381d4d113d1f5aa2f6b3834d0194f5b4909599f84a8b29647fabd8f0fa5f7c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726484771695842020,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: 5e515ac2-bcf5-4a0f-a3af-b4ee4e03a534,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a89ff755837adc838b2681df4fdb3f0cd2733af338156c540403f460fc59bc0,PodSandboxId:bd141ffff1a91e8b17d63ca2b8898889ab336bfde15ecdc79964aafaa1123465,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726484759714659550,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qb4tq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 933f074
9-7868-4e96-9b8e-67005545bbc5,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8c55edbe2173ff25f684104100b9d6d5b8d80563dadb06b4c24bb06b2bf68ee,PodSandboxId:cc5264d1c4b520dfff86dfc83596919ff23f9336b58339ea2a10f4492ba02b12,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726484759520358533,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ftj9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa72720f-1c4a-46a2-a733-f411ccb6f628,},A
nnotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b76d5d4ad419a79a8aa910e3cea3029c12e6a90c7750317c5fbe1c40b43aa762,PodSandboxId:f771edf6fcef2eb32fd9b47472910c255ef2ee1f910f415da1ca1b8287fece1e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726484748620274924,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de66983060c1e167c6b9498eb8b0a025,},Annotations:map
[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:769a75ad1934a20389b16c528015b4fd90be4bf92209ba2509e2cdf2e57e2a24,PodSandboxId:6237db42cfa9d23e63f16ff0ffa961fcbc1f9f25138e7c7fc11442d28aeecaff,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726484748618788280,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69d3e8c6e76d0bc1af3482326f7904d1,},Annotations:map[string]string{io.kubernetes.container.hash:
cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d53f9aec7bc35845a9064135b7cf6154a190d2b15edab056cfd8296575fb25ba,PodSandboxId:c1754b1d745471f13f360eb1605866caa0a1b248d76224212a537736d73a9f20,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726484748609822622,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94d3338940ee73a61a5075650d027904,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed73e9089f633c6b3f26009f325ee6cfdd62ef60de41cbb639ec4aa8b02b84a7,PodSandboxId:06f23871be821ecc20f321f0e711ab8e47f7a0308e547d26ab3365eaa22b0b27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726484748471452056,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efede0e1597c8cbe70740f3169f7ec4a,},Annotations:map[string]string{io.kubernetes.container.hash:
7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5a548064-ea4a-4d00-8712-e08bf2cbcb28 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	84517e6af45b4       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   About a minute ago   Running             busybox                   0                   779060032a611       busybox-7dff88458-g9fqk
	840a587a0926e       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      2 minutes ago        Running             coredns                   0                   19286465f900a       coredns-7c65d6cfc9-nlhl2
	02223ab182498       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Running             storage-provisioner       0                   01381d4d113d1       storage-provisioner
	7a89ff755837a       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      2 minutes ago        Running             kindnet-cni               0                   bd141ffff1a91       kindnet-qb4tq
	f8c55edbe2173       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      2 minutes ago        Running             kube-proxy                0                   cc5264d1c4b52       kube-proxy-ftj9p
	b76d5d4ad419a       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      3 minutes ago        Running             kube-scheduler            0                   f771edf6fcef2       kube-scheduler-multinode-736061
	769a75ad1934a       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      3 minutes ago        Running             etcd                      0                   6237db42cfa9d       etcd-multinode-736061
	d53f9aec7bc35       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      3 minutes ago        Running             kube-controller-manager   0                   c1754b1d74547       kube-controller-manager-multinode-736061
	ed73e9089f633       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      3 minutes ago        Running             kube-apiserver            0                   06f23871be821       kube-apiserver-multinode-736061
	
	
	==> coredns [840a587a0926e82c37a05beb15b8a2038534eaa736ce1cfed1c0eecaa13a5cdd] <==
	[INFO] 10.244.1.2:57967 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000151977s
	[INFO] 10.244.0.3:38411 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000205732s
	[INFO] 10.244.0.3:48472 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001859185s
	[INFO] 10.244.0.3:58999 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000160969s
	[INFO] 10.244.0.3:35408 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00007258s
	[INFO] 10.244.0.3:41914 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001221958s
	[INFO] 10.244.0.3:51441 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000075035s
	[INFO] 10.244.0.3:54367 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000064081s
	[INFO] 10.244.0.3:51073 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000061874s
	[INFO] 10.244.1.2:38827 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130826s
	[INFO] 10.244.1.2:49788 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142283s
	[INFO] 10.244.1.2:43407 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000083078s
	[INFO] 10.244.1.2:35506 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000123825s
	[INFO] 10.244.0.3:35311 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00008958s
	[INFO] 10.244.0.3:44801 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000055108s
	[INFO] 10.244.0.3:45405 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000039898s
	[INFO] 10.244.0.3:53790 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000037364s
	[INFO] 10.244.1.2:44863 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136337s
	[INFO] 10.244.1.2:38345 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000494388s
	[INFO] 10.244.1.2:36190 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000247796s
	[INFO] 10.244.1.2:38755 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000120111s
	[INFO] 10.244.0.3:58238 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129373s
	[INFO] 10.244.0.3:55519 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000102337s
	[INFO] 10.244.0.3:60945 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000061359s
	[INFO] 10.244.0.3:52747 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00010905s
	
	
	==> describe nodes <==
	Name:               multinode-736061
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-736061
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=multinode-736061
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T11_05_54_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 11:05:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-736061
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 11:08:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 11:07:25 +0000   Mon, 16 Sep 2024 11:05:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 11:07:25 +0000   Mon, 16 Sep 2024 11:05:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 11:07:25 +0000   Mon, 16 Sep 2024 11:05:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 11:07:25 +0000   Mon, 16 Sep 2024 11:06:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.32
	  Hostname:    multinode-736061
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 60fe80618d4f42e281d4c50393e9d89e
	  System UUID:                60fe8061-8d4f-42e2-81d4-c50393e9d89e
	  Boot ID:                    d046d280-229f-4e9a-8a6c-1986374da911
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-g9fqk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 coredns-7c65d6cfc9-nlhl2                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     2m51s
	  kube-system                 etcd-multinode-736061                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m57s
	  kube-system                 kindnet-qb4tq                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m52s
	  kube-system                 kube-apiserver-multinode-736061             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m57s
	  kube-system                 kube-controller-manager-multinode-736061    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m57s
	  kube-system                 kube-proxy-ftj9p                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m52s
	  kube-system                 kube-scheduler-multinode-736061             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m57s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m50s                kube-proxy       
	  Normal  NodeHasSufficientMemory  3m3s (x8 over 3m3s)  kubelet          Node multinode-736061 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m3s (x8 over 3m3s)  kubelet          Node multinode-736061 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m3s (x7 over 3m3s)  kubelet          Node multinode-736061 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m3s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m57s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m57s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m57s                kubelet          Node multinode-736061 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m57s                kubelet          Node multinode-736061 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m57s                kubelet          Node multinode-736061 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m52s                node-controller  Node multinode-736061 event: Registered Node multinode-736061 in Controller
	  Normal  NodeReady                2m39s                kubelet          Node multinode-736061 status is now: NodeReady
	
	
	Name:               multinode-736061-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-736061-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=multinode-736061
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T11_06_43_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 11:06:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-736061-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 11:08:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 11:07:13 +0000   Mon, 16 Sep 2024 11:06:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 11:07:13 +0000   Mon, 16 Sep 2024 11:06:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 11:07:13 +0000   Mon, 16 Sep 2024 11:06:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 11:07:13 +0000   Mon, 16 Sep 2024 11:07:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.215
	  Hostname:    multinode-736061-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d4fe337504134150bccd557919449b29
	  System UUID:                d4fe3375-0413-4150-bccd-557919449b29
	  Boot ID:                    96a98313-f000-4116-9acc-f37a0a79851e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-754d4    0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kindnet-xlrxb              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m8s
	  kube-system                 kube-proxy-8h6jp           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m2s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  2m8s (x2 over 2m8s)  kubelet          Node multinode-736061-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m8s (x2 over 2m8s)  kubelet          Node multinode-736061-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m8s (x2 over 2m8s)  kubelet          Node multinode-736061-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m7s                 node-controller  Node multinode-736061-m02 event: Registered Node multinode-736061-m02 in Controller
	  Normal  NodeReady                109s                 kubelet          Node multinode-736061-m02 status is now: NodeReady
	
	
	Name:               multinode-736061-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-736061-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=multinode-736061
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T11_08_29_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 11:08:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-736061-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 11:08:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 11:08:47 +0000   Mon, 16 Sep 2024 11:08:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 11:08:47 +0000   Mon, 16 Sep 2024 11:08:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 11:08:47 +0000   Mon, 16 Sep 2024 11:08:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 11:08:47 +0000   Mon, 16 Sep 2024 11:08:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.60
	  Hostname:    multinode-736061-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 890f5eb3683144b2b6dc0b58be15768f
	  System UUID:                890f5eb3-6831-44b2-b6dc-0b58be15768f
	  Boot ID:                    67f6e0ca-4e06-457e-a63b-772f0c7defc0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.4.0/24
	PodCIDRs:                     10.244.4.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-bvqrg       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      74s
	  kube-system                 kube-proxy-5hctk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 17s                kube-proxy       
	  Normal  Starting                 69s                kube-proxy       
	  Normal  NodeHasSufficientMemory  74s (x2 over 75s)  kubelet          Node multinode-736061-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    74s (x2 over 75s)  kubelet          Node multinode-736061-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     74s (x2 over 75s)  kubelet          Node multinode-736061-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  74s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                55s                kubelet          Node multinode-736061-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  22s (x2 over 22s)  kubelet          Node multinode-736061-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x2 over 22s)  kubelet          Node multinode-736061-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x2 over 22s)  kubelet          Node multinode-736061-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           17s                node-controller  Node multinode-736061-m03 event: Registered Node multinode-736061-m03 in Controller
	  Normal  NodeReady                3s                 kubelet          Node multinode-736061-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep16 11:05] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050701] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040449] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.798651] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.481620] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.570862] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.929227] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.065798] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064029] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.188943] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.125437] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.281577] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +3.899790] systemd-fstab-generator[747]: Ignoring "noauto" option for root device
	[  +3.897000] systemd-fstab-generator[878]: Ignoring "noauto" option for root device
	[  +0.059824] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.997335] systemd-fstab-generator[1219]: Ignoring "noauto" option for root device
	[  +0.078309] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.139976] systemd-fstab-generator[1321]: Ignoring "noauto" option for root device
	[  +0.076513] kauditd_printk_skb: 18 callbacks suppressed
	[Sep16 11:06] kauditd_printk_skb: 69 callbacks suppressed
	[Sep16 11:07] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [769a75ad1934a20389b16c528015b4fd90be4bf92209ba2509e2cdf2e57e2a24] <==
	{"level":"info","ts":"2024-09-16T11:05:49.385662Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T11:05:49.386023Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:05:49.386158Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T11:05:49.388969Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:05:49.389717Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.32:2379"}
	{"level":"info","ts":"2024-09-16T11:05:49.389814Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68bdcbcbc4b793bb","local-member-id":"d4c05646b7156589","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:05:49.389896Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:05:49.389930Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:05:49.390126Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T11:05:49.390157Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T11:05:49.392766Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:05:49.393463Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T11:06:03.777149Z","caller":"traceutil/trace.go:171","msg":"trace[927915415] transaction","detail":"{read_only:false; response_revision:407; number_of_response:1; }","duration":"125.996547ms","start":"2024-09-16T11:06:03.651108Z","end":"2024-09-16T11:06:03.777104Z","steps":["trace[927915415] 'process raft request'  (duration: 125.663993ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T11:06:42.434928Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"155.290318ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7316539574759162275 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-736061-m02.17f5b4c7bf86ac19\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-736061-m02.17f5b4c7bf86ac19\" value_size:642 lease:7316539574759161296 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-09-16T11:06:42.435173Z","caller":"traceutil/trace.go:171","msg":"trace[736335181] transaction","detail":"{read_only:false; response_revision:467; number_of_response:1; }","duration":"242.745028ms","start":"2024-09-16T11:06:42.192402Z","end":"2024-09-16T11:06:42.435147Z","steps":["trace[736335181] 'process raft request'  (duration: 86.752839ms)","trace[736335181] 'compare'  (duration: 155.030741ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-16T11:06:42.435488Z","caller":"traceutil/trace.go:171","msg":"trace[1491776336] transaction","detail":"{read_only:false; response_revision:468; number_of_response:1; }","duration":"164.53116ms","start":"2024-09-16T11:06:42.270945Z","end":"2024-09-16T11:06:42.435476Z","steps":["trace[1491776336] 'process raft request'  (duration: 164.128437ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T11:07:36.191017Z","caller":"traceutil/trace.go:171","msg":"trace[1370350330] linearizableReadLoop","detail":"{readStateIndex:632; appliedIndex:631; }","duration":"135.211812ms","start":"2024-09-16T11:07:36.055773Z","end":"2024-09-16T11:07:36.190985Z","steps":["trace[1370350330] 'read index received'  (duration: 127.332155ms)","trace[1370350330] 'applied index is now lower than readState.Index'  (duration: 7.878564ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-16T11:07:36.191190Z","caller":"traceutil/trace.go:171","msg":"trace[1606896706] transaction","detail":"{read_only:false; response_revision:598; number_of_response:1; }","duration":"230.440734ms","start":"2024-09-16T11:07:35.960732Z","end":"2024-09-16T11:07:36.191172Z","steps":["trace[1606896706] 'process raft request'  (duration: 222.394697ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T11:07:36.191504Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"135.712787ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-736061-m03\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T11:07:36.191575Z","caller":"traceutil/trace.go:171","msg":"trace[641878152] range","detail":"{range_begin:/registry/minions/multinode-736061-m03; range_end:; response_count:0; response_revision:598; }","duration":"135.807158ms","start":"2024-09-16T11:07:36.055751Z","end":"2024-09-16T11:07:36.191558Z","steps":["trace[641878152] 'agreement among raft nodes before linearized reading'  (duration: 135.656463ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T11:07:43.320131Z","caller":"traceutil/trace.go:171","msg":"trace[1026367264] linearizableReadLoop","detail":"{readStateIndex:678; appliedIndex:677; }","duration":"256.510329ms","start":"2024-09-16T11:07:43.063604Z","end":"2024-09-16T11:07:43.320115Z","steps":["trace[1026367264] 'read index received'  (duration: 208.747621ms)","trace[1026367264] 'applied index is now lower than readState.Index'  (duration: 47.76201ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-16T11:07:43.320580Z","caller":"traceutil/trace.go:171","msg":"trace[845413732] transaction","detail":"{read_only:false; response_revision:640; number_of_response:1; }","duration":"283.063625ms","start":"2024-09-16T11:07:43.037497Z","end":"2024-09-16T11:07:43.320560Z","steps":["trace[845413732] 'process raft request'  (duration: 234.904981ms)","trace[845413732] 'compare'  (duration: 47.473062ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-16T11:07:43.320947Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"257.339861ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-736061-m03\" ","response":"range_response_count:1 size:2893"}
	{"level":"info","ts":"2024-09-16T11:07:43.321022Z","caller":"traceutil/trace.go:171","msg":"trace[1372162398] range","detail":"{range_begin:/registry/minions/multinode-736061-m03; range_end:; response_count:1; response_revision:640; }","duration":"257.429414ms","start":"2024-09-16T11:07:43.063585Z","end":"2024-09-16T11:07:43.321014Z","steps":["trace[1372162398] 'agreement among raft nodes before linearized reading'  (duration: 257.097073ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T11:08:32.848686Z","caller":"traceutil/trace.go:171","msg":"trace[1433849770] transaction","detail":"{read_only:false; response_revision:728; number_of_response:1; }","duration":"176.13666ms","start":"2024-09-16T11:08:32.672526Z","end":"2024-09-16T11:08:32.848663Z","steps":["trace[1433849770] 'process raft request'  (duration: 175.720453ms)"],"step_count":1}
	
	
	==> kernel <==
	 11:08:50 up 3 min,  0 users,  load average: 0.16, 0.23, 0.10
	Linux multinode-736061 5.10.207 #1 SMP Sun Sep 15 20:39:46 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [7a89ff755837adc838b2681df4fdb3f0cd2733af338156c540403f460fc59bc0] <==
	I0916 11:08:20.881382       1 main.go:295] Handling node with IPs: map[192.168.39.60:{}]
	I0916 11:08:20.881519       1 main.go:322] Node multinode-736061-m03 has CIDR [10.244.2.0/24] 
	I0916 11:08:20.881756       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0916 11:08:20.881860       1 main.go:299] handling current node
	I0916 11:08:20.881979       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0916 11:08:20.882076       1 main.go:322] Node multinode-736061-m02 has CIDR [10.244.1.0/24] 
	I0916 11:08:30.877693       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0916 11:08:30.877795       1 main.go:299] handling current node
	I0916 11:08:30.877831       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0916 11:08:30.877849       1 main.go:322] Node multinode-736061-m02 has CIDR [10.244.1.0/24] 
	I0916 11:08:30.877999       1 main.go:295] Handling node with IPs: map[192.168.39.60:{}]
	I0916 11:08:30.878021       1 main.go:322] Node multinode-736061-m03 has CIDR [10.244.4.0/24] 
	I0916 11:08:30.878095       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.4.0/24 Src: <nil> Gw: 192.168.39.60 Flags: [] Table: 0} 
	I0916 11:08:40.878377       1 main.go:295] Handling node with IPs: map[192.168.39.60:{}]
	I0916 11:08:40.878495       1 main.go:322] Node multinode-736061-m03 has CIDR [10.244.4.0/24] 
	I0916 11:08:40.878711       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0916 11:08:40.878744       1 main.go:299] handling current node
	I0916 11:08:40.878776       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0916 11:08:40.878793       1 main.go:322] Node multinode-736061-m02 has CIDR [10.244.1.0/24] 
	I0916 11:08:50.878055       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0916 11:08:50.878124       1 main.go:322] Node multinode-736061-m02 has CIDR [10.244.1.0/24] 
	I0916 11:08:50.878258       1 main.go:295] Handling node with IPs: map[192.168.39.60:{}]
	I0916 11:08:50.878265       1 main.go:322] Node multinode-736061-m03 has CIDR [10.244.4.0/24] 
	I0916 11:08:50.878362       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0916 11:08:50.878370       1 main.go:299] handling current node
	
	
	==> kube-apiserver [ed73e9089f633c6b3f26009f325ee6cfdd62ef60de41cbb639ec4aa8b02b84a7] <==
	I0916 11:05:52.165415       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0916 11:05:52.169921       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0916 11:05:52.169932       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0916 11:05:52.809057       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 11:05:52.859716       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 11:05:52.992808       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0916 11:05:53.012050       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.32]
	I0916 11:05:53.013006       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 11:05:53.027136       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 11:05:53.217214       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 11:05:53.730360       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 11:05:53.742097       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0916 11:05:53.752008       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 11:05:58.672170       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0916 11:05:58.866528       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0916 11:07:07.434739       1 conn.go:339] Error on socket receive: read tcp 192.168.39.32:8443->192.168.39.1:53462: use of closed network connection
	E0916 11:07:07.613512       1 conn.go:339] Error on socket receive: read tcp 192.168.39.32:8443->192.168.39.1:53474: use of closed network connection
	E0916 11:07:07.861059       1 conn.go:339] Error on socket receive: read tcp 192.168.39.32:8443->192.168.39.1:53488: use of closed network connection
	E0916 11:07:08.036468       1 conn.go:339] Error on socket receive: read tcp 192.168.39.32:8443->192.168.39.1:53502: use of closed network connection
	E0916 11:07:08.198997       1 conn.go:339] Error on socket receive: read tcp 192.168.39.32:8443->192.168.39.1:53518: use of closed network connection
	E0916 11:07:08.379195       1 conn.go:339] Error on socket receive: read tcp 192.168.39.32:8443->192.168.39.1:53544: use of closed network connection
	E0916 11:07:08.653676       1 conn.go:339] Error on socket receive: read tcp 192.168.39.32:8443->192.168.39.1:53564: use of closed network connection
	E0916 11:07:08.827028       1 conn.go:339] Error on socket receive: read tcp 192.168.39.32:8443->192.168.39.1:53588: use of closed network connection
	E0916 11:07:08.989872       1 conn.go:339] Error on socket receive: read tcp 192.168.39.32:8443->192.168.39.1:53602: use of closed network connection
	E0916 11:07:09.164411       1 conn.go:339] Error on socket receive: read tcp 192.168.39.32:8443->192.168.39.1:53616: use of closed network connection
	
	
	==> kube-controller-manager [d53f9aec7bc35845a9064135b7cf6154a190d2b15edab056cfd8296575fb25ba] <==
	I0916 11:07:38.043165       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-736061-m03"
	I0916 11:07:38.154625       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:07:46.364908       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:07:56.014374       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-736061-m02"
	I0916 11:07:56.014402       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:07:56.025246       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:07:58.061011       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:08:06.911552       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:08:27.052009       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:08:27.068836       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:08:27.299944       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-736061-m02"
	I0916 11:08:27.299986       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:08:28.498604       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-736061-m03\" does not exist"
	I0916 11:08:28.499795       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-736061-m02"
	I0916 11:08:28.530214       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-736061-m03" podCIDRs=["10.244.4.0/24"]
	I0916 11:08:28.530257       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:08:28.530321       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:08:28.812678       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:08:29.131881       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:08:33.111007       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:08:38.696548       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:08:47.199430       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:08:47.199515       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-736061-m02"
	I0916 11:08:47.211278       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:08:48.081832       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	
	
	==> kube-proxy [f8c55edbe2173ff25f684104100b9d6d5b8d80563dadb06b4c24bb06b2bf68ee] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0916 11:05:59.852422       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0916 11:05:59.886836       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.32"]
	E0916 11:05:59.886976       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 11:05:59.944125       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0916 11:05:59.944160       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0916 11:05:59.944181       1 server_linux.go:169] "Using iptables Proxier"
	I0916 11:05:59.947733       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 11:05:59.948149       1 server.go:483] "Version info" version="v1.31.1"
	I0916 11:05:59.948393       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 11:05:59.949794       1 config.go:199] "Starting service config controller"
	I0916 11:05:59.949862       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 11:05:59.950230       1 config.go:105] "Starting endpoint slice config controller"
	I0916 11:05:59.950374       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 11:05:59.950923       1 config.go:328] "Starting node config controller"
	I0916 11:05:59.952219       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 11:06:00.050768       1 shared_informer.go:320] Caches are synced for service config
	I0916 11:06:00.050862       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 11:06:00.052567       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [b76d5d4ad419a79a8aa910e3cea3029c12e6a90c7750317c5fbe1c40b43aa762] <==
	W0916 11:05:52.226221       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 11:05:52.226438       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:05:52.286013       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 11:05:52.286065       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:05:52.292630       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 11:05:52.292712       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:05:52.303069       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 11:05:52.303177       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:05:52.308000       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 11:05:52.308078       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0916 11:05:52.326647       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 11:05:52.326746       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:05:52.367616       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 11:05:52.367800       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 11:05:52.407350       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 11:05:52.407398       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:05:52.423030       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 11:05:52.423081       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 11:05:52.501395       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 11:05:52.501587       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:05:52.597443       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 11:05:52.597573       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:05:52.652519       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 11:05:52.652625       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0916 11:05:55.090829       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 11:07:13 multinode-736061 kubelet[1226]: E0916 11:07:13.722211    1226 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484833721896310,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:07:13 multinode-736061 kubelet[1226]: E0916 11:07:13.722246    1226 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484833721896310,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:07:23 multinode-736061 kubelet[1226]: E0916 11:07:23.723259    1226 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484843722989186,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:07:23 multinode-736061 kubelet[1226]: E0916 11:07:23.724200    1226 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484843722989186,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:07:33 multinode-736061 kubelet[1226]: E0916 11:07:33.726192    1226 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484853725795872,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:07:33 multinode-736061 kubelet[1226]: E0916 11:07:33.726261    1226 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484853725795872,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:07:43 multinode-736061 kubelet[1226]: E0916 11:07:43.729464    1226 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484863727881449,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:07:43 multinode-736061 kubelet[1226]: E0916 11:07:43.729812    1226 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484863727881449,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:07:53 multinode-736061 kubelet[1226]: E0916 11:07:53.716929    1226 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 16 11:07:53 multinode-736061 kubelet[1226]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 16 11:07:53 multinode-736061 kubelet[1226]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 16 11:07:53 multinode-736061 kubelet[1226]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 16 11:07:53 multinode-736061 kubelet[1226]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 16 11:07:53 multinode-736061 kubelet[1226]: E0916 11:07:53.730823    1226 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484873730628806,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:07:53 multinode-736061 kubelet[1226]: E0916 11:07:53.730844    1226 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484873730628806,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:08:03 multinode-736061 kubelet[1226]: E0916 11:08:03.732259    1226 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484883731746872,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:08:03 multinode-736061 kubelet[1226]: E0916 11:08:03.732324    1226 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484883731746872,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:08:13 multinode-736061 kubelet[1226]: E0916 11:08:13.733801    1226 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484893733438003,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:08:13 multinode-736061 kubelet[1226]: E0916 11:08:13.734078    1226 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484893733438003,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:08:23 multinode-736061 kubelet[1226]: E0916 11:08:23.735403    1226 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484903735057856,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:08:23 multinode-736061 kubelet[1226]: E0916 11:08:23.735833    1226 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484903735057856,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:08:33 multinode-736061 kubelet[1226]: E0916 11:08:33.738918    1226 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484913738610956,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:08:33 multinode-736061 kubelet[1226]: E0916 11:08:33.739062    1226 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484913738610956,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:08:43 multinode-736061 kubelet[1226]: E0916 11:08:43.740827    1226 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484923740563178,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:08:43 multinode-736061 kubelet[1226]: E0916 11:08:43.741124    1226 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726484923740563178,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
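Two errors repeat throughout the kubelet log above: ip6tables cannot initialize its `nat' table (so the KUBE-KUBELET-CANARY chain is never created), and the eviction manager reports "missing image stats" from CRI-O's ImageFsInfo response. Neither is asserted on by the test, but the ip6tables side can be checked directly. A minimal sketch of how one might do that from inside the guest, assuming the Buildroot kernel simply lacks (or has not loaded) the ip6table_nat module; only standard kmod/iptables commands are used, nothing minikube-specific:

	# is IPv6 NAT support present at all?
	lsmod | grep -E 'ip6?table_nat'
	# try to load it; an error here points at a kernel built without IPv6 NAT
	sudo modprobe ip6table_nat
	# if the module loaded, listing the nat table should now succeed instead of erroring
	sudo ip6tables -t nat -L -n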
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-736061 -n multinode-736061
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-736061 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context multinode-736061 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (473.04µs)
helpers_test.go:263: kubectl --context multinode-736061 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestMultiNode/serial/StartAfterStop (41.55s)
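Every kubectl invocation in this run fails the same way: fork/exec /usr/local/bin/kubectl: exec format error. That error comes from the kernel refusing to execute the binary, which usually means the kubectl at that path was built for a different architecture than the amd64 CI host (or is truncated), independent of anything minikube did. A minimal sketch of how one might confirm and replace it on the Jenkins agent; the download URL follows the standard upstream kubectl install pattern and is an assumption about what this host actually needs:

	# compare the binary's target architecture with the host's
	file /usr/local/bin/kubectl
	uname -m
	# if they disagree, re-fetch a matching build (example: linux/amd64)
	curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
	sudo install -m 0755 kubectl /usr/local/bin/kubectl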

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (318.62s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-736061
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-736061
E0916 11:10:08.820929   11203 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-736061: exit status 82 (2m1.859121906s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-736061-m03"  ...
	* Stopping node "multinode-736061-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-736061" : exit status 82
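Exit status 82 corresponds to the GUEST_STOP_TIMEOUT shown in the stderr above: the kvm2 driver gave up while a VM was still "Running", and only the -m03 and -m02 nodes had reached the stopping step. A minimal sketch of how one might inspect the stuck machine directly through libvirt on the host; the kvm2 driver names libvirt domains after the profile (the "domain multinode-736061" DBG lines later in this log show the same name), and virsh availability on the agent is an assumption:

	# which minikube domains does libvirt still consider running?
	virsh list --all
	virsh dominfo multinode-736061
	# ask the guest to shut down via ACPI, then re-check its state
	virsh shutdown multinode-736061
	virsh domstate multinode-736061
	# the driver-side details end up in the file referenced by the error box
	cat /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log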
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-736061 --wait=true -v=8 --alsologtostderr
E0916 11:11:28.278535   11203 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-736061 --wait=true -v=8 --alsologtostderr: (3m14.564402841s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-736061
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-736061 -n multinode-736061
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736061 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-736061 logs -n 25: (1.503030998s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-736061 ssh -n                                                                 | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | multinode-736061-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-736061 cp multinode-736061-m02:/home/docker/cp-test.txt                       | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1886615299/001/cp-test_multinode-736061-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-736061 ssh -n                                                                 | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | multinode-736061-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-736061 cp multinode-736061-m02:/home/docker/cp-test.txt                       | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | multinode-736061:/home/docker/cp-test_multinode-736061-m02_multinode-736061.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-736061 ssh -n                                                                 | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | multinode-736061-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-736061 ssh -n multinode-736061 sudo cat                                       | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | /home/docker/cp-test_multinode-736061-m02_multinode-736061.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-736061 cp multinode-736061-m02:/home/docker/cp-test.txt                       | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | multinode-736061-m03:/home/docker/cp-test_multinode-736061-m02_multinode-736061-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-736061 ssh -n                                                                 | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | multinode-736061-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-736061 ssh -n multinode-736061-m03 sudo cat                                   | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | /home/docker/cp-test_multinode-736061-m02_multinode-736061-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-736061 cp testdata/cp-test.txt                                                | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | multinode-736061-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-736061 ssh -n                                                                 | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | multinode-736061-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-736061 cp multinode-736061-m03:/home/docker/cp-test.txt                       | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1886615299/001/cp-test_multinode-736061-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-736061 ssh -n                                                                 | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | multinode-736061-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-736061 cp multinode-736061-m03:/home/docker/cp-test.txt                       | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | multinode-736061:/home/docker/cp-test_multinode-736061-m03_multinode-736061.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-736061 ssh -n                                                                 | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | multinode-736061-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-736061 ssh -n multinode-736061 sudo cat                                       | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | /home/docker/cp-test_multinode-736061-m03_multinode-736061.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-736061 cp multinode-736061-m03:/home/docker/cp-test.txt                       | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | multinode-736061-m02:/home/docker/cp-test_multinode-736061-m03_multinode-736061-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-736061 ssh -n                                                                 | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | multinode-736061-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-736061 ssh -n multinode-736061-m02 sudo cat                                   | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | /home/docker/cp-test_multinode-736061-m03_multinode-736061-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-736061 node stop m03                                                          | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	| node    | multinode-736061 node start                                                             | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-736061                                                                | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC |                     |
	| stop    | -p multinode-736061                                                                     | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC |                     |
	| start   | -p multinode-736061                                                                     | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:10 UTC | 16 Sep 24 11:14 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-736061                                                                | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:14 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 11:10:53
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 11:10:53.764405   40135 out.go:345] Setting OutFile to fd 1 ...
	I0916 11:10:53.764697   40135 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:10:53.764708   40135 out.go:358] Setting ErrFile to fd 2...
	I0916 11:10:53.764714   40135 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:10:53.764934   40135 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3851/.minikube/bin
	I0916 11:10:53.765527   40135 out.go:352] Setting JSON to false
	I0916 11:10:53.766415   40135 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3204,"bootTime":1726481850,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 11:10:53.766501   40135 start.go:139] virtualization: kvm guest
	I0916 11:10:53.768975   40135 out.go:177] * [multinode-736061] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 11:10:53.770599   40135 notify.go:220] Checking for updates...
	I0916 11:10:53.770619   40135 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 11:10:53.772102   40135 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 11:10:53.773841   40135 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3851/kubeconfig
	I0916 11:10:53.775207   40135 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 11:10:53.776414   40135 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 11:10:53.777635   40135 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 11:10:53.779515   40135 config.go:182] Loaded profile config "multinode-736061": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:10:53.779637   40135 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 11:10:53.780265   40135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 11:10:53.780320   40135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 11:10:53.800988   40135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44813
	I0916 11:10:53.801446   40135 main.go:141] libmachine: () Calling .GetVersion
	I0916 11:10:53.801971   40135 main.go:141] libmachine: Using API Version  1
	I0916 11:10:53.801999   40135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 11:10:53.802338   40135 main.go:141] libmachine: () Calling .GetMachineName
	I0916 11:10:53.802498   40135 main.go:141] libmachine: (multinode-736061) Calling .DriverName
	I0916 11:10:53.837831   40135 out.go:177] * Using the kvm2 driver based on existing profile
	I0916 11:10:53.839032   40135 start.go:297] selected driver: kvm2
	I0916 11:10:53.839047   40135 start.go:901] validating driver "kvm2" against &{Name:multinode-736061 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-736061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.32 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.215 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.60 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:10:53.839202   40135 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 11:10:53.839496   40135 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:10:53.839555   40135 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19651-3851/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0916 11:10:53.854668   40135 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0916 11:10:53.855622   40135 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 11:10:53.855664   40135 cni.go:84] Creating CNI manager for ""
	I0916 11:10:53.855731   40135 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0916 11:10:53.855806   40135 start.go:340] cluster config:
	{Name:multinode-736061 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-736061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.32 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.215 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.60 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:10:53.856022   40135 iso.go:125] acquiring lock: {Name:mk8165b793e44378487d96b0de120a258f46e187 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:10:53.857966   40135 out.go:177] * Starting "multinode-736061" primary control-plane node in "multinode-736061" cluster
	I0916 11:10:53.859309   40135 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 11:10:53.859342   40135 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0916 11:10:53.859351   40135 cache.go:56] Caching tarball of preloaded images
	I0916 11:10:53.859419   40135 preload.go:172] Found /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 11:10:53.859428   40135 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 11:10:53.859533   40135 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/config.json ...
	I0916 11:10:53.859726   40135 start.go:360] acquireMachinesLock for multinode-736061: {Name:mk413037138532b69f77e412c3960796c09316f1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 11:10:53.859765   40135 start.go:364] duration metric: took 22.859µs to acquireMachinesLock for "multinode-736061"
	I0916 11:10:53.859779   40135 start.go:96] Skipping create...Using existing machine configuration
	I0916 11:10:53.859786   40135 fix.go:54] fixHost starting: 
	I0916 11:10:53.860046   40135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 11:10:53.860077   40135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 11:10:53.874501   40135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37223
	I0916 11:10:53.874913   40135 main.go:141] libmachine: () Calling .GetVersion
	I0916 11:10:53.875410   40135 main.go:141] libmachine: Using API Version  1
	I0916 11:10:53.875431   40135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 11:10:53.875784   40135 main.go:141] libmachine: () Calling .GetMachineName
	I0916 11:10:53.876057   40135 main.go:141] libmachine: (multinode-736061) Calling .DriverName
	I0916 11:10:53.876221   40135 main.go:141] libmachine: (multinode-736061) Calling .GetState
	I0916 11:10:53.877667   40135 fix.go:112] recreateIfNeeded on multinode-736061: state=Running err=<nil>
	W0916 11:10:53.877684   40135 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 11:10:53.880136   40135 out.go:177] * Updating the running kvm2 "multinode-736061" VM ...
	I0916 11:10:53.881210   40135 machine.go:93] provisionDockerMachine start ...
	I0916 11:10:53.881232   40135 main.go:141] libmachine: (multinode-736061) Calling .DriverName
	I0916 11:10:53.881421   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHHostname
	I0916 11:10:53.883804   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:10:53.884294   40135 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:10:53.884322   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:10:53.884407   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHPort
	I0916 11:10:53.884550   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:10:53.884689   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:10:53.884816   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHUsername
	I0916 11:10:53.884984   40135 main.go:141] libmachine: Using SSH client type: native
	I0916 11:10:53.885237   40135 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0916 11:10:53.885252   40135 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 11:10:54.002517   40135 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-736061
	
	I0916 11:10:54.002554   40135 main.go:141] libmachine: (multinode-736061) Calling .GetMachineName
	I0916 11:10:54.002793   40135 buildroot.go:166] provisioning hostname "multinode-736061"
	I0916 11:10:54.002819   40135 main.go:141] libmachine: (multinode-736061) Calling .GetMachineName
	I0916 11:10:54.003040   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHHostname
	I0916 11:10:54.006032   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:10:54.006431   40135 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:10:54.006466   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:10:54.006567   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHPort
	I0916 11:10:54.006771   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:10:54.006940   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:10:54.007101   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHUsername
	I0916 11:10:54.007282   40135 main.go:141] libmachine: Using SSH client type: native
	I0916 11:10:54.007489   40135 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0916 11:10:54.007510   40135 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-736061 && echo "multinode-736061" | sudo tee /etc/hostname
	I0916 11:10:54.134028   40135 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-736061
	
	I0916 11:10:54.134063   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHHostname
	I0916 11:10:54.136916   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:10:54.137328   40135 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:10:54.137354   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:10:54.137561   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHPort
	I0916 11:10:54.137782   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:10:54.137967   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:10:54.138136   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHUsername
	I0916 11:10:54.138312   40135 main.go:141] libmachine: Using SSH client type: native
	I0916 11:10:54.138554   40135 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0916 11:10:54.138581   40135 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-736061' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-736061/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-736061' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 11:10:54.254218   40135 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 11:10:54.254244   40135 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3851/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3851/.minikube}
	I0916 11:10:54.254262   40135 buildroot.go:174] setting up certificates
	I0916 11:10:54.254271   40135 provision.go:84] configureAuth start
	I0916 11:10:54.254279   40135 main.go:141] libmachine: (multinode-736061) Calling .GetMachineName
	I0916 11:10:54.254544   40135 main.go:141] libmachine: (multinode-736061) Calling .GetIP
	I0916 11:10:54.256878   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:10:54.257288   40135 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:10:54.257330   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:10:54.257423   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHHostname
	I0916 11:10:54.259620   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:10:54.259953   40135 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:10:54.259972   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:10:54.260142   40135 provision.go:143] copyHostCerts
	I0916 11:10:54.260180   40135 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem
	I0916 11:10:54.260205   40135 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem, removing ...
	I0916 11:10:54.260213   40135 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem
	I0916 11:10:54.260282   40135 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem (1123 bytes)
	I0916 11:10:54.260354   40135 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem
	I0916 11:10:54.260374   40135 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem, removing ...
	I0916 11:10:54.260383   40135 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem
	I0916 11:10:54.260419   40135 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem (1679 bytes)
	I0916 11:10:54.260483   40135 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem
	I0916 11:10:54.260506   40135 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem, removing ...
	I0916 11:10:54.260513   40135 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem
	I0916 11:10:54.260536   40135 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem (1082 bytes)
	I0916 11:10:54.260618   40135 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem org=jenkins.multinode-736061 san=[127.0.0.1 192.168.39.32 localhost minikube multinode-736061]
	I0916 11:10:54.392345   40135 provision.go:177] copyRemoteCerts
	I0916 11:10:54.392409   40135 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 11:10:54.392437   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHHostname
	I0916 11:10:54.394792   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:10:54.395075   40135 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:10:54.395103   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:10:54.395239   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHPort
	I0916 11:10:54.395432   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:10:54.395580   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHUsername
	I0916 11:10:54.395718   40135 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061/id_rsa Username:docker}
	I0916 11:10:54.480886   40135 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 11:10:54.480971   40135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 11:10:54.507550   40135 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 11:10:54.507629   40135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0916 11:10:54.534283   40135 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 11:10:54.534359   40135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 11:10:54.560933   40135 provision.go:87] duration metric: took 306.650302ms to configureAuth
	I0916 11:10:54.560963   40135 buildroot.go:189] setting minikube options for container-runtime
	I0916 11:10:54.561214   40135 config.go:182] Loaded profile config "multinode-736061": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:10:54.561286   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHHostname
	I0916 11:10:54.564044   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:10:54.564377   40135 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:10:54.564402   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:10:54.564575   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHPort
	I0916 11:10:54.564740   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:10:54.564908   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:10:54.565050   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHUsername
	I0916 11:10:54.565204   40135 main.go:141] libmachine: Using SSH client type: native
	I0916 11:10:54.565427   40135 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0916 11:10:54.565450   40135 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 11:12:25.365214   40135 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 11:12:25.365240   40135 machine.go:96] duration metric: took 1m31.484014406s to provisionDockerMachine
	I0916 11:12:25.365255   40135 start.go:293] postStartSetup for "multinode-736061" (driver="kvm2")
	I0916 11:12:25.365269   40135 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 11:12:25.365291   40135 main.go:141] libmachine: (multinode-736061) Calling .DriverName
	I0916 11:12:25.365801   40135 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 11:12:25.365839   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHHostname
	I0916 11:12:25.369181   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:12:25.369666   40135 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:12:25.369698   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:12:25.369949   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHPort
	I0916 11:12:25.370163   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:12:25.370371   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHUsername
	I0916 11:12:25.370519   40135 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061/id_rsa Username:docker}
	I0916 11:12:25.457301   40135 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 11:12:25.461731   40135 command_runner.go:130] > NAME=Buildroot
	I0916 11:12:25.461752   40135 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0916 11:12:25.461757   40135 command_runner.go:130] > ID=buildroot
	I0916 11:12:25.461762   40135 command_runner.go:130] > VERSION_ID=2023.02.9
	I0916 11:12:25.461767   40135 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0916 11:12:25.461812   40135 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 11:12:25.461826   40135 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3851/.minikube/addons for local assets ...
	I0916 11:12:25.461899   40135 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3851/.minikube/files for local assets ...
	I0916 11:12:25.461981   40135 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem -> 112032.pem in /etc/ssl/certs
	I0916 11:12:25.461992   40135 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem -> /etc/ssl/certs/112032.pem
	I0916 11:12:25.462072   40135 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 11:12:25.472346   40135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem --> /etc/ssl/certs/112032.pem (1708 bytes)
	I0916 11:12:25.497363   40135 start.go:296] duration metric: took 132.094435ms for postStartSetup
	I0916 11:12:25.497437   40135 fix.go:56] duration metric: took 1m31.637627262s for fixHost
	I0916 11:12:25.497463   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHHostname
	I0916 11:12:25.500226   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:12:25.500581   40135 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:12:25.500610   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:12:25.500790   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHPort
	I0916 11:12:25.500971   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:12:25.501144   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:12:25.501372   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHUsername
	I0916 11:12:25.501535   40135 main.go:141] libmachine: Using SSH client type: native
	I0916 11:12:25.501715   40135 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0916 11:12:25.501724   40135 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 11:12:25.609971   40135 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726485145.588914028
	
	I0916 11:12:25.609991   40135 fix.go:216] guest clock: 1726485145.588914028
	I0916 11:12:25.609998   40135 fix.go:229] Guest: 2024-09-16 11:12:25.588914028 +0000 UTC Remote: 2024-09-16 11:12:25.497444489 +0000 UTC m=+91.767542385 (delta=91.469539ms)
	I0916 11:12:25.610017   40135 fix.go:200] guest clock delta is within tolerance: 91.469539ms
	I0916 11:12:25.610022   40135 start.go:83] releasing machines lock for "multinode-736061", held for 1m31.750248345s
	I0916 11:12:25.610039   40135 main.go:141] libmachine: (multinode-736061) Calling .DriverName
	I0916 11:12:25.610285   40135 main.go:141] libmachine: (multinode-736061) Calling .GetIP
	I0916 11:12:25.613333   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:12:25.613834   40135 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:12:25.613871   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:12:25.614019   40135 main.go:141] libmachine: (multinode-736061) Calling .DriverName
	I0916 11:12:25.614475   40135 main.go:141] libmachine: (multinode-736061) Calling .DriverName
	I0916 11:12:25.614637   40135 main.go:141] libmachine: (multinode-736061) Calling .DriverName
	I0916 11:12:25.614712   40135 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 11:12:25.614767   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHHostname
	I0916 11:12:25.614820   40135 ssh_runner.go:195] Run: cat /version.json
	I0916 11:12:25.614838   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHHostname
	I0916 11:12:25.617271   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:12:25.617637   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:12:25.617681   40135 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:12:25.617697   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:12:25.617822   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHPort
	I0916 11:12:25.617976   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:12:25.618123   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHUsername
	I0916 11:12:25.618147   40135 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:12:25.618163   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:12:25.618311   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHPort
	I0916 11:12:25.618338   40135 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061/id_rsa Username:docker}
	I0916 11:12:25.618453   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:12:25.618578   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHUsername
	I0916 11:12:25.618694   40135 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061/id_rsa Username:docker}
	I0916 11:12:25.726440   40135 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0916 11:12:25.727099   40135 command_runner.go:130] > {"iso_version": "v1.34.0-1726415472-19646", "kicbase_version": "v0.0.45-1726358845-19644", "minikube_version": "v1.34.0", "commit": "7dc55c0008a982396eb57879cd4eab23ab96531e"}
	I0916 11:12:25.727256   40135 ssh_runner.go:195] Run: systemctl --version
	I0916 11:12:25.733715   40135 command_runner.go:130] > systemd 252 (252)
	I0916 11:12:25.733759   40135 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0916 11:12:25.733826   40135 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 11:12:25.889015   40135 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 11:12:25.896686   40135 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0916 11:12:25.897147   40135 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 11:12:25.897213   40135 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 11:12:25.906774   40135 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0916 11:12:25.906798   40135 start.go:495] detecting cgroup driver to use...
	I0916 11:12:25.906866   40135 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 11:12:25.924150   40135 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 11:12:25.938696   40135 docker.go:217] disabling cri-docker service (if available) ...
	I0916 11:12:25.938749   40135 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 11:12:25.952927   40135 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 11:12:25.967295   40135 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 11:12:26.111243   40135 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 11:12:26.252238   40135 docker.go:233] disabling docker service ...
	I0916 11:12:26.252310   40135 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 11:12:26.269485   40135 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 11:12:26.283580   40135 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 11:12:26.423452   40135 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 11:12:26.564033   40135 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 11:12:26.578149   40135 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 11:12:26.597842   40135 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0916 11:12:26.597888   40135 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 11:12:26.597941   40135 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:12:26.608772   40135 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 11:12:26.608829   40135 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:12:26.620194   40135 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:12:26.631946   40135 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:12:26.642904   40135 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 11:12:26.653934   40135 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:12:26.664685   40135 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:12:26.676602   40135 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:12:26.687924   40135 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 11:12:26.698235   40135 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0916 11:12:26.698315   40135 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 11:12:26.708091   40135 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:12:26.843091   40135 ssh_runner.go:195] Run: sudo systemctl restart crio
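The commands above point crictl at the CRI-O socket, pin the pause image, switch the cgroup manager, and restart CRI-O. A rough, self-contained Go sketch of the same edits run locally through "sh -c" (paths and values copied from the log; minikube itself issues these over SSH via ssh_runner, so this is only an illustration):

package main

import (
	"fmt"
	"os/exec"
)

// run executes one shell command and echoes its combined output.
func run(cmd string) error {
	out, err := exec.Command("sh", "-c", cmd).CombinedOutput()
	fmt.Printf("$ %s\n%s", cmd, out)
	return err
}

func main() {
	steps := []string{
		// Point crictl at the CRI-O socket (mirrors the tee into /etc/crictl.yaml above).
		`sudo sh -c "printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' > /etc/crictl.yaml"`,
		// Pin the pause image CRI-O uses for sandboxes.
		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf`,
		// Use the cgroupfs cgroup manager, matching the kubelet configuration.
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo systemctl daemon-reload`,
		`sudo systemctl restart crio`,
	}
	for _, s := range steps {
		if err := run(s); err != nil {
			fmt.Println("step failed:", err)
			return
		}
	}
}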
	I0916 11:12:27.073301   40135 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 11:12:27.073360   40135 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 11:12:27.078455   40135 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0916 11:12:27.078472   40135 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0916 11:12:27.078478   40135 command_runner.go:130] > Device: 0,22	Inode: 1304        Links: 1
	I0916 11:12:27.078485   40135 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 11:12:27.078490   40135 command_runner.go:130] > Access: 2024-09-16 11:12:26.940714941 +0000
	I0916 11:12:27.078504   40135 command_runner.go:130] > Modify: 2024-09-16 11:12:26.940714941 +0000
	I0916 11:12:27.078510   40135 command_runner.go:130] > Change: 2024-09-16 11:12:26.940714941 +0000
	I0916 11:12:27.078517   40135 command_runner.go:130] >  Birth: -
	I0916 11:12:27.078806   40135 start.go:563] Will wait 60s for crictl version
	I0916 11:12:27.078852   40135 ssh_runner.go:195] Run: which crictl
	I0916 11:12:27.082760   40135 command_runner.go:130] > /usr/bin/crictl
	I0916 11:12:27.082812   40135 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 11:12:27.121054   40135 command_runner.go:130] > Version:  0.1.0
	I0916 11:12:27.121076   40135 command_runner.go:130] > RuntimeName:  cri-o
	I0916 11:12:27.121081   40135 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0916 11:12:27.121086   40135 command_runner.go:130] > RuntimeApiVersion:  v1
	I0916 11:12:27.121338   40135 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0916 11:12:27.121408   40135 ssh_runner.go:195] Run: crio --version
	I0916 11:12:27.151162   40135 command_runner.go:130] > crio version 1.29.1
	I0916 11:12:27.151185   40135 command_runner.go:130] > Version:        1.29.1
	I0916 11:12:27.151194   40135 command_runner.go:130] > GitCommit:      unknown
	I0916 11:12:27.151201   40135 command_runner.go:130] > GitCommitDate:  unknown
	I0916 11:12:27.151206   40135 command_runner.go:130] > GitTreeState:   clean
	I0916 11:12:27.151214   40135 command_runner.go:130] > BuildDate:      2024-09-15T21:21:56Z
	I0916 11:12:27.151221   40135 command_runner.go:130] > GoVersion:      go1.21.6
	I0916 11:12:27.151227   40135 command_runner.go:130] > Compiler:       gc
	I0916 11:12:27.151233   40135 command_runner.go:130] > Platform:       linux/amd64
	I0916 11:12:27.151239   40135 command_runner.go:130] > Linkmode:       dynamic
	I0916 11:12:27.151249   40135 command_runner.go:130] > BuildTags:      
	I0916 11:12:27.151260   40135 command_runner.go:130] >   containers_image_ostree_stub
	I0916 11:12:27.151266   40135 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0916 11:12:27.151273   40135 command_runner.go:130] >   btrfs_noversion
	I0916 11:12:27.151280   40135 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0916 11:12:27.151289   40135 command_runner.go:130] >   libdm_no_deferred_remove
	I0916 11:12:27.151295   40135 command_runner.go:130] >   seccomp
	I0916 11:12:27.151304   40135 command_runner.go:130] > LDFlags:          unknown
	I0916 11:12:27.151310   40135 command_runner.go:130] > SeccompEnabled:   true
	I0916 11:12:27.151321   40135 command_runner.go:130] > AppArmorEnabled:  false
	I0916 11:12:27.151405   40135 ssh_runner.go:195] Run: crio --version
	I0916 11:12:27.181636   40135 command_runner.go:130] > crio version 1.29.1
	I0916 11:12:27.181664   40135 command_runner.go:130] > Version:        1.29.1
	I0916 11:12:27.181673   40135 command_runner.go:130] > GitCommit:      unknown
	I0916 11:12:27.181679   40135 command_runner.go:130] > GitCommitDate:  unknown
	I0916 11:12:27.181687   40135 command_runner.go:130] > GitTreeState:   clean
	I0916 11:12:27.181696   40135 command_runner.go:130] > BuildDate:      2024-09-15T21:21:56Z
	I0916 11:12:27.181702   40135 command_runner.go:130] > GoVersion:      go1.21.6
	I0916 11:12:27.181708   40135 command_runner.go:130] > Compiler:       gc
	I0916 11:12:27.181715   40135 command_runner.go:130] > Platform:       linux/amd64
	I0916 11:12:27.181722   40135 command_runner.go:130] > Linkmode:       dynamic
	I0916 11:12:27.181728   40135 command_runner.go:130] > BuildTags:      
	I0916 11:12:27.181736   40135 command_runner.go:130] >   containers_image_ostree_stub
	I0916 11:12:27.181742   40135 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0916 11:12:27.181752   40135 command_runner.go:130] >   btrfs_noversion
	I0916 11:12:27.181763   40135 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0916 11:12:27.181770   40135 command_runner.go:130] >   libdm_no_deferred_remove
	I0916 11:12:27.181778   40135 command_runner.go:130] >   seccomp
	I0916 11:12:27.181786   40135 command_runner.go:130] > LDFlags:          unknown
	I0916 11:12:27.181796   40135 command_runner.go:130] > SeccompEnabled:   true
	I0916 11:12:27.181802   40135 command_runner.go:130] > AppArmorEnabled:  false
	I0916 11:12:27.183887   40135 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0916 11:12:27.185243   40135 main.go:141] libmachine: (multinode-736061) Calling .GetIP
	I0916 11:12:27.187794   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:12:27.188123   40135 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:12:27.188146   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:12:27.188367   40135 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0916 11:12:27.192571   40135 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0916 11:12:27.192739   40135 kubeadm.go:883] updating cluster {Name:multinode-736061 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
31.1 ClusterName:multinode-736061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.32 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.215 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.60 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:fals
e inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disabl
eOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 11:12:27.192900   40135 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 11:12:27.192958   40135 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:12:27.238779   40135 command_runner.go:130] > {
	I0916 11:12:27.238813   40135 command_runner.go:130] >   "images": [
	I0916 11:12:27.238818   40135 command_runner.go:130] >     {
	I0916 11:12:27.238825   40135 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0916 11:12:27.238830   40135 command_runner.go:130] >       "repoTags": [
	I0916 11:12:27.238836   40135 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0916 11:12:27.238839   40135 command_runner.go:130] >       ],
	I0916 11:12:27.238844   40135 command_runner.go:130] >       "repoDigests": [
	I0916 11:12:27.238852   40135 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0916 11:12:27.238859   40135 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0916 11:12:27.238863   40135 command_runner.go:130] >       ],
	I0916 11:12:27.238870   40135 command_runner.go:130] >       "size": "87190579",
	I0916 11:12:27.238877   40135 command_runner.go:130] >       "uid": null,
	I0916 11:12:27.238884   40135 command_runner.go:130] >       "username": "",
	I0916 11:12:27.238893   40135 command_runner.go:130] >       "spec": null,
	I0916 11:12:27.238907   40135 command_runner.go:130] >       "pinned": false
	I0916 11:12:27.238911   40135 command_runner.go:130] >     },
	I0916 11:12:27.238915   40135 command_runner.go:130] >     {
	I0916 11:12:27.238921   40135 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0916 11:12:27.238926   40135 command_runner.go:130] >       "repoTags": [
	I0916 11:12:27.238931   40135 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0916 11:12:27.238935   40135 command_runner.go:130] >       ],
	I0916 11:12:27.238939   40135 command_runner.go:130] >       "repoDigests": [
	I0916 11:12:27.238947   40135 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0916 11:12:27.238958   40135 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0916 11:12:27.238969   40135 command_runner.go:130] >       ],
	I0916 11:12:27.238976   40135 command_runner.go:130] >       "size": "1363676",
	I0916 11:12:27.238982   40135 command_runner.go:130] >       "uid": null,
	I0916 11:12:27.238991   40135 command_runner.go:130] >       "username": "",
	I0916 11:12:27.239000   40135 command_runner.go:130] >       "spec": null,
	I0916 11:12:27.239006   40135 command_runner.go:130] >       "pinned": false
	I0916 11:12:27.239012   40135 command_runner.go:130] >     },
	I0916 11:12:27.239019   40135 command_runner.go:130] >     {
	I0916 11:12:27.239025   40135 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0916 11:12:27.239029   40135 command_runner.go:130] >       "repoTags": [
	I0916 11:12:27.239034   40135 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0916 11:12:27.239041   40135 command_runner.go:130] >       ],
	I0916 11:12:27.239047   40135 command_runner.go:130] >       "repoDigests": [
	I0916 11:12:27.239063   40135 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0916 11:12:27.239078   40135 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0916 11:12:27.239087   40135 command_runner.go:130] >       ],
	I0916 11:12:27.239093   40135 command_runner.go:130] >       "size": "31470524",
	I0916 11:12:27.239103   40135 command_runner.go:130] >       "uid": null,
	I0916 11:12:27.239109   40135 command_runner.go:130] >       "username": "",
	I0916 11:12:27.239116   40135 command_runner.go:130] >       "spec": null,
	I0916 11:12:27.239121   40135 command_runner.go:130] >       "pinned": false
	I0916 11:12:27.239129   40135 command_runner.go:130] >     },
	I0916 11:12:27.239135   40135 command_runner.go:130] >     {
	I0916 11:12:27.239149   40135 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0916 11:12:27.239158   40135 command_runner.go:130] >       "repoTags": [
	I0916 11:12:27.239168   40135 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0916 11:12:27.239176   40135 command_runner.go:130] >       ],
	I0916 11:12:27.239183   40135 command_runner.go:130] >       "repoDigests": [
	I0916 11:12:27.239196   40135 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0916 11:12:27.239213   40135 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0916 11:12:27.239222   40135 command_runner.go:130] >       ],
	I0916 11:12:27.239229   40135 command_runner.go:130] >       "size": "63273227",
	I0916 11:12:27.239238   40135 command_runner.go:130] >       "uid": null,
	I0916 11:12:27.239245   40135 command_runner.go:130] >       "username": "nonroot",
	I0916 11:12:27.239254   40135 command_runner.go:130] >       "spec": null,
	I0916 11:12:27.239264   40135 command_runner.go:130] >       "pinned": false
	I0916 11:12:27.239272   40135 command_runner.go:130] >     },
	I0916 11:12:27.239277   40135 command_runner.go:130] >     {
	I0916 11:12:27.239286   40135 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0916 11:12:27.239291   40135 command_runner.go:130] >       "repoTags": [
	I0916 11:12:27.239300   40135 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0916 11:12:27.239309   40135 command_runner.go:130] >       ],
	I0916 11:12:27.239316   40135 command_runner.go:130] >       "repoDigests": [
	I0916 11:12:27.239329   40135 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0916 11:12:27.239343   40135 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0916 11:12:27.239351   40135 command_runner.go:130] >       ],
	I0916 11:12:27.239358   40135 command_runner.go:130] >       "size": "149009664",
	I0916 11:12:27.239366   40135 command_runner.go:130] >       "uid": {
	I0916 11:12:27.239370   40135 command_runner.go:130] >         "value": "0"
	I0916 11:12:27.239375   40135 command_runner.go:130] >       },
	I0916 11:12:27.239381   40135 command_runner.go:130] >       "username": "",
	I0916 11:12:27.239390   40135 command_runner.go:130] >       "spec": null,
	I0916 11:12:27.239397   40135 command_runner.go:130] >       "pinned": false
	I0916 11:12:27.239404   40135 command_runner.go:130] >     },
	I0916 11:12:27.239409   40135 command_runner.go:130] >     {
	I0916 11:12:27.239420   40135 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0916 11:12:27.239430   40135 command_runner.go:130] >       "repoTags": [
	I0916 11:12:27.239438   40135 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0916 11:12:27.239447   40135 command_runner.go:130] >       ],
	I0916 11:12:27.239452   40135 command_runner.go:130] >       "repoDigests": [
	I0916 11:12:27.239463   40135 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0916 11:12:27.239475   40135 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0916 11:12:27.239484   40135 command_runner.go:130] >       ],
	I0916 11:12:27.239493   40135 command_runner.go:130] >       "size": "95237600",
	I0916 11:12:27.239502   40135 command_runner.go:130] >       "uid": {
	I0916 11:12:27.239508   40135 command_runner.go:130] >         "value": "0"
	I0916 11:12:27.239516   40135 command_runner.go:130] >       },
	I0916 11:12:27.239524   40135 command_runner.go:130] >       "username": "",
	I0916 11:12:27.239532   40135 command_runner.go:130] >       "spec": null,
	I0916 11:12:27.239538   40135 command_runner.go:130] >       "pinned": false
	I0916 11:12:27.239545   40135 command_runner.go:130] >     },
	I0916 11:12:27.239550   40135 command_runner.go:130] >     {
	I0916 11:12:27.239562   40135 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0916 11:12:27.239571   40135 command_runner.go:130] >       "repoTags": [
	I0916 11:12:27.239580   40135 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0916 11:12:27.239589   40135 command_runner.go:130] >       ],
	I0916 11:12:27.239596   40135 command_runner.go:130] >       "repoDigests": [
	I0916 11:12:27.239611   40135 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0916 11:12:27.239627   40135 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0916 11:12:27.239635   40135 command_runner.go:130] >       ],
	I0916 11:12:27.239639   40135 command_runner.go:130] >       "size": "89437508",
	I0916 11:12:27.239644   40135 command_runner.go:130] >       "uid": {
	I0916 11:12:27.239651   40135 command_runner.go:130] >         "value": "0"
	I0916 11:12:27.239658   40135 command_runner.go:130] >       },
	I0916 11:12:27.239665   40135 command_runner.go:130] >       "username": "",
	I0916 11:12:27.239674   40135 command_runner.go:130] >       "spec": null,
	I0916 11:12:27.239681   40135 command_runner.go:130] >       "pinned": false
	I0916 11:12:27.239689   40135 command_runner.go:130] >     },
	I0916 11:12:27.239695   40135 command_runner.go:130] >     {
	I0916 11:12:27.239709   40135 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0916 11:12:27.239716   40135 command_runner.go:130] >       "repoTags": [
	I0916 11:12:27.239724   40135 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0916 11:12:27.239728   40135 command_runner.go:130] >       ],
	I0916 11:12:27.239735   40135 command_runner.go:130] >       "repoDigests": [
	I0916 11:12:27.239758   40135 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0916 11:12:27.239773   40135 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0916 11:12:27.239779   40135 command_runner.go:130] >       ],
	I0916 11:12:27.239790   40135 command_runner.go:130] >       "size": "92733849",
	I0916 11:12:27.239799   40135 command_runner.go:130] >       "uid": null,
	I0916 11:12:27.239806   40135 command_runner.go:130] >       "username": "",
	I0916 11:12:27.239810   40135 command_runner.go:130] >       "spec": null,
	I0916 11:12:27.239815   40135 command_runner.go:130] >       "pinned": false
	I0916 11:12:27.239822   40135 command_runner.go:130] >     },
	I0916 11:12:27.239826   40135 command_runner.go:130] >     {
	I0916 11:12:27.239836   40135 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0916 11:12:27.239842   40135 command_runner.go:130] >       "repoTags": [
	I0916 11:12:27.239848   40135 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0916 11:12:27.239854   40135 command_runner.go:130] >       ],
	I0916 11:12:27.239860   40135 command_runner.go:130] >       "repoDigests": [
	I0916 11:12:27.239871   40135 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0916 11:12:27.239883   40135 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0916 11:12:27.239889   40135 command_runner.go:130] >       ],
	I0916 11:12:27.239895   40135 command_runner.go:130] >       "size": "68420934",
	I0916 11:12:27.239904   40135 command_runner.go:130] >       "uid": {
	I0916 11:12:27.239910   40135 command_runner.go:130] >         "value": "0"
	I0916 11:12:27.239918   40135 command_runner.go:130] >       },
	I0916 11:12:27.239922   40135 command_runner.go:130] >       "username": "",
	I0916 11:12:27.239928   40135 command_runner.go:130] >       "spec": null,
	I0916 11:12:27.239937   40135 command_runner.go:130] >       "pinned": false
	I0916 11:12:27.239946   40135 command_runner.go:130] >     },
	I0916 11:12:27.239954   40135 command_runner.go:130] >     {
	I0916 11:12:27.239967   40135 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0916 11:12:27.239978   40135 command_runner.go:130] >       "repoTags": [
	I0916 11:12:27.239988   40135 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0916 11:12:27.239997   40135 command_runner.go:130] >       ],
	I0916 11:12:27.240004   40135 command_runner.go:130] >       "repoDigests": [
	I0916 11:12:27.240013   40135 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0916 11:12:27.240027   40135 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0916 11:12:27.240036   40135 command_runner.go:130] >       ],
	I0916 11:12:27.240046   40135 command_runner.go:130] >       "size": "742080",
	I0916 11:12:27.240054   40135 command_runner.go:130] >       "uid": {
	I0916 11:12:27.240063   40135 command_runner.go:130] >         "value": "65535"
	I0916 11:12:27.240071   40135 command_runner.go:130] >       },
	I0916 11:12:27.240079   40135 command_runner.go:130] >       "username": "",
	I0916 11:12:27.240087   40135 command_runner.go:130] >       "spec": null,
	I0916 11:12:27.240091   40135 command_runner.go:130] >       "pinned": true
	I0916 11:12:27.240097   40135 command_runner.go:130] >     }
	I0916 11:12:27.240102   40135 command_runner.go:130] >   ]
	I0916 11:12:27.240109   40135 command_runner.go:130] > }
	I0916 11:12:27.240330   40135 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 11:12:27.240345   40135 crio.go:433] Images already preloaded, skipping extraction
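crio.go decides that extraction can be skipped by listing the images already in the CRI-O store and comparing them against the set required for the requested Kubernetes version. A hedged sketch of such a check, decoding the same "crictl images --output json" shape shown above and using a truncated required-image list as an example:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// crictlImages matches the subset of the JSON fields visible in the log.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	var listed crictlImages
	if err := json.Unmarshal(out, &listed); err != nil {
		fmt.Println("unexpected JSON:", err)
		return
	}
	have := map[string]bool{}
	for _, img := range listed.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	// Example subset of the v1.31.1 image set seen in the listing above.
	required := []string{
		"registry.k8s.io/kube-apiserver:v1.31.1",
		"registry.k8s.io/etcd:3.5.15-0",
		"registry.k8s.io/pause:3.10",
	}
	var missing []string
	for _, r := range required {
		if !have[r] {
			missing = append(missing, r)
		}
	}
	if len(missing) == 0 {
		fmt.Println("all images are preloaded")
	} else {
		fmt.Println("missing:", strings.Join(missing, ", "))
	}
}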
	I0916 11:12:27.240399   40135 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:12:27.285112   40135 command_runner.go:130] > {
	I0916 11:12:27.285150   40135 command_runner.go:130] >   "images": [
	I0916 11:12:27.285157   40135 command_runner.go:130] >     {
	I0916 11:12:27.285170   40135 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0916 11:12:27.285177   40135 command_runner.go:130] >       "repoTags": [
	I0916 11:12:27.285185   40135 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0916 11:12:27.285190   40135 command_runner.go:130] >       ],
	I0916 11:12:27.285197   40135 command_runner.go:130] >       "repoDigests": [
	I0916 11:12:27.285211   40135 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0916 11:12:27.285224   40135 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0916 11:12:27.285229   40135 command_runner.go:130] >       ],
	I0916 11:12:27.285240   40135 command_runner.go:130] >       "size": "87190579",
	I0916 11:12:27.285250   40135 command_runner.go:130] >       "uid": null,
	I0916 11:12:27.285257   40135 command_runner.go:130] >       "username": "",
	I0916 11:12:27.285271   40135 command_runner.go:130] >       "spec": null,
	I0916 11:12:27.285279   40135 command_runner.go:130] >       "pinned": false
	I0916 11:12:27.285283   40135 command_runner.go:130] >     },
	I0916 11:12:27.285288   40135 command_runner.go:130] >     {
	I0916 11:12:27.285301   40135 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0916 11:12:27.285308   40135 command_runner.go:130] >       "repoTags": [
	I0916 11:12:27.285319   40135 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0916 11:12:27.285331   40135 command_runner.go:130] >       ],
	I0916 11:12:27.285341   40135 command_runner.go:130] >       "repoDigests": [
	I0916 11:12:27.285356   40135 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0916 11:12:27.285367   40135 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0916 11:12:27.285374   40135 command_runner.go:130] >       ],
	I0916 11:12:27.285381   40135 command_runner.go:130] >       "size": "1363676",
	I0916 11:12:27.285389   40135 command_runner.go:130] >       "uid": null,
	I0916 11:12:27.285399   40135 command_runner.go:130] >       "username": "",
	I0916 11:12:27.285407   40135 command_runner.go:130] >       "spec": null,
	I0916 11:12:27.285414   40135 command_runner.go:130] >       "pinned": false
	I0916 11:12:27.285423   40135 command_runner.go:130] >     },
	I0916 11:12:27.285428   40135 command_runner.go:130] >     {
	I0916 11:12:27.285441   40135 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0916 11:12:27.285450   40135 command_runner.go:130] >       "repoTags": [
	I0916 11:12:27.285460   40135 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0916 11:12:27.285467   40135 command_runner.go:130] >       ],
	I0916 11:12:27.285472   40135 command_runner.go:130] >       "repoDigests": [
	I0916 11:12:27.285480   40135 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0916 11:12:27.285490   40135 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0916 11:12:27.285496   40135 command_runner.go:130] >       ],
	I0916 11:12:27.285500   40135 command_runner.go:130] >       "size": "31470524",
	I0916 11:12:27.285506   40135 command_runner.go:130] >       "uid": null,
	I0916 11:12:27.285510   40135 command_runner.go:130] >       "username": "",
	I0916 11:12:27.285515   40135 command_runner.go:130] >       "spec": null,
	I0916 11:12:27.285521   40135 command_runner.go:130] >       "pinned": false
	I0916 11:12:27.285524   40135 command_runner.go:130] >     },
	I0916 11:12:27.285528   40135 command_runner.go:130] >     {
	I0916 11:12:27.285534   40135 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0916 11:12:27.285540   40135 command_runner.go:130] >       "repoTags": [
	I0916 11:12:27.285547   40135 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0916 11:12:27.285552   40135 command_runner.go:130] >       ],
	I0916 11:12:27.285556   40135 command_runner.go:130] >       "repoDigests": [
	I0916 11:12:27.285563   40135 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0916 11:12:27.285577   40135 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0916 11:12:27.285582   40135 command_runner.go:130] >       ],
	I0916 11:12:27.285586   40135 command_runner.go:130] >       "size": "63273227",
	I0916 11:12:27.285591   40135 command_runner.go:130] >       "uid": null,
	I0916 11:12:27.285596   40135 command_runner.go:130] >       "username": "nonroot",
	I0916 11:12:27.285602   40135 command_runner.go:130] >       "spec": null,
	I0916 11:12:27.285606   40135 command_runner.go:130] >       "pinned": false
	I0916 11:12:27.285610   40135 command_runner.go:130] >     },
	I0916 11:12:27.285613   40135 command_runner.go:130] >     {
	I0916 11:12:27.285619   40135 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0916 11:12:27.285624   40135 command_runner.go:130] >       "repoTags": [
	I0916 11:12:27.285628   40135 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0916 11:12:27.285631   40135 command_runner.go:130] >       ],
	I0916 11:12:27.285635   40135 command_runner.go:130] >       "repoDigests": [
	I0916 11:12:27.285644   40135 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0916 11:12:27.285651   40135 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0916 11:12:27.285656   40135 command_runner.go:130] >       ],
	I0916 11:12:27.285661   40135 command_runner.go:130] >       "size": "149009664",
	I0916 11:12:27.285664   40135 command_runner.go:130] >       "uid": {
	I0916 11:12:27.285668   40135 command_runner.go:130] >         "value": "0"
	I0916 11:12:27.285671   40135 command_runner.go:130] >       },
	I0916 11:12:27.285675   40135 command_runner.go:130] >       "username": "",
	I0916 11:12:27.285680   40135 command_runner.go:130] >       "spec": null,
	I0916 11:12:27.285685   40135 command_runner.go:130] >       "pinned": false
	I0916 11:12:27.285689   40135 command_runner.go:130] >     },
	I0916 11:12:27.285692   40135 command_runner.go:130] >     {
	I0916 11:12:27.285698   40135 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0916 11:12:27.285704   40135 command_runner.go:130] >       "repoTags": [
	I0916 11:12:27.285709   40135 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0916 11:12:27.285712   40135 command_runner.go:130] >       ],
	I0916 11:12:27.285716   40135 command_runner.go:130] >       "repoDigests": [
	I0916 11:12:27.285723   40135 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0916 11:12:27.285731   40135 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0916 11:12:27.285737   40135 command_runner.go:130] >       ],
	I0916 11:12:27.285741   40135 command_runner.go:130] >       "size": "95237600",
	I0916 11:12:27.285745   40135 command_runner.go:130] >       "uid": {
	I0916 11:12:27.285749   40135 command_runner.go:130] >         "value": "0"
	I0916 11:12:27.285752   40135 command_runner.go:130] >       },
	I0916 11:12:27.285756   40135 command_runner.go:130] >       "username": "",
	I0916 11:12:27.285760   40135 command_runner.go:130] >       "spec": null,
	I0916 11:12:27.285764   40135 command_runner.go:130] >       "pinned": false
	I0916 11:12:27.285767   40135 command_runner.go:130] >     },
	I0916 11:12:27.285771   40135 command_runner.go:130] >     {
	I0916 11:12:27.285777   40135 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0916 11:12:27.285781   40135 command_runner.go:130] >       "repoTags": [
	I0916 11:12:27.285787   40135 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0916 11:12:27.285796   40135 command_runner.go:130] >       ],
	I0916 11:12:27.285800   40135 command_runner.go:130] >       "repoDigests": [
	I0916 11:12:27.285808   40135 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0916 11:12:27.285816   40135 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0916 11:12:27.285821   40135 command_runner.go:130] >       ],
	I0916 11:12:27.285825   40135 command_runner.go:130] >       "size": "89437508",
	I0916 11:12:27.285829   40135 command_runner.go:130] >       "uid": {
	I0916 11:12:27.285835   40135 command_runner.go:130] >         "value": "0"
	I0916 11:12:27.285839   40135 command_runner.go:130] >       },
	I0916 11:12:27.285843   40135 command_runner.go:130] >       "username": "",
	I0916 11:12:27.285847   40135 command_runner.go:130] >       "spec": null,
	I0916 11:12:27.285851   40135 command_runner.go:130] >       "pinned": false
	I0916 11:12:27.285854   40135 command_runner.go:130] >     },
	I0916 11:12:27.285857   40135 command_runner.go:130] >     {
	I0916 11:12:27.285865   40135 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0916 11:12:27.285869   40135 command_runner.go:130] >       "repoTags": [
	I0916 11:12:27.285875   40135 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0916 11:12:27.285878   40135 command_runner.go:130] >       ],
	I0916 11:12:27.285882   40135 command_runner.go:130] >       "repoDigests": [
	I0916 11:12:27.285904   40135 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0916 11:12:27.285914   40135 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0916 11:12:27.285918   40135 command_runner.go:130] >       ],
	I0916 11:12:27.285923   40135 command_runner.go:130] >       "size": "92733849",
	I0916 11:12:27.285926   40135 command_runner.go:130] >       "uid": null,
	I0916 11:12:27.285930   40135 command_runner.go:130] >       "username": "",
	I0916 11:12:27.285934   40135 command_runner.go:130] >       "spec": null,
	I0916 11:12:27.285938   40135 command_runner.go:130] >       "pinned": false
	I0916 11:12:27.285941   40135 command_runner.go:130] >     },
	I0916 11:12:27.285944   40135 command_runner.go:130] >     {
	I0916 11:12:27.285951   40135 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0916 11:12:27.285956   40135 command_runner.go:130] >       "repoTags": [
	I0916 11:12:27.285961   40135 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0916 11:12:27.285964   40135 command_runner.go:130] >       ],
	I0916 11:12:27.285968   40135 command_runner.go:130] >       "repoDigests": [
	I0916 11:12:27.285975   40135 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0916 11:12:27.285984   40135 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0916 11:12:27.285987   40135 command_runner.go:130] >       ],
	I0916 11:12:27.285992   40135 command_runner.go:130] >       "size": "68420934",
	I0916 11:12:27.285998   40135 command_runner.go:130] >       "uid": {
	I0916 11:12:27.286002   40135 command_runner.go:130] >         "value": "0"
	I0916 11:12:27.286005   40135 command_runner.go:130] >       },
	I0916 11:12:27.286009   40135 command_runner.go:130] >       "username": "",
	I0916 11:12:27.286013   40135 command_runner.go:130] >       "spec": null,
	I0916 11:12:27.286017   40135 command_runner.go:130] >       "pinned": false
	I0916 11:12:27.286022   40135 command_runner.go:130] >     },
	I0916 11:12:27.286027   40135 command_runner.go:130] >     {
	I0916 11:12:27.286033   40135 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0916 11:12:27.286040   40135 command_runner.go:130] >       "repoTags": [
	I0916 11:12:27.286044   40135 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0916 11:12:27.286050   40135 command_runner.go:130] >       ],
	I0916 11:12:27.286054   40135 command_runner.go:130] >       "repoDigests": [
	I0916 11:12:27.286061   40135 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0916 11:12:27.286069   40135 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0916 11:12:27.286074   40135 command_runner.go:130] >       ],
	I0916 11:12:27.286080   40135 command_runner.go:130] >       "size": "742080",
	I0916 11:12:27.286084   40135 command_runner.go:130] >       "uid": {
	I0916 11:12:27.286090   40135 command_runner.go:130] >         "value": "65535"
	I0916 11:12:27.286094   40135 command_runner.go:130] >       },
	I0916 11:12:27.286098   40135 command_runner.go:130] >       "username": "",
	I0916 11:12:27.286101   40135 command_runner.go:130] >       "spec": null,
	I0916 11:12:27.286107   40135 command_runner.go:130] >       "pinned": true
	I0916 11:12:27.286111   40135 command_runner.go:130] >     }
	I0916 11:12:27.286114   40135 command_runner.go:130] >   ]
	I0916 11:12:27.286117   40135 command_runner.go:130] > }
	I0916 11:12:27.286227   40135 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 11:12:27.286237   40135 cache_images.go:84] Images are preloaded, skipping loading
	I0916 11:12:27.286244   40135 kubeadm.go:934] updating node { 192.168.39.32 8443 v1.31.1 crio true true} ...
	I0916 11:12:27.286331   40135 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-736061 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.32
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-736061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
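The kubelet unit drop-in above is rendered from the node's hostname, IP and Kubernetes version. An illustrative Go text/template sketch (not minikube's actual template; the field names here are invented for the example) that would produce an equivalent unit:

package main

import (
	"os"
	"text/template"
)

// kubeletUnit mirrors the drop-in shown in the log, with the node-specific
// values templated out.
const kubeletUnit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(kubeletUnit))
	// Values copied from the cluster state logged above.
	_ = tmpl.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.31.1",
		"NodeName":          "multinode-736061",
		"NodeIP":            "192.168.39.32",
	})
}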
	I0916 11:12:27.286392   40135 ssh_runner.go:195] Run: crio config
	I0916 11:12:27.326001   40135 command_runner.go:130] ! time="2024-09-16 11:12:27.304932753Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0916 11:12:27.332712   40135 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0916 11:12:27.346533   40135 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0916 11:12:27.346557   40135 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0916 11:12:27.346564   40135 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0916 11:12:27.346567   40135 command_runner.go:130] > #
	I0916 11:12:27.346573   40135 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0916 11:12:27.346580   40135 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0916 11:12:27.346585   40135 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0916 11:12:27.346594   40135 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0916 11:12:27.346599   40135 command_runner.go:130] > # reload'.
	I0916 11:12:27.346605   40135 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0916 11:12:27.346611   40135 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0916 11:12:27.346617   40135 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0916 11:12:27.346625   40135 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0916 11:12:27.346629   40135 command_runner.go:130] > [crio]
	I0916 11:12:27.346634   40135 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0916 11:12:27.346641   40135 command_runner.go:130] > # containers images, in this directory.
	I0916 11:12:27.346646   40135 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0916 11:12:27.346655   40135 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0916 11:12:27.346674   40135 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0916 11:12:27.346683   40135 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0916 11:12:27.346690   40135 command_runner.go:130] > # imagestore = ""
	I0916 11:12:27.346696   40135 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0916 11:12:27.346705   40135 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0916 11:12:27.346710   40135 command_runner.go:130] > storage_driver = "overlay"
	I0916 11:12:27.346716   40135 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0916 11:12:27.346723   40135 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0916 11:12:27.346730   40135 command_runner.go:130] > storage_option = [
	I0916 11:12:27.346736   40135 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0916 11:12:27.346742   40135 command_runner.go:130] > ]
	I0916 11:12:27.346748   40135 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0916 11:12:27.346756   40135 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0916 11:12:27.346762   40135 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0916 11:12:27.346769   40135 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0916 11:12:27.346775   40135 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0916 11:12:27.346782   40135 command_runner.go:130] > # always happen on a node reboot
	I0916 11:12:27.346787   40135 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0916 11:12:27.346797   40135 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0916 11:12:27.346805   40135 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0916 11:12:27.346811   40135 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0916 11:12:27.346818   40135 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0916 11:12:27.346825   40135 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0916 11:12:27.346834   40135 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0916 11:12:27.346840   40135 command_runner.go:130] > # internal_wipe = true
	I0916 11:12:27.346849   40135 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0916 11:12:27.346856   40135 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0916 11:12:27.346863   40135 command_runner.go:130] > # internal_repair = false
	I0916 11:12:27.346874   40135 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0916 11:12:27.346883   40135 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0916 11:12:27.346890   40135 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0916 11:12:27.346897   40135 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0916 11:12:27.346904   40135 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0916 11:12:27.346909   40135 command_runner.go:130] > [crio.api]
	I0916 11:12:27.346915   40135 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0916 11:12:27.346921   40135 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0916 11:12:27.346927   40135 command_runner.go:130] > # IP address on which the stream server will listen.
	I0916 11:12:27.346933   40135 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0916 11:12:27.346940   40135 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0916 11:12:27.346947   40135 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0916 11:12:27.346951   40135 command_runner.go:130] > # stream_port = "0"
	I0916 11:12:27.346957   40135 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0916 11:12:27.346964   40135 command_runner.go:130] > # stream_enable_tls = false
	I0916 11:12:27.346970   40135 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0916 11:12:27.346976   40135 command_runner.go:130] > # stream_idle_timeout = ""
	I0916 11:12:27.346982   40135 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0916 11:12:27.346990   40135 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0916 11:12:27.346995   40135 command_runner.go:130] > # minutes.
	I0916 11:12:27.346999   40135 command_runner.go:130] > # stream_tls_cert = ""
	I0916 11:12:27.347007   40135 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0916 11:12:27.347015   40135 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0916 11:12:27.347021   40135 command_runner.go:130] > # stream_tls_key = ""
	I0916 11:12:27.347026   40135 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0916 11:12:27.347034   40135 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0916 11:12:27.347049   40135 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0916 11:12:27.347055   40135 command_runner.go:130] > # stream_tls_ca = ""
	I0916 11:12:27.347065   40135 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0916 11:12:27.347071   40135 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0916 11:12:27.347078   40135 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0916 11:12:27.347085   40135 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0916 11:12:27.347091   40135 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0916 11:12:27.347099   40135 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0916 11:12:27.347105   40135 command_runner.go:130] > [crio.runtime]
	I0916 11:12:27.347111   40135 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0916 11:12:27.347118   40135 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0916 11:12:27.347124   40135 command_runner.go:130] > # "nofile=1024:2048"
	I0916 11:12:27.347130   40135 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0916 11:12:27.347135   40135 command_runner.go:130] > # default_ulimits = [
	I0916 11:12:27.347139   40135 command_runner.go:130] > # ]
	I0916 11:12:27.347144   40135 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0916 11:12:27.347150   40135 command_runner.go:130] > # no_pivot = false
	I0916 11:12:27.347156   40135 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0916 11:12:27.347164   40135 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0916 11:12:27.347171   40135 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0916 11:12:27.347177   40135 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0916 11:12:27.347184   40135 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0916 11:12:27.347194   40135 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0916 11:12:27.347200   40135 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0916 11:12:27.347205   40135 command_runner.go:130] > # Cgroup setting for conmon
	I0916 11:12:27.347214   40135 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0916 11:12:27.347219   40135 command_runner.go:130] > conmon_cgroup = "pod"
	I0916 11:12:27.347225   40135 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0916 11:12:27.347234   40135 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0916 11:12:27.347242   40135 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0916 11:12:27.347247   40135 command_runner.go:130] > conmon_env = [
	I0916 11:12:27.347253   40135 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0916 11:12:27.347258   40135 command_runner.go:130] > ]
	I0916 11:12:27.347263   40135 command_runner.go:130] > # Additional environment variables to set for all the
	I0916 11:12:27.347270   40135 command_runner.go:130] > # containers. These are overridden if set in the
	I0916 11:12:27.347276   40135 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0916 11:12:27.347282   40135 command_runner.go:130] > # default_env = [
	I0916 11:12:27.347285   40135 command_runner.go:130] > # ]
	I0916 11:12:27.347293   40135 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0916 11:12:27.347300   40135 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0916 11:12:27.347306   40135 command_runner.go:130] > # selinux = false
	I0916 11:12:27.347312   40135 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0916 11:12:27.347320   40135 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0916 11:12:27.347328   40135 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0916 11:12:27.347332   40135 command_runner.go:130] > # seccomp_profile = ""
	I0916 11:12:27.347340   40135 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0916 11:12:27.347345   40135 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0916 11:12:27.347353   40135 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0916 11:12:27.347358   40135 command_runner.go:130] > # which might increase security.
	I0916 11:12:27.347363   40135 command_runner.go:130] > # This option is currently deprecated,
	I0916 11:12:27.347370   40135 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0916 11:12:27.347375   40135 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0916 11:12:27.347383   40135 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0916 11:12:27.347391   40135 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0916 11:12:27.347399   40135 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0916 11:12:27.347407   40135 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0916 11:12:27.347414   40135 command_runner.go:130] > # This option supports live configuration reload.
	I0916 11:12:27.347419   40135 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0916 11:12:27.347426   40135 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0916 11:12:27.347430   40135 command_runner.go:130] > # the cgroup blockio controller.
	I0916 11:12:27.347435   40135 command_runner.go:130] > # blockio_config_file = ""
	I0916 11:12:27.347441   40135 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0916 11:12:27.347446   40135 command_runner.go:130] > # blockio parameters.
	I0916 11:12:27.347450   40135 command_runner.go:130] > # blockio_reload = false
	I0916 11:12:27.347458   40135 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0916 11:12:27.347466   40135 command_runner.go:130] > # irqbalance daemon.
	I0916 11:12:27.347470   40135 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0916 11:12:27.347478   40135 command_runner.go:130] > # irqbalance_config_restore_file allows setting a cpu mask CRI-O should
	I0916 11:12:27.347488   40135 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0916 11:12:27.347497   40135 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0916 11:12:27.347503   40135 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0916 11:12:27.347511   40135 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0916 11:12:27.347517   40135 command_runner.go:130] > # This option supports live configuration reload.
	I0916 11:12:27.347523   40135 command_runner.go:130] > # rdt_config_file = ""
	I0916 11:12:27.347528   40135 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0916 11:12:27.347535   40135 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0916 11:12:27.347550   40135 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0916 11:12:27.347556   40135 command_runner.go:130] > # separate_pull_cgroup = ""
	I0916 11:12:27.347562   40135 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0916 11:12:27.347568   40135 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0916 11:12:27.347574   40135 command_runner.go:130] > # will be added.
	I0916 11:12:27.347578   40135 command_runner.go:130] > # default_capabilities = [
	I0916 11:12:27.347583   40135 command_runner.go:130] > # 	"CHOWN",
	I0916 11:12:27.347588   40135 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0916 11:12:27.347594   40135 command_runner.go:130] > # 	"FSETID",
	I0916 11:12:27.347597   40135 command_runner.go:130] > # 	"FOWNER",
	I0916 11:12:27.347603   40135 command_runner.go:130] > # 	"SETGID",
	I0916 11:12:27.347607   40135 command_runner.go:130] > # 	"SETUID",
	I0916 11:12:27.347613   40135 command_runner.go:130] > # 	"SETPCAP",
	I0916 11:12:27.347617   40135 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0916 11:12:27.347621   40135 command_runner.go:130] > # 	"KILL",
	I0916 11:12:27.347624   40135 command_runner.go:130] > # ]
	I0916 11:12:27.347632   40135 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0916 11:12:27.347640   40135 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0916 11:12:27.347645   40135 command_runner.go:130] > # add_inheritable_capabilities = false
	I0916 11:12:27.347653   40135 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0916 11:12:27.347659   40135 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0916 11:12:27.347665   40135 command_runner.go:130] > default_sysctls = [
	I0916 11:12:27.347669   40135 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0916 11:12:27.347673   40135 command_runner.go:130] > ]
	I0916 11:12:27.347677   40135 command_runner.go:130] > # List of devices on the host that a
	I0916 11:12:27.347684   40135 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0916 11:12:27.347688   40135 command_runner.go:130] > # allowed_devices = [
	I0916 11:12:27.347694   40135 command_runner.go:130] > # 	"/dev/fuse",
	I0916 11:12:27.347697   40135 command_runner.go:130] > # ]
	I0916 11:12:27.347705   40135 command_runner.go:130] > # List of additional devices, specified as
	I0916 11:12:27.347712   40135 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0916 11:12:27.347719   40135 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0916 11:12:27.347724   40135 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0916 11:12:27.347731   40135 command_runner.go:130] > # additional_devices = [
	I0916 11:12:27.347734   40135 command_runner.go:130] > # ]
	I0916 11:12:27.347741   40135 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0916 11:12:27.347747   40135 command_runner.go:130] > # cdi_spec_dirs = [
	I0916 11:12:27.347751   40135 command_runner.go:130] > # 	"/etc/cdi",
	I0916 11:12:27.347757   40135 command_runner.go:130] > # 	"/var/run/cdi",
	I0916 11:12:27.347761   40135 command_runner.go:130] > # ]
	I0916 11:12:27.347769   40135 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0916 11:12:27.347777   40135 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0916 11:12:27.347784   40135 command_runner.go:130] > # Defaults to false.
	I0916 11:12:27.347789   40135 command_runner.go:130] > # device_ownership_from_security_context = false
	I0916 11:12:27.347798   40135 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0916 11:12:27.347806   40135 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0916 11:12:27.347811   40135 command_runner.go:130] > # hooks_dir = [
	I0916 11:12:27.347816   40135 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0916 11:12:27.347821   40135 command_runner.go:130] > # ]
	I0916 11:12:27.347827   40135 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0916 11:12:27.347835   40135 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0916 11:12:27.347840   40135 command_runner.go:130] > # its default mounts from the following two files:
	I0916 11:12:27.347843   40135 command_runner.go:130] > #
	I0916 11:12:27.347851   40135 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0916 11:12:27.347858   40135 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0916 11:12:27.347865   40135 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0916 11:12:27.347868   40135 command_runner.go:130] > #
	I0916 11:12:27.347881   40135 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0916 11:12:27.347887   40135 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0916 11:12:27.347895   40135 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0916 11:12:27.347902   40135 command_runner.go:130] > #      only add mounts it finds in this file.
	I0916 11:12:27.347905   40135 command_runner.go:130] > #
	I0916 11:12:27.347912   40135 command_runner.go:130] > # default_mounts_file = ""
	I0916 11:12:27.347917   40135 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0916 11:12:27.347925   40135 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0916 11:12:27.347931   40135 command_runner.go:130] > pids_limit = 1024
	I0916 11:12:27.347937   40135 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0916 11:12:27.347945   40135 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0916 11:12:27.347954   40135 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0916 11:12:27.347962   40135 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0916 11:12:27.347968   40135 command_runner.go:130] > # log_size_max = -1
	I0916 11:12:27.347975   40135 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0916 11:12:27.347981   40135 command_runner.go:130] > # log_to_journald = false
	I0916 11:12:27.347987   40135 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0916 11:12:27.347994   40135 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0916 11:12:27.347999   40135 command_runner.go:130] > # Path to directory for container attach sockets.
	I0916 11:12:27.348006   40135 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0916 11:12:27.348012   40135 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0916 11:12:27.348018   40135 command_runner.go:130] > # bind_mount_prefix = ""
	I0916 11:12:27.348024   40135 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0916 11:12:27.348030   40135 command_runner.go:130] > # read_only = false
	I0916 11:12:27.348036   40135 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0916 11:12:27.348044   40135 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0916 11:12:27.348050   40135 command_runner.go:130] > # live configuration reload.
	I0916 11:12:27.348054   40135 command_runner.go:130] > # log_level = "info"
	I0916 11:12:27.348062   40135 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0916 11:12:27.348068   40135 command_runner.go:130] > # This option supports live configuration reload.
	I0916 11:12:27.348073   40135 command_runner.go:130] > # log_filter = ""
	I0916 11:12:27.348079   40135 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0916 11:12:27.348087   40135 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0916 11:12:27.348093   40135 command_runner.go:130] > # separated by comma.
	I0916 11:12:27.348100   40135 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0916 11:12:27.348106   40135 command_runner.go:130] > # uid_mappings = ""
	I0916 11:12:27.348112   40135 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0916 11:12:27.348118   40135 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0916 11:12:27.348124   40135 command_runner.go:130] > # separated by comma.
	I0916 11:12:27.348132   40135 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0916 11:12:27.348138   40135 command_runner.go:130] > # gid_mappings = ""
	I0916 11:12:27.348144   40135 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0916 11:12:27.348152   40135 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0916 11:12:27.348158   40135 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0916 11:12:27.348168   40135 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0916 11:12:27.348175   40135 command_runner.go:130] > # minimum_mappable_uid = -1
	I0916 11:12:27.348181   40135 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0916 11:12:27.348189   40135 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0916 11:12:27.348197   40135 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0916 11:12:27.348204   40135 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0916 11:12:27.348210   40135 command_runner.go:130] > # minimum_mappable_gid = -1
	I0916 11:12:27.348216   40135 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0916 11:12:27.348224   40135 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0916 11:12:27.348230   40135 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0916 11:12:27.348237   40135 command_runner.go:130] > # ctr_stop_timeout = 30
	I0916 11:12:27.348243   40135 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0916 11:12:27.348250   40135 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0916 11:12:27.348257   40135 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0916 11:12:27.348262   40135 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0916 11:12:27.348268   40135 command_runner.go:130] > drop_infra_ctr = false
	I0916 11:12:27.348274   40135 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0916 11:12:27.348281   40135 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0916 11:12:27.348288   40135 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0916 11:12:27.348294   40135 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0916 11:12:27.348301   40135 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0916 11:12:27.348308   40135 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0916 11:12:27.348314   40135 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0916 11:12:27.348321   40135 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0916 11:12:27.348324   40135 command_runner.go:130] > # shared_cpuset = ""
	I0916 11:12:27.348330   40135 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0916 11:12:27.348336   40135 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0916 11:12:27.348341   40135 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0916 11:12:27.348349   40135 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0916 11:12:27.348354   40135 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0916 11:12:27.348359   40135 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0916 11:12:27.348368   40135 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0916 11:12:27.348371   40135 command_runner.go:130] > # enable_criu_support = false
	I0916 11:12:27.348377   40135 command_runner.go:130] > # Enable/disable the generation of the container,
	I0916 11:12:27.348385   40135 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0916 11:12:27.348389   40135 command_runner.go:130] > # enable_pod_events = false
	I0916 11:12:27.348397   40135 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0916 11:12:27.348410   40135 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0916 11:12:27.348416   40135 command_runner.go:130] > # default_runtime = "runc"
	I0916 11:12:27.348421   40135 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0916 11:12:27.348430   40135 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0916 11:12:27.348443   40135 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0916 11:12:27.348450   40135 command_runner.go:130] > # creation as a file is not desired either.
	I0916 11:12:27.348458   40135 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0916 11:12:27.348463   40135 command_runner.go:130] > # the hostname is being managed dynamically.
	I0916 11:12:27.348470   40135 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0916 11:12:27.348473   40135 command_runner.go:130] > # ]
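For reference, a minimal sketch of how this option could be filled in for the /etc/hostname case described above; this is illustrative only and is not part of this run's configuration:

	absent_mount_sources_to_reject = [
		"/etc/hostname",
	]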
	I0916 11:12:27.348487   40135 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0916 11:12:27.348493   40135 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0916 11:12:27.348501   40135 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0916 11:12:27.348508   40135 command_runner.go:130] > # Each entry in the table should follow the format:
	I0916 11:12:27.348511   40135 command_runner.go:130] > #
	I0916 11:12:27.348516   40135 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0916 11:12:27.348522   40135 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0916 11:12:27.348540   40135 command_runner.go:130] > # runtime_type = "oci"
	I0916 11:12:27.348546   40135 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0916 11:12:27.348551   40135 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0916 11:12:27.348557   40135 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0916 11:12:27.348562   40135 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0916 11:12:27.348568   40135 command_runner.go:130] > # monitor_env = []
	I0916 11:12:27.348573   40135 command_runner.go:130] > # privileged_without_host_devices = false
	I0916 11:12:27.348579   40135 command_runner.go:130] > # allowed_annotations = []
	I0916 11:12:27.348584   40135 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0916 11:12:27.348590   40135 command_runner.go:130] > # Where:
	I0916 11:12:27.348595   40135 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0916 11:12:27.348603   40135 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0916 11:12:27.348612   40135 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0916 11:12:27.348618   40135 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0916 11:12:27.348623   40135 command_runner.go:130] > #   in $PATH.
	I0916 11:12:27.348629   40135 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0916 11:12:27.348636   40135 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0916 11:12:27.348642   40135 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0916 11:12:27.348647   40135 command_runner.go:130] > #   state.
	I0916 11:12:27.348654   40135 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0916 11:12:27.348662   40135 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0916 11:12:27.348670   40135 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0916 11:12:27.348676   40135 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0916 11:12:27.348682   40135 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0916 11:12:27.348690   40135 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0916 11:12:27.348696   40135 command_runner.go:130] > #   The currently recognized values are:
	I0916 11:12:27.348704   40135 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0916 11:12:27.348713   40135 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0916 11:12:27.348721   40135 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0916 11:12:27.348727   40135 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0916 11:12:27.348736   40135 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0916 11:12:27.348744   40135 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0916 11:12:27.348751   40135 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0916 11:12:27.348759   40135 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0916 11:12:27.348766   40135 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0916 11:12:27.348774   40135 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0916 11:12:27.348781   40135 command_runner.go:130] > #   deprecated option "conmon".
	I0916 11:12:27.348788   40135 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0916 11:12:27.348795   40135 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0916 11:12:27.348801   40135 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0916 11:12:27.348808   40135 command_runner.go:130] > #   should be moved to the container's cgroup
	I0916 11:12:27.348814   40135 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0916 11:12:27.348820   40135 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0916 11:12:27.348827   40135 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0916 11:12:27.348834   40135 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0916 11:12:27.348837   40135 command_runner.go:130] > #
	I0916 11:12:27.348842   40135 command_runner.go:130] > # Using the seccomp notifier feature:
	I0916 11:12:27.348846   40135 command_runner.go:130] > #
	I0916 11:12:27.348852   40135 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0916 11:12:27.348859   40135 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0916 11:12:27.348865   40135 command_runner.go:130] > #
	I0916 11:12:27.348874   40135 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0916 11:12:27.348882   40135 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0916 11:12:27.348886   40135 command_runner.go:130] > #
	I0916 11:12:27.348894   40135 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0916 11:12:27.348898   40135 command_runner.go:130] > # feature.
	I0916 11:12:27.348902   40135 command_runner.go:130] > #
	I0916 11:12:27.348908   40135 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0916 11:12:27.348917   40135 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0916 11:12:27.348925   40135 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0916 11:12:27.348933   40135 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0916 11:12:27.348940   40135 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0916 11:12:27.348949   40135 command_runner.go:130] > #
	I0916 11:12:27.348956   40135 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0916 11:12:27.348964   40135 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0916 11:12:27.348967   40135 command_runner.go:130] > #
	I0916 11:12:27.348974   40135 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0916 11:12:27.348981   40135 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0916 11:12:27.348984   40135 command_runner.go:130] > #
	I0916 11:12:27.348992   40135 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0916 11:12:27.348998   40135 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0916 11:12:27.349003   40135 command_runner.go:130] > # limitation.
	I0916 11:12:27.349008   40135 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0916 11:12:27.349014   40135 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0916 11:12:27.349018   40135 command_runner.go:130] > runtime_type = "oci"
	I0916 11:12:27.349024   40135 command_runner.go:130] > runtime_root = "/run/runc"
	I0916 11:12:27.349028   40135 command_runner.go:130] > runtime_config_path = ""
	I0916 11:12:27.349034   40135 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0916 11:12:27.349038   40135 command_runner.go:130] > monitor_cgroup = "pod"
	I0916 11:12:27.349044   40135 command_runner.go:130] > monitor_exec_cgroup = ""
	I0916 11:12:27.349048   40135 command_runner.go:130] > monitor_env = [
	I0916 11:12:27.349056   40135 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0916 11:12:27.349059   40135 command_runner.go:130] > ]
	I0916 11:12:27.349064   40135 command_runner.go:130] > privileged_without_host_devices = false
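As a hedged illustration of the runtimes table documented above, an additional handler could be declared next to runc. The crun name, binary path and root directory below are assumptions for the sketch, not values from this run; the allowed_annotations entry shows how the seccomp notifier annotation would be permitted for that handler:

	[crio.runtime.runtimes.crun]
	# Hypothetical second OCI runtime; path and root are assumptions.
	runtime_path = "/usr/bin/crun"
	runtime_type = "oci"
	runtime_root = "/run/crun"
	monitor_path = "/usr/libexec/crio/conmon"
	monitor_cgroup = "pod"
	# Permit the seccomp notifier annotation for pods using this handler.
	allowed_annotations = [
		"io.kubernetes.cri-o.seccompNotifierAction",
	]

A pod would then opt into this handler through its RuntimeClass, and, as noted above, the notifier only behaves as described when the pod's restartPolicy is Never.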
	I0916 11:12:27.349084   40135 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0916 11:12:27.349094   40135 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0916 11:12:27.349101   40135 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0916 11:12:27.349110   40135 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0916 11:12:27.349120   40135 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0916 11:12:27.349140   40135 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0916 11:12:27.349157   40135 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0916 11:12:27.349169   40135 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0916 11:12:27.349177   40135 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0916 11:12:27.349187   40135 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0916 11:12:27.349192   40135 command_runner.go:130] > # Example:
	I0916 11:12:27.349198   40135 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0916 11:12:27.349204   40135 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0916 11:12:27.349209   40135 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0916 11:12:27.349216   40135 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0916 11:12:27.349220   40135 command_runner.go:130] > # cpuset = 0
	I0916 11:12:27.349224   40135 command_runner.go:130] > # cpushares = "0-1"
	I0916 11:12:27.349229   40135 command_runner.go:130] > # Where:
	I0916 11:12:27.349234   40135 command_runner.go:130] > # The workload name is workload-type.
	I0916 11:12:27.349242   40135 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0916 11:12:27.349250   40135 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0916 11:12:27.349255   40135 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0916 11:12:27.349265   40135 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0916 11:12:27.349272   40135 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0916 11:12:27.349279   40135 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0916 11:12:27.349286   40135 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0916 11:12:27.349292   40135 command_runner.go:130] > # Default value is set to true
	I0916 11:12:27.349296   40135 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0916 11:12:27.349303   40135 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0916 11:12:27.349308   40135 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0916 11:12:27.349314   40135 command_runner.go:130] > # Default value is set to 'false'
	I0916 11:12:27.349318   40135 command_runner.go:130] > # disable_hostport_mapping = false
	I0916 11:12:27.349324   40135 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0916 11:12:27.349330   40135 command_runner.go:130] > #
	I0916 11:12:27.349336   40135 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0916 11:12:27.349342   40135 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0916 11:12:27.349348   40135 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0916 11:12:27.349354   40135 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0916 11:12:27.349359   40135 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0916 11:12:27.349363   40135 command_runner.go:130] > [crio.image]
	I0916 11:12:27.349368   40135 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0916 11:12:27.349372   40135 command_runner.go:130] > # default_transport = "docker://"
	I0916 11:12:27.349378   40135 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0916 11:12:27.349384   40135 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0916 11:12:27.349387   40135 command_runner.go:130] > # global_auth_file = ""
	I0916 11:12:27.349392   40135 command_runner.go:130] > # The image used to instantiate infra containers.
	I0916 11:12:27.349396   40135 command_runner.go:130] > # This option supports live configuration reload.
	I0916 11:12:27.349400   40135 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0916 11:12:27.349406   40135 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0916 11:12:27.349411   40135 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0916 11:12:27.349415   40135 command_runner.go:130] > # This option supports live configuration reload.
	I0916 11:12:27.349419   40135 command_runner.go:130] > # pause_image_auth_file = ""
	I0916 11:12:27.349424   40135 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0916 11:12:27.349430   40135 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0916 11:12:27.349435   40135 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0916 11:12:27.349441   40135 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0916 11:12:27.349445   40135 command_runner.go:130] > # pause_command = "/pause"
	I0916 11:12:27.349450   40135 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0916 11:12:27.349456   40135 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0916 11:12:27.349461   40135 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0916 11:12:27.349468   40135 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0916 11:12:27.349476   40135 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0916 11:12:27.349482   40135 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0916 11:12:27.349488   40135 command_runner.go:130] > # pinned_images = [
	I0916 11:12:27.349491   40135 command_runner.go:130] > # ]
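For illustration, and following the pattern rules described above, a pinned_images list mixing an exact match with a trailing-wildcard glob might look like the fragment below; the image names are examples, not values taken from this configuration:

	pinned_images = [
		"registry.k8s.io/pause:3.10",
		"registry.k8s.io/kube-apiserver*",
	]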
	I0916 11:12:27.349498   40135 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0916 11:12:27.349506   40135 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0916 11:12:27.349513   40135 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0916 11:12:27.349525   40135 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0916 11:12:27.349533   40135 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0916 11:12:27.349539   40135 command_runner.go:130] > # signature_policy = ""
	I0916 11:12:27.349544   40135 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0916 11:12:27.349553   40135 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0916 11:12:27.349561   40135 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0916 11:12:27.349567   40135 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I0916 11:12:27.349575   40135 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0916 11:12:27.349579   40135 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0916 11:12:27.349587   40135 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0916 11:12:27.349595   40135 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0916 11:12:27.349599   40135 command_runner.go:130] > # changing them here.
	I0916 11:12:27.349610   40135 command_runner.go:130] > # insecure_registries = [
	I0916 11:12:27.349613   40135 command_runner.go:130] > # ]
	I0916 11:12:27.349620   40135 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0916 11:12:27.349626   40135 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0916 11:12:27.349630   40135 command_runner.go:130] > # image_volumes = "mkdir"
	I0916 11:12:27.349635   40135 command_runner.go:130] > # Temporary directory to use for storing big files
	I0916 11:12:27.349642   40135 command_runner.go:130] > # big_files_temporary_dir = ""
	I0916 11:12:27.349648   40135 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0916 11:12:27.349653   40135 command_runner.go:130] > # CNI plugins.
	I0916 11:12:27.349657   40135 command_runner.go:130] > [crio.network]
	I0916 11:12:27.349663   40135 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0916 11:12:27.349670   40135 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0916 11:12:27.349674   40135 command_runner.go:130] > # cni_default_network = ""
	I0916 11:12:27.349688   40135 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0916 11:12:27.349692   40135 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0916 11:12:27.349700   40135 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0916 11:12:27.349706   40135 command_runner.go:130] > # plugin_dirs = [
	I0916 11:12:27.349710   40135 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0916 11:12:27.349716   40135 command_runner.go:130] > # ]
	I0916 11:12:27.349721   40135 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0916 11:12:27.349727   40135 command_runner.go:130] > [crio.metrics]
	I0916 11:12:27.349732   40135 command_runner.go:130] > # Globally enable or disable metrics support.
	I0916 11:12:27.349739   40135 command_runner.go:130] > enable_metrics = true
	I0916 11:12:27.349743   40135 command_runner.go:130] > # Specify enabled metrics collectors.
	I0916 11:12:27.349751   40135 command_runner.go:130] > # Per default all metrics are enabled.
	I0916 11:12:27.349757   40135 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0916 11:12:27.349765   40135 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0916 11:12:27.349772   40135 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0916 11:12:27.349777   40135 command_runner.go:130] > # metrics_collectors = [
	I0916 11:12:27.349782   40135 command_runner.go:130] > # 	"operations",
	I0916 11:12:27.349787   40135 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0916 11:12:27.349793   40135 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0916 11:12:27.349798   40135 command_runner.go:130] > # 	"operations_errors",
	I0916 11:12:27.349804   40135 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0916 11:12:27.349808   40135 command_runner.go:130] > # 	"image_pulls_by_name",
	I0916 11:12:27.349814   40135 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0916 11:12:27.349818   40135 command_runner.go:130] > # 	"image_pulls_failures",
	I0916 11:12:27.349824   40135 command_runner.go:130] > # 	"image_pulls_successes",
	I0916 11:12:27.349828   40135 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0916 11:12:27.349835   40135 command_runner.go:130] > # 	"image_layer_reuse",
	I0916 11:12:27.349839   40135 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0916 11:12:27.349845   40135 command_runner.go:130] > # 	"containers_oom_total",
	I0916 11:12:27.349850   40135 command_runner.go:130] > # 	"containers_oom",
	I0916 11:12:27.349856   40135 command_runner.go:130] > # 	"processes_defunct",
	I0916 11:12:27.349860   40135 command_runner.go:130] > # 	"operations_total",
	I0916 11:12:27.349867   40135 command_runner.go:130] > # 	"operations_latency_seconds",
	I0916 11:12:27.349875   40135 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0916 11:12:27.349882   40135 command_runner.go:130] > # 	"operations_errors_total",
	I0916 11:12:27.349886   40135 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0916 11:12:27.349892   40135 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0916 11:12:27.349897   40135 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0916 11:12:27.349903   40135 command_runner.go:130] > # 	"image_pulls_success_total",
	I0916 11:12:27.349907   40135 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0916 11:12:27.349914   40135 command_runner.go:130] > # 	"containers_oom_count_total",
	I0916 11:12:27.349919   40135 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0916 11:12:27.349925   40135 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0916 11:12:27.349928   40135 command_runner.go:130] > # ]
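As a sketch of restricting collection instead of keeping every collector enabled by default, a subset of the names listed above could be set explicitly; this is illustrative only, and the run above leaves the default in place:

	[crio.metrics]
	enable_metrics = true
	# Collect only these metrics instead of the full default set.
	metrics_collectors = [
		"operations",
		"image_pulls_failures",
		"containers_oom_total",
	]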
	I0916 11:12:27.349934   40135 command_runner.go:130] > # The port on which the metrics server will listen.
	I0916 11:12:27.349939   40135 command_runner.go:130] > # metrics_port = 9090
	I0916 11:12:27.349944   40135 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0916 11:12:27.349950   40135 command_runner.go:130] > # metrics_socket = ""
	I0916 11:12:27.349954   40135 command_runner.go:130] > # The certificate for the secure metrics server.
	I0916 11:12:27.349962   40135 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0916 11:12:27.349971   40135 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0916 11:12:27.349977   40135 command_runner.go:130] > # certificate on any modification event.
	I0916 11:12:27.349981   40135 command_runner.go:130] > # metrics_cert = ""
	I0916 11:12:27.349988   40135 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0916 11:12:27.349994   40135 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0916 11:12:27.349999   40135 command_runner.go:130] > # metrics_key = ""
	I0916 11:12:27.350005   40135 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0916 11:12:27.350010   40135 command_runner.go:130] > [crio.tracing]
	I0916 11:12:27.350016   40135 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0916 11:12:27.350029   40135 command_runner.go:130] > # enable_tracing = false
	I0916 11:12:27.350034   40135 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0916 11:12:27.350041   40135 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0916 11:12:27.350048   40135 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0916 11:12:27.350054   40135 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0916 11:12:27.350058   40135 command_runner.go:130] > # CRI-O NRI configuration.
	I0916 11:12:27.350064   40135 command_runner.go:130] > [crio.nri]
	I0916 11:12:27.350068   40135 command_runner.go:130] > # Globally enable or disable NRI.
	I0916 11:12:27.350074   40135 command_runner.go:130] > # enable_nri = false
	I0916 11:12:27.350079   40135 command_runner.go:130] > # NRI socket to listen on.
	I0916 11:12:27.350085   40135 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0916 11:12:27.350090   40135 command_runner.go:130] > # NRI plugin directory to use.
	I0916 11:12:27.350096   40135 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0916 11:12:27.350101   40135 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0916 11:12:27.350108   40135 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0916 11:12:27.350114   40135 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0916 11:12:27.350120   40135 command_runner.go:130] > # nri_disable_connections = false
	I0916 11:12:27.350126   40135 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0916 11:12:27.350132   40135 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0916 11:12:27.350137   40135 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0916 11:12:27.350144   40135 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0916 11:12:27.350150   40135 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0916 11:12:27.350155   40135 command_runner.go:130] > [crio.stats]
	I0916 11:12:27.350161   40135 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0916 11:12:27.350168   40135 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0916 11:12:27.350172   40135 command_runner.go:130] > # stats_collection_period = 0
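Taken together, the non-default values echoed in the dump above correspond roughly to the following override fragment. CRI-O typically reads such overrides from a drop-in directory such as /etc/crio/crio.conf.d/; the drop-in location is an assumption here, while the values themselves are the ones shown above:

	[crio.runtime]
	conmon = "/usr/libexec/crio/conmon"
	conmon_cgroup = "pod"
	cgroup_manager = "cgroupfs"
	pids_limit = 1024
	seccomp_use_default_when_empty = false
	drop_infra_ctr = false
	pinns_path = "/usr/bin/pinns"

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"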
	I0916 11:12:27.350235   40135 cni.go:84] Creating CNI manager for ""
	I0916 11:12:27.350246   40135 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0916 11:12:27.350255   40135 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 11:12:27.350273   40135 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.32 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-736061 NodeName:multinode-736061 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.32"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.32 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 11:12:27.350419   40135 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.32
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-736061"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.32
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.32"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 11:12:27.350474   40135 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 11:12:27.361566   40135 command_runner.go:130] > kubeadm
	I0916 11:12:27.361580   40135 command_runner.go:130] > kubectl
	I0916 11:12:27.361584   40135 command_runner.go:130] > kubelet
	I0916 11:12:27.361736   40135 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 11:12:27.361782   40135 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 11:12:27.372014   40135 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0916 11:12:27.391186   40135 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 11:12:27.408090   40135 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0916 11:12:27.425238   40135 ssh_runner.go:195] Run: grep 192.168.39.32	control-plane.minikube.internal$ /etc/hosts
	I0916 11:12:27.429573   40135 command_runner.go:130] > 192.168.39.32	control-plane.minikube.internal
	I0916 11:12:27.429655   40135 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:12:27.566945   40135 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:12:27.581910   40135 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061 for IP: 192.168.39.32
	I0916 11:12:27.581936   40135 certs.go:194] generating shared ca certs ...
	I0916 11:12:27.581957   40135 certs.go:226] acquiring lock for ca certs: {Name:mkc5eb18b90c7d501da61b378999fd65b785fd5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:12:27.582115   40135 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key
	I0916 11:12:27.582167   40135 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key
	I0916 11:12:27.582177   40135 certs.go:256] generating profile certs ...
	I0916 11:12:27.582249   40135 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/client.key
	I0916 11:12:27.582305   40135 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/apiserver.key.7afb17c7
	I0916 11:12:27.582343   40135 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/proxy-client.key
	I0916 11:12:27.582354   40135 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 11:12:27.582365   40135 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 11:12:27.582378   40135 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 11:12:27.582390   40135 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 11:12:27.582400   40135 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 11:12:27.582410   40135 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 11:12:27.582423   40135 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 11:12:27.582436   40135 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 11:12:27.582483   40135 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem (1338 bytes)
	W0916 11:12:27.582509   40135 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203_empty.pem, impossibly tiny 0 bytes
	I0916 11:12:27.582518   40135 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 11:12:27.582550   40135 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem (1082 bytes)
	I0916 11:12:27.582574   40135 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem (1123 bytes)
	I0916 11:12:27.582595   40135 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem (1679 bytes)
	I0916 11:12:27.582631   40135 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem (1708 bytes)
	I0916 11:12:27.582655   40135 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:12:27.582667   40135 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem -> /usr/share/ca-certificates/11203.pem
	I0916 11:12:27.582679   40135 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem -> /usr/share/ca-certificates/112032.pem
	I0916 11:12:27.583263   40135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 11:12:27.609531   40135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 11:12:27.634944   40135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 11:12:27.660493   40135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 11:12:27.685235   40135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0916 11:12:27.708765   40135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 11:12:27.733626   40135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 11:12:27.757830   40135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 11:12:27.782527   40135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 11:12:27.806733   40135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem --> /usr/share/ca-certificates/11203.pem (1338 bytes)
	I0916 11:12:27.831538   40135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem --> /usr/share/ca-certificates/112032.pem (1708 bytes)
	I0916 11:12:27.856224   40135 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 11:12:27.873368   40135 ssh_runner.go:195] Run: openssl version
	I0916 11:12:27.879163   40135 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0916 11:12:27.879396   40135 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11203.pem && ln -fs /usr/share/ca-certificates/11203.pem /etc/ssl/certs/11203.pem"
	I0916 11:12:27.890038   40135 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11203.pem
	I0916 11:12:27.894595   40135 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11203.pem
	I0916 11:12:27.894654   40135 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11203.pem
	I0916 11:12:27.894716   40135 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11203.pem
	I0916 11:12:27.919619   40135 command_runner.go:130] > 51391683
	I0916 11:12:27.920420   40135 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11203.pem /etc/ssl/certs/51391683.0"
	I0916 11:12:27.932003   40135 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112032.pem && ln -fs /usr/share/ca-certificates/112032.pem /etc/ssl/certs/112032.pem"
	I0916 11:12:27.943754   40135 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112032.pem
	I0916 11:12:27.948079   40135 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112032.pem
	I0916 11:12:27.948103   40135 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112032.pem
	I0916 11:12:27.948147   40135 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112032.pem
	I0916 11:12:27.953662   40135 command_runner.go:130] > 3ec20f2e
	I0916 11:12:27.953740   40135 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112032.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 11:12:27.963952   40135 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 11:12:27.975088   40135 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:12:27.979448   40135 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 16 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:12:27.979467   40135 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:12:27.979508   40135 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:12:27.984970   40135 command_runner.go:130] > b5213941
	I0916 11:12:27.985201   40135 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 11:12:27.995006   40135 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 11:12:27.999529   40135 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 11:12:27.999557   40135 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0916 11:12:27.999566   40135 command_runner.go:130] > Device: 253,1	Inode: 2101800     Links: 1
	I0916 11:12:27.999605   40135 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 11:12:27.999620   40135 command_runner.go:130] > Access: 2024-09-16 11:05:45.018725064 +0000
	I0916 11:12:27.999631   40135 command_runner.go:130] > Modify: 2024-09-16 11:05:45.018725064 +0000
	I0916 11:12:27.999639   40135 command_runner.go:130] > Change: 2024-09-16 11:05:45.018725064 +0000
	I0916 11:12:27.999648   40135 command_runner.go:130] >  Birth: 2024-09-16 11:05:45.018725064 +0000
	I0916 11:12:27.999698   40135 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0916 11:12:28.005429   40135 command_runner.go:130] > Certificate will not expire
	I0916 11:12:28.005492   40135 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0916 11:12:28.010927   40135 command_runner.go:130] > Certificate will not expire
	I0916 11:12:28.011069   40135 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0916 11:12:28.016675   40135 command_runner.go:130] > Certificate will not expire
	I0916 11:12:28.016733   40135 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0916 11:12:28.022268   40135 command_runner.go:130] > Certificate will not expire
	I0916 11:12:28.022386   40135 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0916 11:12:28.027951   40135 command_runner.go:130] > Certificate will not expire
	I0916 11:12:28.028023   40135 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0916 11:12:28.033400   40135 command_runner.go:130] > Certificate will not expire
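The repeated "openssl x509 -noout -in <cert> -checkend 86400" runs above ask whether each control-plane certificate expires within the next 86400 seconds (24 hours); "Certificate will not expire" means the certificate is valid for at least another day. A minimal, purely illustrative equivalent of that check using Go's standard library (this is not how minikube itself performs it; the certificate path is taken from the log above):

// Illustrative only: report whether a PEM certificate expires within 24h,
// mirroring `openssl x509 -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Same question openssl -checkend 86400 answers: is NotAfter within 24h?
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}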
	I0916 11:12:28.033473   40135 kubeadm.go:392] StartCluster: {Name:multinode-736061 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:multinode-736061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.32 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.215 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.60 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:12:28.033571   40135 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 11:12:28.033610   40135 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 11:12:28.072849   40135 command_runner.go:130] > 840a587a0926e82c37a05beb15b8a2038534eaa736ce1cfed1c0eecaa13a5cdd
	I0916 11:12:28.072892   40135 command_runner.go:130] > 02223ab1824986d0b8986be277df48e11a9e023715892ea637bf405f10af7198
	I0916 11:12:28.072902   40135 command_runner.go:130] > 7a89ff755837adc838b2681df4fdb3f0cd2733af338156c540403f460fc59bc0
	I0916 11:12:28.072914   40135 command_runner.go:130] > f8c55edbe2173ff25f684104100b9d6d5b8d80563dadb06b4c24bb06b2bf68ee
	I0916 11:12:28.072924   40135 command_runner.go:130] > b76d5d4ad419a79a8aa910e3cea3029c12e6a90c7750317c5fbe1c40b43aa762
	I0916 11:12:28.072933   40135 command_runner.go:130] > 769a75ad1934a20389b16c528015b4fd90be4bf92209ba2509e2cdf2e57e2a24
	I0916 11:12:28.072942   40135 command_runner.go:130] > d53f9aec7bc35845a9064135b7cf6154a190d2b15edab056cfd8296575fb25ba
	I0916 11:12:28.072951   40135 command_runner.go:130] > ed73e9089f633c6b3f26009f325ee6cfdd62ef60de41cbb639ec4aa8b02b84a7
	I0916 11:12:28.072976   40135 cri.go:89] found id: "840a587a0926e82c37a05beb15b8a2038534eaa736ce1cfed1c0eecaa13a5cdd"
	I0916 11:12:28.072988   40135 cri.go:89] found id: "02223ab1824986d0b8986be277df48e11a9e023715892ea637bf405f10af7198"
	I0916 11:12:28.072993   40135 cri.go:89] found id: "7a89ff755837adc838b2681df4fdb3f0cd2733af338156c540403f460fc59bc0"
	I0916 11:12:28.072998   40135 cri.go:89] found id: "f8c55edbe2173ff25f684104100b9d6d5b8d80563dadb06b4c24bb06b2bf68ee"
	I0916 11:12:28.073002   40135 cri.go:89] found id: "b76d5d4ad419a79a8aa910e3cea3029c12e6a90c7750317c5fbe1c40b43aa762"
	I0916 11:12:28.073007   40135 cri.go:89] found id: "769a75ad1934a20389b16c528015b4fd90be4bf92209ba2509e2cdf2e57e2a24"
	I0916 11:12:28.073010   40135 cri.go:89] found id: "d53f9aec7bc35845a9064135b7cf6154a190d2b15edab056cfd8296575fb25ba"
	I0916 11:12:28.073014   40135 cri.go:89] found id: "ed73e9089f633c6b3f26009f325ee6cfdd62ef60de41cbb639ec4aa8b02b84a7"
	I0916 11:12:28.073018   40135 cri.go:89] found id: ""
	I0916 11:12:28.073069   40135 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Sep 16 11:14:08 multinode-736061 crio[2989]: time="2024-09-16 11:14:08.965984498Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485248965944433,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b7a9da4a-238b-46ef-a890-f196c38289c8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 11:14:08 multinode-736061 crio[2989]: time="2024-09-16 11:14:08.967032464Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2e1d181a-a494-4a56-bb2a-3961a46d8e1e name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:14:08 multinode-736061 crio[2989]: time="2024-09-16 11:14:08.967203879Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2e1d181a-a494-4a56-bb2a-3961a46d8e1e name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:14:08 multinode-736061 crio[2989]: time="2024-09-16 11:14:08.968124796Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:522d3b85a45483f18f7f58bd8b3e8d13be8e39fc300735f74f42f329fc76b41c,PodSandboxId:c27596adc976965a1472bc763ada489b272e6b61d2f34ac0643f3897280bfd19,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726485188158372438,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-g9fqk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0dd08783-fcfd-441f-8bda-c82c0c15173e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34160c655e5ab02a4a54bd049c7fbf32b5f1f714b64212932d9c2c16e2d4bf25,PodSandboxId:d6609b6804e2135dcae87f33c158bf648f918c21cfda5ee80259df9d474184ae,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726485154742693212,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qb4tq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 933f0749-7868-4e96-9b8e-67005545bbc5,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35a7839cd57d08487c669b6c86350b4a2f5f3b241a502106911bebe57fc8af36,PodSandboxId:78066c652dd8faaf3adaf02ffc316c8c72ab996c03a8aa8fe4284f1c3783c373,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726485154656393416,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nlhl2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ea84b9d-f364-4e26-8dc8-44c3b4d92417,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87a99d0015cbca7c7d38632c9cf9a1a9a00ce88cbe573c612f87fc9ca64ae309,PodSandboxId:b06a4343bbdd3a9bb94de977a2f0df301ae567da38dde74613b8372b9badaa2e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726485154505906722,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e515ac2-bcf5-4a0f-a3af-b4ee4e03a534,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d81e17eebccf4b84f735e1cafbb3e77e8e76abd8f4d62edf876aa5d55fd0b0d,PodSandboxId:fcfacdd69a46c5dab45cd0a3aa1fa9de4c957a839c412471e98012ee586cb38c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726485154436680778,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ftj9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa72720f-1c4a-46a2-a733-f411ccb6f628,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e7284c90c8c712246e8ddc247d88f7825f37f3e3c689acda276e3052c41850d,PodSandboxId:d9afb21537018b32c40d57ef0e3594c924f50d2bf3259377c7789677aadd1cc2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726485150640534512,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de66983060c1e167c6b9498eb8b0a025,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae1251600e6e87614f76ce47d26bb537d329e9c3726f16afd53b5917cabf6526,PodSandboxId:cd4168d0828d2e6cf60f35609bcafa8f36efd5728470fe9b5a5927f930be6967,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726485150608915249,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69d3e8c6e76d0bc1af3482326f7904d1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fa850b5495ffd4ec6303d58bd055944e4fd2d6bd860b6b70273e719497bdf2d,PodSandboxId:f4286a53710f269af6ca4f92b1992be516f9dedc66f945f30e7b0b95cdb2b85e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726485150554479561,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efede0e1597c8cbe70740f3169f7ec4a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:126fd7058d64d004c8ecb83905bc9e04f7dcd51ba92c467d2ec0175a0aedb8d4,PodSandboxId:113acd43d732ed38df48b77f813934192a2a1cfab0f438e8905c93feade283e7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726485150539003440,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94d3338940ee73a61a5075650d027904,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84517e6af45b4aaa0074acc2c9f529a29e3e476ef98b8a9b414655341b967c3b,PodSandboxId:779060032a611374116cc1e94df9c7e4a6c1443abec99f7abdef9464025a6169,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726484826321999428,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-g9fqk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0dd08783-fcfd-441f-8bda-c82c0c15173e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:840a587a0926e82c37a05beb15b8a2038534eaa736ce1cfed1c0eecaa13a5cdd,PodSandboxId:19286465f900afb5c6da745e2d4672e76d0bd87c33d561cb8eff8479eb72b5c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726484771766267901,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nlhl2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ea84b9d-f364-4e26-8dc8-44c3b4d92417,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02223ab1824986d0b8986be277df48e11a9e023715892ea637bf405f10af7198,PodSandboxId:01381d4d113d1f5aa2f6b3834d0194f5b4909599f84a8b29647fabd8f0fa5f7c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726484771695970386,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 5e515ac2-bcf5-4a0f-a3af-b4ee4e03a534,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a89ff755837adc838b2681df4fdb3f0cd2733af338156c540403f460fc59bc0,PodSandboxId:bd141ffff1a91e8b17d63ca2b8898889ab336bfde15ecdc79964aafaa1123465,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726484759715057078,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qb4tq,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 933f0749-7868-4e96-9b8e-67005545bbc5,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8c55edbe2173ff25f684104100b9d6d5b8d80563dadb06b4c24bb06b2bf68ee,PodSandboxId:cc5264d1c4b520dfff86dfc83596919ff23f9336b58339ea2a10f4492ba02b12,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726484759520373663,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ftj9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa72720f-1c4a-46a2-a733
-f411ccb6f628,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b76d5d4ad419a79a8aa910e3cea3029c12e6a90c7750317c5fbe1c40b43aa762,PodSandboxId:f771edf6fcef2eb32fd9b47472910c255ef2ee1f910f415da1ca1b8287fece1e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726484748620399557,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de66983060c1e167c6b9498eb8b0a025,}
,Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:769a75ad1934a20389b16c528015b4fd90be4bf92209ba2509e2cdf2e57e2a24,PodSandboxId:6237db42cfa9d23e63f16ff0ffa961fcbc1f9f25138e7c7fc11442d28aeecaff,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726484748618867302,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69d3e8c6e76d0bc1af3482326f7904d1,},Annotations:map[string]string{io.kubernetes.
container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d53f9aec7bc35845a9064135b7cf6154a190d2b15edab056cfd8296575fb25ba,PodSandboxId:c1754b1d745471f13f360eb1605866caa0a1b248d76224212a537736d73a9f20,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726484748609890980,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94d3338940ee73a61a5075650d027904,},Annotations:map[string]string{io
.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed73e9089f633c6b3f26009f325ee6cfdd62ef60de41cbb639ec4aa8b02b84a7,PodSandboxId:06f23871be821ecc20f321f0e711ab8e47f7a0308e547d26ab3365eaa22b0b27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726484748471628064,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efede0e1597c8cbe70740f3169f7ec4a,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2e1d181a-a494-4a56-bb2a-3961a46d8e1e name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:14:09 multinode-736061 crio[2989]: time="2024-09-16 11:14:09.018112956Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=83430e6e-7306-4213-b599-24c977265908 name=/runtime.v1.RuntimeService/Version
	Sep 16 11:14:09 multinode-736061 crio[2989]: time="2024-09-16 11:14:09.018396008Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=83430e6e-7306-4213-b599-24c977265908 name=/runtime.v1.RuntimeService/Version
	Sep 16 11:14:09 multinode-736061 crio[2989]: time="2024-09-16 11:14:09.019763151Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=500bae25-9780-4a9d-a6be-d4afa0b8466d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 11:14:09 multinode-736061 crio[2989]: time="2024-09-16 11:14:09.020135046Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485249020115773,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=500bae25-9780-4a9d-a6be-d4afa0b8466d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 11:14:09 multinode-736061 crio[2989]: time="2024-09-16 11:14:09.021087819Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9228c9c2-db7f-4c07-b692-ec3bfa6601d2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:14:09 multinode-736061 crio[2989]: time="2024-09-16 11:14:09.021193191Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9228c9c2-db7f-4c07-b692-ec3bfa6601d2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:14:09 multinode-736061 crio[2989]: time="2024-09-16 11:14:09.021651716Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:522d3b85a45483f18f7f58bd8b3e8d13be8e39fc300735f74f42f329fc76b41c,PodSandboxId:c27596adc976965a1472bc763ada489b272e6b61d2f34ac0643f3897280bfd19,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726485188158372438,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-g9fqk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0dd08783-fcfd-441f-8bda-c82c0c15173e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34160c655e5ab02a4a54bd049c7fbf32b5f1f714b64212932d9c2c16e2d4bf25,PodSandboxId:d6609b6804e2135dcae87f33c158bf648f918c21cfda5ee80259df9d474184ae,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726485154742693212,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qb4tq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 933f0749-7868-4e96-9b8e-67005545bbc5,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35a7839cd57d08487c669b6c86350b4a2f5f3b241a502106911bebe57fc8af36,PodSandboxId:78066c652dd8faaf3adaf02ffc316c8c72ab996c03a8aa8fe4284f1c3783c373,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726485154656393416,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nlhl2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ea84b9d-f364-4e26-8dc8-44c3b4d92417,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87a99d0015cbca7c7d38632c9cf9a1a9a00ce88cbe573c612f87fc9ca64ae309,PodSandboxId:b06a4343bbdd3a9bb94de977a2f0df301ae567da38dde74613b8372b9badaa2e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726485154505906722,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e515ac2-bcf5-4a0f-a3af-b4ee4e03a534,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d81e17eebccf4b84f735e1cafbb3e77e8e76abd8f4d62edf876aa5d55fd0b0d,PodSandboxId:fcfacdd69a46c5dab45cd0a3aa1fa9de4c957a839c412471e98012ee586cb38c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726485154436680778,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ftj9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa72720f-1c4a-46a2-a733-f411ccb6f628,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e7284c90c8c712246e8ddc247d88f7825f37f3e3c689acda276e3052c41850d,PodSandboxId:d9afb21537018b32c40d57ef0e3594c924f50d2bf3259377c7789677aadd1cc2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726485150640534512,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de66983060c1e167c6b9498eb8b0a025,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae1251600e6e87614f76ce47d26bb537d329e9c3726f16afd53b5917cabf6526,PodSandboxId:cd4168d0828d2e6cf60f35609bcafa8f36efd5728470fe9b5a5927f930be6967,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726485150608915249,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69d3e8c6e76d0bc1af3482326f7904d1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fa850b5495ffd4ec6303d58bd055944e4fd2d6bd860b6b70273e719497bdf2d,PodSandboxId:f4286a53710f269af6ca4f92b1992be516f9dedc66f945f30e7b0b95cdb2b85e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726485150554479561,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efede0e1597c8cbe70740f3169f7ec4a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:126fd7058d64d004c8ecb83905bc9e04f7dcd51ba92c467d2ec0175a0aedb8d4,PodSandboxId:113acd43d732ed38df48b77f813934192a2a1cfab0f438e8905c93feade283e7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726485150539003440,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94d3338940ee73a61a5075650d027904,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84517e6af45b4aaa0074acc2c9f529a29e3e476ef98b8a9b414655341b967c3b,PodSandboxId:779060032a611374116cc1e94df9c7e4a6c1443abec99f7abdef9464025a6169,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726484826321999428,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-g9fqk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0dd08783-fcfd-441f-8bda-c82c0c15173e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:840a587a0926e82c37a05beb15b8a2038534eaa736ce1cfed1c0eecaa13a5cdd,PodSandboxId:19286465f900afb5c6da745e2d4672e76d0bd87c33d561cb8eff8479eb72b5c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726484771766267901,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nlhl2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ea84b9d-f364-4e26-8dc8-44c3b4d92417,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02223ab1824986d0b8986be277df48e11a9e023715892ea637bf405f10af7198,PodSandboxId:01381d4d113d1f5aa2f6b3834d0194f5b4909599f84a8b29647fabd8f0fa5f7c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726484771695970386,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 5e515ac2-bcf5-4a0f-a3af-b4ee4e03a534,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a89ff755837adc838b2681df4fdb3f0cd2733af338156c540403f460fc59bc0,PodSandboxId:bd141ffff1a91e8b17d63ca2b8898889ab336bfde15ecdc79964aafaa1123465,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726484759715057078,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qb4tq,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 933f0749-7868-4e96-9b8e-67005545bbc5,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8c55edbe2173ff25f684104100b9d6d5b8d80563dadb06b4c24bb06b2bf68ee,PodSandboxId:cc5264d1c4b520dfff86dfc83596919ff23f9336b58339ea2a10f4492ba02b12,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726484759520373663,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ftj9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa72720f-1c4a-46a2-a733
-f411ccb6f628,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b76d5d4ad419a79a8aa910e3cea3029c12e6a90c7750317c5fbe1c40b43aa762,PodSandboxId:f771edf6fcef2eb32fd9b47472910c255ef2ee1f910f415da1ca1b8287fece1e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726484748620399557,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de66983060c1e167c6b9498eb8b0a025,}
,Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:769a75ad1934a20389b16c528015b4fd90be4bf92209ba2509e2cdf2e57e2a24,PodSandboxId:6237db42cfa9d23e63f16ff0ffa961fcbc1f9f25138e7c7fc11442d28aeecaff,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726484748618867302,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69d3e8c6e76d0bc1af3482326f7904d1,},Annotations:map[string]string{io.kubernetes.
container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d53f9aec7bc35845a9064135b7cf6154a190d2b15edab056cfd8296575fb25ba,PodSandboxId:c1754b1d745471f13f360eb1605866caa0a1b248d76224212a537736d73a9f20,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726484748609890980,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94d3338940ee73a61a5075650d027904,},Annotations:map[string]string{io
.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed73e9089f633c6b3f26009f325ee6cfdd62ef60de41cbb639ec4aa8b02b84a7,PodSandboxId:06f23871be821ecc20f321f0e711ab8e47f7a0308e547d26ab3365eaa22b0b27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726484748471628064,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efede0e1597c8cbe70740f3169f7ec4a,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9228c9c2-db7f-4c07-b692-ec3bfa6601d2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:14:09 multinode-736061 crio[2989]: time="2024-09-16 11:14:09.065446214Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7f920dfb-119f-4c7a-ac0f-58ebb2d6149f name=/runtime.v1.RuntimeService/Version
	Sep 16 11:14:09 multinode-736061 crio[2989]: time="2024-09-16 11:14:09.065584932Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7f920dfb-119f-4c7a-ac0f-58ebb2d6149f name=/runtime.v1.RuntimeService/Version
	Sep 16 11:14:09 multinode-736061 crio[2989]: time="2024-09-16 11:14:09.067067620Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5f2bb12a-90eb-4563-aace-6cc5a1202ca1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 11:14:09 multinode-736061 crio[2989]: time="2024-09-16 11:14:09.067649264Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485249067623273,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5f2bb12a-90eb-4563-aace-6cc5a1202ca1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 11:14:09 multinode-736061 crio[2989]: time="2024-09-16 11:14:09.068134199Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=df6fa4d2-1b8f-4d7b-9dc8-3490e37efc6e name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:14:09 multinode-736061 crio[2989]: time="2024-09-16 11:14:09.068213786Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=df6fa4d2-1b8f-4d7b-9dc8-3490e37efc6e name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:14:09 multinode-736061 crio[2989]: time="2024-09-16 11:14:09.068614306Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:522d3b85a45483f18f7f58bd8b3e8d13be8e39fc300735f74f42f329fc76b41c,PodSandboxId:c27596adc976965a1472bc763ada489b272e6b61d2f34ac0643f3897280bfd19,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726485188158372438,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-g9fqk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0dd08783-fcfd-441f-8bda-c82c0c15173e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34160c655e5ab02a4a54bd049c7fbf32b5f1f714b64212932d9c2c16e2d4bf25,PodSandboxId:d6609b6804e2135dcae87f33c158bf648f918c21cfda5ee80259df9d474184ae,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726485154742693212,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qb4tq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 933f0749-7868-4e96-9b8e-67005545bbc5,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35a7839cd57d08487c669b6c86350b4a2f5f3b241a502106911bebe57fc8af36,PodSandboxId:78066c652dd8faaf3adaf02ffc316c8c72ab996c03a8aa8fe4284f1c3783c373,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726485154656393416,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nlhl2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ea84b9d-f364-4e26-8dc8-44c3b4d92417,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87a99d0015cbca7c7d38632c9cf9a1a9a00ce88cbe573c612f87fc9ca64ae309,PodSandboxId:b06a4343bbdd3a9bb94de977a2f0df301ae567da38dde74613b8372b9badaa2e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726485154505906722,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e515ac2-bcf5-4a0f-a3af-b4ee4e03a534,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d81e17eebccf4b84f735e1cafbb3e77e8e76abd8f4d62edf876aa5d55fd0b0d,PodSandboxId:fcfacdd69a46c5dab45cd0a3aa1fa9de4c957a839c412471e98012ee586cb38c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726485154436680778,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ftj9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa72720f-1c4a-46a2-a733-f411ccb6f628,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e7284c90c8c712246e8ddc247d88f7825f37f3e3c689acda276e3052c41850d,PodSandboxId:d9afb21537018b32c40d57ef0e3594c924f50d2bf3259377c7789677aadd1cc2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726485150640534512,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de66983060c1e167c6b9498eb8b0a025,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae1251600e6e87614f76ce47d26bb537d329e9c3726f16afd53b5917cabf6526,PodSandboxId:cd4168d0828d2e6cf60f35609bcafa8f36efd5728470fe9b5a5927f930be6967,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726485150608915249,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69d3e8c6e76d0bc1af3482326f7904d1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fa850b5495ffd4ec6303d58bd055944e4fd2d6bd860b6b70273e719497bdf2d,PodSandboxId:f4286a53710f269af6ca4f92b1992be516f9dedc66f945f30e7b0b95cdb2b85e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726485150554479561,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efede0e1597c8cbe70740f3169f7ec4a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:126fd7058d64d004c8ecb83905bc9e04f7dcd51ba92c467d2ec0175a0aedb8d4,PodSandboxId:113acd43d732ed38df48b77f813934192a2a1cfab0f438e8905c93feade283e7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726485150539003440,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94d3338940ee73a61a5075650d027904,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84517e6af45b4aaa0074acc2c9f529a29e3e476ef98b8a9b414655341b967c3b,PodSandboxId:779060032a611374116cc1e94df9c7e4a6c1443abec99f7abdef9464025a6169,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726484826321999428,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-g9fqk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0dd08783-fcfd-441f-8bda-c82c0c15173e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:840a587a0926e82c37a05beb15b8a2038534eaa736ce1cfed1c0eecaa13a5cdd,PodSandboxId:19286465f900afb5c6da745e2d4672e76d0bd87c33d561cb8eff8479eb72b5c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726484771766267901,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nlhl2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ea84b9d-f364-4e26-8dc8-44c3b4d92417,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02223ab1824986d0b8986be277df48e11a9e023715892ea637bf405f10af7198,PodSandboxId:01381d4d113d1f5aa2f6b3834d0194f5b4909599f84a8b29647fabd8f0fa5f7c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726484771695970386,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 5e515ac2-bcf5-4a0f-a3af-b4ee4e03a534,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a89ff755837adc838b2681df4fdb3f0cd2733af338156c540403f460fc59bc0,PodSandboxId:bd141ffff1a91e8b17d63ca2b8898889ab336bfde15ecdc79964aafaa1123465,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726484759715057078,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qb4tq,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 933f0749-7868-4e96-9b8e-67005545bbc5,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8c55edbe2173ff25f684104100b9d6d5b8d80563dadb06b4c24bb06b2bf68ee,PodSandboxId:cc5264d1c4b520dfff86dfc83596919ff23f9336b58339ea2a10f4492ba02b12,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726484759520373663,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ftj9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa72720f-1c4a-46a2-a733
-f411ccb6f628,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b76d5d4ad419a79a8aa910e3cea3029c12e6a90c7750317c5fbe1c40b43aa762,PodSandboxId:f771edf6fcef2eb32fd9b47472910c255ef2ee1f910f415da1ca1b8287fece1e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726484748620399557,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de66983060c1e167c6b9498eb8b0a025,}
,Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:769a75ad1934a20389b16c528015b4fd90be4bf92209ba2509e2cdf2e57e2a24,PodSandboxId:6237db42cfa9d23e63f16ff0ffa961fcbc1f9f25138e7c7fc11442d28aeecaff,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726484748618867302,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69d3e8c6e76d0bc1af3482326f7904d1,},Annotations:map[string]string{io.kubernetes.
container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d53f9aec7bc35845a9064135b7cf6154a190d2b15edab056cfd8296575fb25ba,PodSandboxId:c1754b1d745471f13f360eb1605866caa0a1b248d76224212a537736d73a9f20,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726484748609890980,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94d3338940ee73a61a5075650d027904,},Annotations:map[string]string{io
.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed73e9089f633c6b3f26009f325ee6cfdd62ef60de41cbb639ec4aa8b02b84a7,PodSandboxId:06f23871be821ecc20f321f0e711ab8e47f7a0308e547d26ab3365eaa22b0b27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726484748471628064,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efede0e1597c8cbe70740f3169f7ec4a,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=df6fa4d2-1b8f-4d7b-9dc8-3490e37efc6e name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:14:09 multinode-736061 crio[2989]: time="2024-09-16 11:14:09.110587184Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e35def9b-98e2-4542-96bf-d70b65343c58 name=/runtime.v1.RuntimeService/Version
	Sep 16 11:14:09 multinode-736061 crio[2989]: time="2024-09-16 11:14:09.110684857Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e35def9b-98e2-4542-96bf-d70b65343c58 name=/runtime.v1.RuntimeService/Version
	Sep 16 11:14:09 multinode-736061 crio[2989]: time="2024-09-16 11:14:09.112391872Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1006d7fe-fdc6-4718-9c58-9e48e119481f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 11:14:09 multinode-736061 crio[2989]: time="2024-09-16 11:14:09.112841345Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485249112812164,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1006d7fe-fdc6-4718-9c58-9e48e119481f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 11:14:09 multinode-736061 crio[2989]: time="2024-09-16 11:14:09.115676673Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=95884dce-6e85-4e88-b35e-2b308715fb93 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:14:09 multinode-736061 crio[2989]: time="2024-09-16 11:14:09.115924713Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=95884dce-6e85-4e88-b35e-2b308715fb93 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:14:09 multinode-736061 crio[2989]: time="2024-09-16 11:14:09.116692916Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:522d3b85a45483f18f7f58bd8b3e8d13be8e39fc300735f74f42f329fc76b41c,PodSandboxId:c27596adc976965a1472bc763ada489b272e6b61d2f34ac0643f3897280bfd19,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726485188158372438,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-g9fqk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0dd08783-fcfd-441f-8bda-c82c0c15173e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34160c655e5ab02a4a54bd049c7fbf32b5f1f714b64212932d9c2c16e2d4bf25,PodSandboxId:d6609b6804e2135dcae87f33c158bf648f918c21cfda5ee80259df9d474184ae,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726485154742693212,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qb4tq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 933f0749-7868-4e96-9b8e-67005545bbc5,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35a7839cd57d08487c669b6c86350b4a2f5f3b241a502106911bebe57fc8af36,PodSandboxId:78066c652dd8faaf3adaf02ffc316c8c72ab996c03a8aa8fe4284f1c3783c373,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726485154656393416,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nlhl2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ea84b9d-f364-4e26-8dc8-44c3b4d92417,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87a99d0015cbca7c7d38632c9cf9a1a9a00ce88cbe573c612f87fc9ca64ae309,PodSandboxId:b06a4343bbdd3a9bb94de977a2f0df301ae567da38dde74613b8372b9badaa2e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726485154505906722,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e515ac2-bcf5-4a0f-a3af-b4ee4e03a534,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d81e17eebccf4b84f735e1cafbb3e77e8e76abd8f4d62edf876aa5d55fd0b0d,PodSandboxId:fcfacdd69a46c5dab45cd0a3aa1fa9de4c957a839c412471e98012ee586cb38c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726485154436680778,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ftj9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa72720f-1c4a-46a2-a733-f411ccb6f628,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e7284c90c8c712246e8ddc247d88f7825f37f3e3c689acda276e3052c41850d,PodSandboxId:d9afb21537018b32c40d57ef0e3594c924f50d2bf3259377c7789677aadd1cc2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726485150640534512,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de66983060c1e167c6b9498eb8b0a025,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae1251600e6e87614f76ce47d26bb537d329e9c3726f16afd53b5917cabf6526,PodSandboxId:cd4168d0828d2e6cf60f35609bcafa8f36efd5728470fe9b5a5927f930be6967,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726485150608915249,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69d3e8c6e76d0bc1af3482326f7904d1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fa850b5495ffd4ec6303d58bd055944e4fd2d6bd860b6b70273e719497bdf2d,PodSandboxId:f4286a53710f269af6ca4f92b1992be516f9dedc66f945f30e7b0b95cdb2b85e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726485150554479561,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efede0e1597c8cbe70740f3169f7ec4a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:126fd7058d64d004c8ecb83905bc9e04f7dcd51ba92c467d2ec0175a0aedb8d4,PodSandboxId:113acd43d732ed38df48b77f813934192a2a1cfab0f438e8905c93feade283e7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726485150539003440,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94d3338940ee73a61a5075650d027904,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84517e6af45b4aaa0074acc2c9f529a29e3e476ef98b8a9b414655341b967c3b,PodSandboxId:779060032a611374116cc1e94df9c7e4a6c1443abec99f7abdef9464025a6169,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726484826321999428,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-g9fqk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0dd08783-fcfd-441f-8bda-c82c0c15173e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:840a587a0926e82c37a05beb15b8a2038534eaa736ce1cfed1c0eecaa13a5cdd,PodSandboxId:19286465f900afb5c6da745e2d4672e76d0bd87c33d561cb8eff8479eb72b5c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726484771766267901,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nlhl2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ea84b9d-f364-4e26-8dc8-44c3b4d92417,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02223ab1824986d0b8986be277df48e11a9e023715892ea637bf405f10af7198,PodSandboxId:01381d4d113d1f5aa2f6b3834d0194f5b4909599f84a8b29647fabd8f0fa5f7c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726484771695970386,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 5e515ac2-bcf5-4a0f-a3af-b4ee4e03a534,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a89ff755837adc838b2681df4fdb3f0cd2733af338156c540403f460fc59bc0,PodSandboxId:bd141ffff1a91e8b17d63ca2b8898889ab336bfde15ecdc79964aafaa1123465,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726484759715057078,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qb4tq,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 933f0749-7868-4e96-9b8e-67005545bbc5,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8c55edbe2173ff25f684104100b9d6d5b8d80563dadb06b4c24bb06b2bf68ee,PodSandboxId:cc5264d1c4b520dfff86dfc83596919ff23f9336b58339ea2a10f4492ba02b12,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726484759520373663,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ftj9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa72720f-1c4a-46a2-a733
-f411ccb6f628,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b76d5d4ad419a79a8aa910e3cea3029c12e6a90c7750317c5fbe1c40b43aa762,PodSandboxId:f771edf6fcef2eb32fd9b47472910c255ef2ee1f910f415da1ca1b8287fece1e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726484748620399557,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de66983060c1e167c6b9498eb8b0a025,}
,Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:769a75ad1934a20389b16c528015b4fd90be4bf92209ba2509e2cdf2e57e2a24,PodSandboxId:6237db42cfa9d23e63f16ff0ffa961fcbc1f9f25138e7c7fc11442d28aeecaff,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726484748618867302,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69d3e8c6e76d0bc1af3482326f7904d1,},Annotations:map[string]string{io.kubernetes.
container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d53f9aec7bc35845a9064135b7cf6154a190d2b15edab056cfd8296575fb25ba,PodSandboxId:c1754b1d745471f13f360eb1605866caa0a1b248d76224212a537736d73a9f20,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726484748609890980,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94d3338940ee73a61a5075650d027904,},Annotations:map[string]string{io
.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed73e9089f633c6b3f26009f325ee6cfdd62ef60de41cbb639ec4aa8b02b84a7,PodSandboxId:06f23871be821ecc20f321f0e711ab8e47f7a0308e547d26ab3365eaa22b0b27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726484748471628064,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efede0e1597c8cbe70740f3169f7ec4a,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=95884dce-6e85-4e88-b35e-2b308715fb93 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	522d3b85a4548       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   c27596adc9769       busybox-7dff88458-g9fqk
	34160c655e5ab       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      About a minute ago   Running             kindnet-cni               1                   d6609b6804e21       kindnet-qb4tq
	35a7839cd57d0       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      About a minute ago   Running             coredns                   1                   78066c652dd8f       coredns-7c65d6cfc9-nlhl2
	87a99d0015cbc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   b06a4343bbdd3       storage-provisioner
	2d81e17eebccf       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      About a minute ago   Running             kube-proxy                1                   fcfacdd69a46c       kube-proxy-ftj9p
	2e7284c90c8c7       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      About a minute ago   Running             kube-scheduler            1                   d9afb21537018       kube-scheduler-multinode-736061
	ae1251600e6e8       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      About a minute ago   Running             etcd                      1                   cd4168d0828d2       etcd-multinode-736061
	8fa850b5495ff       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      About a minute ago   Running             kube-apiserver            1                   f4286a53710f2       kube-apiserver-multinode-736061
	126fd7058d64d       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      About a minute ago   Running             kube-controller-manager   1                   113acd43d732e       kube-controller-manager-multinode-736061
	84517e6af45b4       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   779060032a611       busybox-7dff88458-g9fqk
	840a587a0926e       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      7 minutes ago        Exited              coredns                   0                   19286465f900a       coredns-7c65d6cfc9-nlhl2
	02223ab182498       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago        Exited              storage-provisioner       0                   01381d4d113d1       storage-provisioner
	7a89ff755837a       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      8 minutes ago        Exited              kindnet-cni               0                   bd141ffff1a91       kindnet-qb4tq
	f8c55edbe2173       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      8 minutes ago        Exited              kube-proxy                0                   cc5264d1c4b52       kube-proxy-ftj9p
	b76d5d4ad419a       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      8 minutes ago        Exited              kube-scheduler            0                   f771edf6fcef2       kube-scheduler-multinode-736061
	769a75ad1934a       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      8 minutes ago        Exited              etcd                      0                   6237db42cfa9d       etcd-multinode-736061
	d53f9aec7bc35       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      8 minutes ago        Exited              kube-controller-manager   0                   c1754b1d74547       kube-controller-manager-multinode-736061
	ed73e9089f633       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      8 minutes ago        Exited              kube-apiserver            0                   06f23871be821       kube-apiserver-multinode-736061
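The table above is the runtime's own view of every container (running and exited) on the primary node; as a minimal sketch, assuming the crio socket path advertised in the node annotations below and shell access to the node (for example via minikube ssh -p multinode-736061), the same listing could be pulled by hand with:

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a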
	
	
	==> coredns [35a7839cd57d08487c669b6c86350b4a2f5f3b241a502106911bebe57fc8af36] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:40656 - 6477 "HINFO IN 2586289926805624417.1154026984614338138. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.028767921s
	
	
	==> coredns [840a587a0926e82c37a05beb15b8a2038534eaa736ce1cfed1c0eecaa13a5cdd] <==
	[INFO] 10.244.0.3:48472 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001859185s
	[INFO] 10.244.0.3:58999 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000160969s
	[INFO] 10.244.0.3:35408 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00007258s
	[INFO] 10.244.0.3:41914 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001221958s
	[INFO] 10.244.0.3:51441 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000075035s
	[INFO] 10.244.0.3:54367 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000064081s
	[INFO] 10.244.0.3:51073 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000061874s
	[INFO] 10.244.1.2:38827 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130826s
	[INFO] 10.244.1.2:49788 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142283s
	[INFO] 10.244.1.2:43407 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000083078s
	[INFO] 10.244.1.2:35506 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000123825s
	[INFO] 10.244.0.3:35311 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00008958s
	[INFO] 10.244.0.3:44801 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000055108s
	[INFO] 10.244.0.3:45405 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000039898s
	[INFO] 10.244.0.3:53790 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000037364s
	[INFO] 10.244.1.2:44863 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136337s
	[INFO] 10.244.1.2:38345 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000494388s
	[INFO] 10.244.1.2:36190 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000247796s
	[INFO] 10.244.1.2:38755 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000120111s
	[INFO] 10.244.0.3:58238 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129373s
	[INFO] 10.244.0.3:55519 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000102337s
	[INFO] 10.244.0.3:60945 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000061359s
	[INFO] 10.244.0.3:52747 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00010905s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
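The NXDOMAIN answers for the short names (kubernetes.default and kubernetes.default.default.svc.cluster.local) alongside NOERROR for the fully qualified kubernetes.default.svc.cluster.local reflect ordinary search-domain expansion by the client pods rather than a resolver fault; assuming the profile's kubeconfig context, a comparable lookup could be repeated from the busybox test pod with:

	kubectl --context multinode-736061 exec busybox-7dff88458-g9fqk -- nslookup kubernetes.default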
	
	
	==> describe nodes <==
	Name:               multinode-736061
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-736061
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=multinode-736061
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T11_05_54_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 11:05:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-736061
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 11:14:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 11:12:33 +0000   Mon, 16 Sep 2024 11:05:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 11:12:33 +0000   Mon, 16 Sep 2024 11:05:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 11:12:33 +0000   Mon, 16 Sep 2024 11:05:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 11:12:33 +0000   Mon, 16 Sep 2024 11:06:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.32
	  Hostname:    multinode-736061
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 60fe80618d4f42e281d4c50393e9d89e
	  System UUID:                60fe8061-8d4f-42e2-81d4-c50393e9d89e
	  Boot ID:                    d046d280-229f-4e9a-8a6c-1986374da911
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-g9fqk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m5s
	  kube-system                 coredns-7c65d6cfc9-nlhl2                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m10s
	  kube-system                 etcd-multinode-736061                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m16s
	  kube-system                 kindnet-qb4tq                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m11s
	  kube-system                 kube-apiserver-multinode-736061             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m16s
	  kube-system                 kube-controller-manager-multinode-736061    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m16s
	  kube-system                 kube-proxy-ftj9p                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m11s
	  kube-system                 kube-scheduler-multinode-736061             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m16s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m9s                   kube-proxy       
	  Normal  Starting                 94s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  8m22s (x8 over 8m22s)  kubelet          Node multinode-736061 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m22s (x8 over 8m22s)  kubelet          Node multinode-736061 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m22s (x7 over 8m22s)  kubelet          Node multinode-736061 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m16s                  kubelet          Node multinode-736061 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  8m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    8m16s                  kubelet          Node multinode-736061 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m16s                  kubelet          Node multinode-736061 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m16s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m11s                  node-controller  Node multinode-736061 event: Registered Node multinode-736061 in Controller
	  Normal  NodeReady                7m58s                  kubelet          Node multinode-736061 status is now: NodeReady
	  Normal  Starting                 100s                   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  100s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  99s (x8 over 100s)     kubelet          Node multinode-736061 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    99s (x8 over 100s)     kubelet          Node multinode-736061 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     99s (x7 over 100s)     kubelet          Node multinode-736061 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           93s                    node-controller  Node multinode-736061 event: Registered Node multinode-736061 in Controller
	
	
	Name:               multinode-736061-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-736061-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=multinode-736061
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T11_13_11_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 11:13:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-736061-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 11:14:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 11:13:40 +0000   Mon, 16 Sep 2024 11:13:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 11:13:40 +0000   Mon, 16 Sep 2024 11:13:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 11:13:40 +0000   Mon, 16 Sep 2024 11:13:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 11:13:40 +0000   Mon, 16 Sep 2024 11:13:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.215
	  Hostname:    multinode-736061-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d4fe337504134150bccd557919449b29
	  System UUID:                d4fe3375-0413-4150-bccd-557919449b29
	  Boot ID:                    d98e6a6c-e943-4dd6-9c7a-051fe2e4235b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-7dvrx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kindnet-xlrxb              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m27s
	  kube-system                 kube-proxy-8h6jp           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 7m21s                  kube-proxy  
	  Normal  Starting                 54s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  7m27s (x2 over 7m27s)  kubelet     Node multinode-736061-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m27s (x2 over 7m27s)  kubelet     Node multinode-736061-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m27s (x2 over 7m27s)  kubelet     Node multinode-736061-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m27s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m8s                   kubelet     Node multinode-736061-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  59s (x2 over 59s)      kubelet     Node multinode-736061-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s (x2 over 59s)      kubelet     Node multinode-736061-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s (x2 over 59s)      kubelet     Node multinode-736061-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  59s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                41s                    kubelet     Node multinode-736061-m02 status is now: NodeReady
	
	
	Name:               multinode-736061-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-736061-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=multinode-736061
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T11_13_48_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 11:13:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-736061-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 11:14:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 11:14:06 +0000   Mon, 16 Sep 2024 11:13:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 11:14:06 +0000   Mon, 16 Sep 2024 11:13:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 11:14:06 +0000   Mon, 16 Sep 2024 11:13:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 11:14:06 +0000   Mon, 16 Sep 2024 11:14:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.60
	  Hostname:    multinode-736061-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 890f5eb3683144b2b6dc0b58be15768f
	  System UUID:                890f5eb3-6831-44b2-b6dc-0b58be15768f
	  Boot ID:                    74e7c915-ff81-420b-8786-373d5c367efe
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-bvqrg       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m33s
	  kube-system                 kube-proxy-5hctk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From           Message
	  ----    ------                   ----                   ----           -------
	  Normal  Starting                 5m36s                  kube-proxy     
	  Normal  Starting                 6m28s                  kube-proxy     
	  Normal  Starting                 16s                    kube-proxy     
	  Normal  NodeHasSufficientMemory  6m33s (x2 over 6m34s)  kubelet        Node multinode-736061-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m33s (x2 over 6m34s)  kubelet        Node multinode-736061-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m33s (x2 over 6m34s)  kubelet        Node multinode-736061-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m33s                  kubelet        Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m14s                  kubelet        Node multinode-736061-m03 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    5m41s (x2 over 5m41s)  kubelet        Node multinode-736061-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m41s (x2 over 5m41s)  kubelet        Node multinode-736061-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m41s                  kubelet        Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m41s (x2 over 5m41s)  kubelet        Node multinode-736061-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m22s                  kubelet        Node multinode-736061-m03 status is now: NodeReady
	  Normal  CIDRAssignmentFailed     22s                    cidrAllocator  Node multinode-736061-m03 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  22s (x2 over 22s)      kubelet        Node multinode-736061-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x2 over 22s)      kubelet        Node multinode-736061-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x2 over 22s)      kubelet        Node multinode-736061-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                    kubelet        Updated Node Allocatable limit across pods
	  Normal  NodeReady                3s                     kubelet        Node multinode-736061-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.065798] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064029] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.188943] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.125437] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.281577] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +3.899790] systemd-fstab-generator[747]: Ignoring "noauto" option for root device
	[  +3.897000] systemd-fstab-generator[878]: Ignoring "noauto" option for root device
	[  +0.059824] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.997335] systemd-fstab-generator[1219]: Ignoring "noauto" option for root device
	[  +0.078309] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.139976] systemd-fstab-generator[1321]: Ignoring "noauto" option for root device
	[  +0.076513] kauditd_printk_skb: 18 callbacks suppressed
	[Sep16 11:06] kauditd_printk_skb: 69 callbacks suppressed
	[Sep16 11:07] kauditd_printk_skb: 12 callbacks suppressed
	[Sep16 11:12] systemd-fstab-generator[2913]: Ignoring "noauto" option for root device
	[  +0.148062] systemd-fstab-generator[2925]: Ignoring "noauto" option for root device
	[  +0.171344] systemd-fstab-generator[2940]: Ignoring "noauto" option for root device
	[  +0.138643] systemd-fstab-generator[2952]: Ignoring "noauto" option for root device
	[  +0.279343] systemd-fstab-generator[2980]: Ignoring "noauto" option for root device
	[  +0.718595] systemd-fstab-generator[3070]: Ignoring "noauto" option for root device
	[  +2.178122] systemd-fstab-generator[3193]: Ignoring "noauto" option for root device
	[  +4.699068] kauditd_printk_skb: 184 callbacks suppressed
	[ +11.680556] systemd-fstab-generator[4044]: Ignoring "noauto" option for root device
	[  +0.106179] kauditd_printk_skb: 34 callbacks suppressed
	[Sep16 11:13] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [769a75ad1934a20389b16c528015b4fd90be4bf92209ba2509e2cdf2e57e2a24] <==
	{"level":"info","ts":"2024-09-16T11:05:49.392766Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:05:49.393463Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T11:06:03.777149Z","caller":"traceutil/trace.go:171","msg":"trace[927915415] transaction","detail":"{read_only:false; response_revision:407; number_of_response:1; }","duration":"125.996547ms","start":"2024-09-16T11:06:03.651108Z","end":"2024-09-16T11:06:03.777104Z","steps":["trace[927915415] 'process raft request'  (duration: 125.663993ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T11:06:42.434928Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"155.290318ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7316539574759162275 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-736061-m02.17f5b4c7bf86ac19\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-736061-m02.17f5b4c7bf86ac19\" value_size:642 lease:7316539574759161296 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-09-16T11:06:42.435173Z","caller":"traceutil/trace.go:171","msg":"trace[736335181] transaction","detail":"{read_only:false; response_revision:467; number_of_response:1; }","duration":"242.745028ms","start":"2024-09-16T11:06:42.192402Z","end":"2024-09-16T11:06:42.435147Z","steps":["trace[736335181] 'process raft request'  (duration: 86.752839ms)","trace[736335181] 'compare'  (duration: 155.030741ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-16T11:06:42.435488Z","caller":"traceutil/trace.go:171","msg":"trace[1491776336] transaction","detail":"{read_only:false; response_revision:468; number_of_response:1; }","duration":"164.53116ms","start":"2024-09-16T11:06:42.270945Z","end":"2024-09-16T11:06:42.435476Z","steps":["trace[1491776336] 'process raft request'  (duration: 164.128437ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T11:07:36.191017Z","caller":"traceutil/trace.go:171","msg":"trace[1370350330] linearizableReadLoop","detail":"{readStateIndex:632; appliedIndex:631; }","duration":"135.211812ms","start":"2024-09-16T11:07:36.055773Z","end":"2024-09-16T11:07:36.190985Z","steps":["trace[1370350330] 'read index received'  (duration: 127.332155ms)","trace[1370350330] 'applied index is now lower than readState.Index'  (duration: 7.878564ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-16T11:07:36.191190Z","caller":"traceutil/trace.go:171","msg":"trace[1606896706] transaction","detail":"{read_only:false; response_revision:598; number_of_response:1; }","duration":"230.440734ms","start":"2024-09-16T11:07:35.960732Z","end":"2024-09-16T11:07:36.191172Z","steps":["trace[1606896706] 'process raft request'  (duration: 222.394697ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T11:07:36.191504Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"135.712787ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-736061-m03\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T11:07:36.191575Z","caller":"traceutil/trace.go:171","msg":"trace[641878152] range","detail":"{range_begin:/registry/minions/multinode-736061-m03; range_end:; response_count:0; response_revision:598; }","duration":"135.807158ms","start":"2024-09-16T11:07:36.055751Z","end":"2024-09-16T11:07:36.191558Z","steps":["trace[641878152] 'agreement among raft nodes before linearized reading'  (duration: 135.656463ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T11:07:43.320131Z","caller":"traceutil/trace.go:171","msg":"trace[1026367264] linearizableReadLoop","detail":"{readStateIndex:678; appliedIndex:677; }","duration":"256.510329ms","start":"2024-09-16T11:07:43.063604Z","end":"2024-09-16T11:07:43.320115Z","steps":["trace[1026367264] 'read index received'  (duration: 208.747621ms)","trace[1026367264] 'applied index is now lower than readState.Index'  (duration: 47.76201ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-16T11:07:43.320580Z","caller":"traceutil/trace.go:171","msg":"trace[845413732] transaction","detail":"{read_only:false; response_revision:640; number_of_response:1; }","duration":"283.063625ms","start":"2024-09-16T11:07:43.037497Z","end":"2024-09-16T11:07:43.320560Z","steps":["trace[845413732] 'process raft request'  (duration: 234.904981ms)","trace[845413732] 'compare'  (duration: 47.473062ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-16T11:07:43.320947Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"257.339861ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-736061-m03\" ","response":"range_response_count:1 size:2893"}
	{"level":"info","ts":"2024-09-16T11:07:43.321022Z","caller":"traceutil/trace.go:171","msg":"trace[1372162398] range","detail":"{range_begin:/registry/minions/multinode-736061-m03; range_end:; response_count:1; response_revision:640; }","duration":"257.429414ms","start":"2024-09-16T11:07:43.063585Z","end":"2024-09-16T11:07:43.321014Z","steps":["trace[1372162398] 'agreement among raft nodes before linearized reading'  (duration: 257.097073ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T11:08:32.848686Z","caller":"traceutil/trace.go:171","msg":"trace[1433849770] transaction","detail":"{read_only:false; response_revision:728; number_of_response:1; }","duration":"176.13666ms","start":"2024-09-16T11:08:32.672526Z","end":"2024-09-16T11:08:32.848663Z","steps":["trace[1433849770] 'process raft request'  (duration: 175.720453ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T11:10:54.687328Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-16T11:10:54.687457Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-736061","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.32:2380"],"advertise-client-urls":["https://192.168.39.32:2379"]}
	{"level":"warn","ts":"2024-09-16T11:10:54.687629Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.32:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T11:10:54.687676Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.32:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T11:10:54.689450Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T11:10:54.689531Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-16T11:10:54.770633Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"d4c05646b7156589","current-leader-member-id":"d4c05646b7156589"}
	{"level":"info","ts":"2024-09-16T11:10:54.773137Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.32:2380"}
	{"level":"info","ts":"2024-09-16T11:10:54.773277Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.32:2380"}
	{"level":"info","ts":"2024-09-16T11:10:54.773343Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-736061","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.32:2380"],"advertise-client-urls":["https://192.168.39.32:2379"]}
	
	
	==> etcd [ae1251600e6e87614f76ce47d26bb537d329e9c3726f16afd53b5917cabf6526] <==
	{"level":"info","ts":"2024-09-16T11:12:31.076410Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68bdcbcbc4b793bb","local-member-id":"d4c05646b7156589","added-peer-id":"d4c05646b7156589","added-peer-peer-urls":["https://192.168.39.32:2380"]}
	{"level":"info","ts":"2024-09-16T11:12:31.076610Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68bdcbcbc4b793bb","local-member-id":"d4c05646b7156589","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:12:31.076674Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:12:31.083484Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:12:31.096736Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T11:12:31.097022Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"d4c05646b7156589","initial-advertise-peer-urls":["https://192.168.39.32:2380"],"listen-peer-urls":["https://192.168.39.32:2380"],"advertise-client-urls":["https://192.168.39.32:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.32:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T11:12:31.097067Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T11:12:31.097111Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.32:2380"}
	{"level":"info","ts":"2024-09-16T11:12:31.097134Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.32:2380"}
	{"level":"info","ts":"2024-09-16T11:12:32.130362Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c05646b7156589 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-16T11:12:32.130461Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c05646b7156589 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-16T11:12:32.130485Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c05646b7156589 received MsgPreVoteResp from d4c05646b7156589 at term 2"}
	{"level":"info","ts":"2024-09-16T11:12:32.130501Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c05646b7156589 became candidate at term 3"}
	{"level":"info","ts":"2024-09-16T11:12:32.130507Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c05646b7156589 received MsgVoteResp from d4c05646b7156589 at term 3"}
	{"level":"info","ts":"2024-09-16T11:12:32.130515Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c05646b7156589 became leader at term 3"}
	{"level":"info","ts":"2024-09-16T11:12:32.130532Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d4c05646b7156589 elected leader d4c05646b7156589 at term 3"}
	{"level":"info","ts":"2024-09-16T11:12:32.136512Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"d4c05646b7156589","local-member-attributes":"{Name:multinode-736061 ClientURLs:[https://192.168.39.32:2379]}","request-path":"/0/members/d4c05646b7156589/attributes","cluster-id":"68bdcbcbc4b793bb","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T11:12:32.136525Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T11:12:32.136756Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T11:12:32.137155Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T11:12:32.137197Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T11:12:32.137926Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:12:32.137926Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:12:32.138897Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.32:2379"}
	{"level":"info","ts":"2024-09-16T11:12:32.139181Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 11:14:09 up 8 min,  0 users,  load average: 0.67, 0.50, 0.25
	Linux multinode-736061 5.10.207 #1 SMP Sun Sep 15 20:39:46 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [34160c655e5ab02a4a54bd049c7fbf32b5f1f714b64212932d9c2c16e2d4bf25] <==
	I0916 11:13:25.685796       1 main.go:322] Node multinode-736061-m03 has CIDR [10.244.4.0/24] 
	I0916 11:13:35.682408       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0916 11:13:35.682574       1 main.go:299] handling current node
	I0916 11:13:35.682632       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0916 11:13:35.682657       1 main.go:322] Node multinode-736061-m02 has CIDR [10.244.1.0/24] 
	I0916 11:13:35.683172       1 main.go:295] Handling node with IPs: map[192.168.39.60:{}]
	I0916 11:13:35.683223       1 main.go:322] Node multinode-736061-m03 has CIDR [10.244.4.0/24] 
	I0916 11:13:45.685747       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0916 11:13:45.685810       1 main.go:299] handling current node
	I0916 11:13:45.685832       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0916 11:13:45.685842       1 main.go:322] Node multinode-736061-m02 has CIDR [10.244.1.0/24] 
	I0916 11:13:45.686196       1 main.go:295] Handling node with IPs: map[192.168.39.60:{}]
	I0916 11:13:45.686237       1 main.go:322] Node multinode-736061-m03 has CIDR [10.244.4.0/24] 
	I0916 11:13:55.681649       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0916 11:13:55.681784       1 main.go:299] handling current node
	I0916 11:13:55.681816       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0916 11:13:55.681835       1 main.go:322] Node multinode-736061-m02 has CIDR [10.244.1.0/24] 
	I0916 11:13:55.681969       1 main.go:295] Handling node with IPs: map[192.168.39.60:{}]
	I0916 11:13:55.681991       1 main.go:322] Node multinode-736061-m03 has CIDR [10.244.2.0/24] 
	I0916 11:14:05.688020       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0916 11:14:05.688131       1 main.go:299] handling current node
	I0916 11:14:05.688166       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0916 11:14:05.688184       1 main.go:322] Node multinode-736061-m02 has CIDR [10.244.1.0/24] 
	I0916 11:14:05.688461       1 main.go:295] Handling node with IPs: map[192.168.39.60:{}]
	I0916 11:14:05.688502       1 main.go:322] Node multinode-736061-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [7a89ff755837adc838b2681df4fdb3f0cd2733af338156c540403f460fc59bc0] <==
	I0916 11:10:10.885622       1 main.go:322] Node multinode-736061-m03 has CIDR [10.244.4.0/24] 
	I0916 11:10:20.882088       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0916 11:10:20.882177       1 main.go:322] Node multinode-736061-m02 has CIDR [10.244.1.0/24] 
	I0916 11:10:20.882351       1 main.go:295] Handling node with IPs: map[192.168.39.60:{}]
	I0916 11:10:20.882379       1 main.go:322] Node multinode-736061-m03 has CIDR [10.244.4.0/24] 
	I0916 11:10:20.882438       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0916 11:10:20.882445       1 main.go:299] handling current node
	I0916 11:10:30.882343       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0916 11:10:30.882485       1 main.go:299] handling current node
	I0916 11:10:30.882519       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0916 11:10:30.882538       1 main.go:322] Node multinode-736061-m02 has CIDR [10.244.1.0/24] 
	I0916 11:10:30.882705       1 main.go:295] Handling node with IPs: map[192.168.39.60:{}]
	I0916 11:10:30.882730       1 main.go:322] Node multinode-736061-m03 has CIDR [10.244.4.0/24] 
	I0916 11:10:40.881843       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0916 11:10:40.881966       1 main.go:299] handling current node
	I0916 11:10:40.881993       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0916 11:10:40.882011       1 main.go:322] Node multinode-736061-m02 has CIDR [10.244.1.0/24] 
	I0916 11:10:40.882162       1 main.go:295] Handling node with IPs: map[192.168.39.60:{}]
	I0916 11:10:40.882241       1 main.go:322] Node multinode-736061-m03 has CIDR [10.244.4.0/24] 
	I0916 11:10:50.885456       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0916 11:10:50.885505       1 main.go:299] handling current node
	I0916 11:10:50.885524       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0916 11:10:50.885530       1 main.go:322] Node multinode-736061-m02 has CIDR [10.244.1.0/24] 
	I0916 11:10:50.885705       1 main.go:295] Handling node with IPs: map[192.168.39.60:{}]
	I0916 11:10:50.885712       1 main.go:322] Node multinode-736061-m03 has CIDR [10.244.4.0/24] 
	
	
	==> kube-apiserver [8fa850b5495ffd4ec6303d58bd055944e4fd2d6bd860b6b70273e719497bdf2d] <==
	I0916 11:12:33.498192       1 shared_informer.go:320] Caches are synced for configmaps
	I0916 11:12:33.501874       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0916 11:12:33.508959       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0916 11:12:33.509043       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0916 11:12:33.509776       1 aggregator.go:171] initial CRD sync complete...
	I0916 11:12:33.509828       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 11:12:33.509857       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 11:12:33.546526       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0916 11:12:33.568509       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0916 11:12:33.568599       1 policy_source.go:224] refreshing policies
	I0916 11:12:33.589155       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0916 11:12:33.590889       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0916 11:12:33.590927       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0916 11:12:33.591376       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0916 11:12:33.596733       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0916 11:12:33.620595       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 11:12:33.621748       1 cache.go:39] Caches are synced for autoregister controller
	I0916 11:12:34.423228       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0916 11:12:35.891543       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 11:12:36.022725       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 11:12:36.049167       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 11:12:36.129506       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 11:12:36.139653       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 11:12:37.024276       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 11:12:37.124173       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [ed73e9089f633c6b3f26009f325ee6cfdd62ef60de41cbb639ec4aa8b02b84a7] <==
	W0916 11:10:54.717805       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0916 11:10:54.721617       1 controller.go:131] Unable to remove endpoints from kubernetes service: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	I0916 11:10:54.721803       1 dynamic_cafile_content.go:174] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	W0916 11:10:54.722189       1 logging.go:55] [core] [Channel #8 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I0916 11:10:54.722608       1 storage_flowcontrol.go:186] APF bootstrap ensurer is exiting
	I0916 11:10:54.722692       1 cluster_authentication_trust_controller.go:466] Shutting down cluster_authentication_trust_controller controller
	I0916 11:10:54.722807       1 autoregister_controller.go:168] Shutting down autoregister controller
	I0916 11:10:54.722839       1 apiservice_controller.go:134] Shutting down APIServiceRegistrationController
	I0916 11:10:54.722854       1 crdregistration_controller.go:145] Shutting down crd-autoregister controller
	I0916 11:10:54.722888       1 apiapproval_controller.go:201] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
	I0916 11:10:54.722907       1 nonstructuralschema_controller.go:207] Shutting down NonStructuralSchemaConditionController
	I0916 11:10:54.722935       1 establishing_controller.go:92] Shutting down EstablishingController
	I0916 11:10:54.722948       1 naming_controller.go:305] Shutting down NamingConditionController
	I0916 11:10:54.722980       1 controller.go:120] Shutting down OpenAPI V3 controller
	I0916 11:10:54.722994       1 controller.go:170] Shutting down OpenAPI controller
	I0916 11:10:54.723024       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I0916 11:10:54.723033       1 remote_available_controller.go:427] Shutting down RemoteAvailability controller
	I0916 11:10:54.723049       1 customresource_discovery_controller.go:328] Shutting down DiscoveryController
	I0916 11:10:54.723078       1 apf_controller.go:389] Shutting down API Priority and Fairness config worker
	I0916 11:10:54.723096       1 local_available_controller.go:172] Shutting down LocalAvailability controller
	I0916 11:10:54.723124       1 controller.go:132] Ending legacy_token_tracking_controller
	I0916 11:10:54.723131       1 controller.go:133] Shutting down legacy_token_tracking_controller
	I0916 11:10:54.723263       1 crd_finalizer.go:281] Shutting down CRDFinalizer
	I0916 11:10:54.723385       1 system_namespaces_controller.go:76] Shutting down system namespaces controller
	I0916 11:10:54.723607       1 dynamic_cafile_content.go:174] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	
	
	==> kube-controller-manager [126fd7058d64d004c8ecb83905bc9e04f7dcd51ba92c467d2ec0175a0aedb8d4] <==
	I0916 11:13:30.794521       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="10.415627ms"
	I0916 11:13:30.794689       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="47.593µs"
	I0916 11:13:31.852505       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m02"
	I0916 11:13:40.787407       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m02"
	I0916 11:13:46.690796       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:13:46.714603       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:13:46.929467       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-736061-m02"
	I0916 11:13:46.930549       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:13:47.907188       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-736061-m03\" does not exist"
	I0916 11:13:47.909491       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-736061-m02"
	I0916 11:13:47.928347       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-736061-m03" podCIDRs=["10.244.2.0/24"]
	I0916 11:13:47.928434       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	E0916 11:13:47.943698       1 range_allocator.go:427] "Failed to update node PodCIDR after multiple attempts" err="failed to patch node CIDR: Node \"multinode-736061-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.3.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-736061-m03" podCIDRs=["10.244.3.0/24"]
	E0916 11:13:47.943787       1 range_allocator.go:433] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"multinode-736061-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.3.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-736061-m03"
	E0916 11:13:47.943838       1 range_allocator.go:246] "Unhandled Error" err="error syncing 'multinode-736061-m03': failed to patch node CIDR: Node \"multinode-736061-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.3.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I0916 11:13:47.943877       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:13:47.949840       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:13:47.952982       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:13:48.292993       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:13:51.924112       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:13:58.208795       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:14:06.228519       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:14:06.228610       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-736061-m02"
	I0916 11:14:06.246940       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:14:06.870268       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	
	
	==> kube-controller-manager [d53f9aec7bc35845a9064135b7cf6154a190d2b15edab056cfd8296575fb25ba] <==
	I0916 11:08:27.068836       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:08:27.299944       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-736061-m02"
	I0916 11:08:27.299986       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:08:28.498604       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-736061-m03\" does not exist"
	I0916 11:08:28.499795       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-736061-m02"
	I0916 11:08:28.530214       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-736061-m03" podCIDRs=["10.244.4.0/24"]
	I0916 11:08:28.530257       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:08:28.530321       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:08:28.812678       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:08:29.131881       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:08:33.111007       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:08:38.696548       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:08:47.199430       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:08:47.199515       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-736061-m02"
	I0916 11:08:47.211278       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:08:48.081832       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:09:28.097328       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m02"
	I0916 11:09:28.097948       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-736061-m03"
	I0916 11:09:28.128518       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m02"
	I0916 11:09:28.176986       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="12.051461ms"
	I0916 11:09:28.177686       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="101.301µs"
	I0916 11:09:33.174860       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:09:33.196257       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m02"
	I0916 11:09:33.196479       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:09:43.270263       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	
	
	==> kube-proxy [2d81e17eebccf4b84f735e1cafbb3e77e8e76abd8f4d62edf876aa5d55fd0b0d] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0916 11:12:34.892799       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0916 11:12:34.920138       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.32"]
	E0916 11:12:34.920279       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 11:12:34.987651       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0916 11:12:34.987713       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0916 11:12:34.987739       1 server_linux.go:169] "Using iptables Proxier"
	I0916 11:12:34.996924       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 11:12:34.997221       1 server.go:483] "Version info" version="v1.31.1"
	I0916 11:12:34.997234       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 11:12:35.007220       1 config.go:199] "Starting service config controller"
	I0916 11:12:35.029098       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 11:12:35.025409       1 config.go:105] "Starting endpoint slice config controller"
	I0916 11:12:35.029156       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 11:12:35.029162       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 11:12:35.026457       1 config.go:328] "Starting node config controller"
	I0916 11:12:35.029234       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 11:12:35.130341       1 shared_informer.go:320] Caches are synced for node config
	I0916 11:12:35.130407       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [f8c55edbe2173ff25f684104100b9d6d5b8d80563dadb06b4c24bb06b2bf68ee] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0916 11:05:59.852422       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0916 11:05:59.886836       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.32"]
	E0916 11:05:59.886976       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 11:05:59.944125       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0916 11:05:59.944160       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0916 11:05:59.944181       1 server_linux.go:169] "Using iptables Proxier"
	I0916 11:05:59.947733       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 11:05:59.948149       1 server.go:483] "Version info" version="v1.31.1"
	I0916 11:05:59.948393       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 11:05:59.949794       1 config.go:199] "Starting service config controller"
	I0916 11:05:59.949862       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 11:05:59.950230       1 config.go:105] "Starting endpoint slice config controller"
	I0916 11:05:59.950374       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 11:05:59.950923       1 config.go:328] "Starting node config controller"
	I0916 11:05:59.952219       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 11:06:00.050768       1 shared_informer.go:320] Caches are synced for service config
	I0916 11:06:00.050862       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 11:06:00.052567       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [2e7284c90c8c712246e8ddc247d88f7825f37f3e3c689acda276e3052c41850d] <==
	I0916 11:12:31.748594       1 serving.go:386] Generated self-signed cert in-memory
	W0916 11:12:33.440575       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0916 11:12:33.440623       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0916 11:12:33.440633       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 11:12:33.440641       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 11:12:33.526991       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0916 11:12:33.527040       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 11:12:33.536502       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0916 11:12:33.536670       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 11:12:33.540976       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0916 11:12:33.544844       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0916 11:12:33.638485       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [b76d5d4ad419a79a8aa910e3cea3029c12e6a90c7750317c5fbe1c40b43aa762] <==
	E0916 11:05:52.226438       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:05:52.286013       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 11:05:52.286065       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:05:52.292630       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 11:05:52.292712       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:05:52.303069       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 11:05:52.303177       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:05:52.308000       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 11:05:52.308078       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0916 11:05:52.326647       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 11:05:52.326746       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:05:52.367616       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 11:05:52.367800       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 11:05:52.407350       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 11:05:52.407398       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:05:52.423030       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 11:05:52.423081       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 11:05:52.501395       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 11:05:52.501587       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:05:52.597443       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 11:05:52.597573       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:05:52.652519       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 11:05:52.652625       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0916 11:05:55.090829       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0916 11:10:54.693272       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 16 11:12:39 multinode-736061 kubelet[3200]: E0916 11:12:39.954487    3200 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485159953757144,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:12:39 multinode-736061 kubelet[3200]: E0916 11:12:39.954768    3200 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485159953757144,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:12:49 multinode-736061 kubelet[3200]: E0916 11:12:49.957520    3200 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485169957064652,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:12:49 multinode-736061 kubelet[3200]: E0916 11:12:49.957552    3200 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485169957064652,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:12:59 multinode-736061 kubelet[3200]: E0916 11:12:59.962896    3200 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485179961170320,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:12:59 multinode-736061 kubelet[3200]: E0916 11:12:59.962920    3200 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485179961170320,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:13:09 multinode-736061 kubelet[3200]: E0916 11:13:09.964855    3200 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485189964245911,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:13:09 multinode-736061 kubelet[3200]: E0916 11:13:09.965529    3200 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485189964245911,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:13:19 multinode-736061 kubelet[3200]: E0916 11:13:19.970568    3200 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485199969707731,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:13:19 multinode-736061 kubelet[3200]: E0916 11:13:19.970611    3200 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485199969707731,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:13:29 multinode-736061 kubelet[3200]: E0916 11:13:29.921439    3200 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 16 11:13:29 multinode-736061 kubelet[3200]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 16 11:13:29 multinode-736061 kubelet[3200]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 16 11:13:29 multinode-736061 kubelet[3200]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 16 11:13:29 multinode-736061 kubelet[3200]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 16 11:13:29 multinode-736061 kubelet[3200]: E0916 11:13:29.972711    3200 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485209972226101,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:13:29 multinode-736061 kubelet[3200]: E0916 11:13:29.972898    3200 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485209972226101,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:13:39 multinode-736061 kubelet[3200]: E0916 11:13:39.976917    3200 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485219975946051,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:13:39 multinode-736061 kubelet[3200]: E0916 11:13:39.977478    3200 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485219975946051,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:13:49 multinode-736061 kubelet[3200]: E0916 11:13:49.980692    3200 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485229980248757,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:13:49 multinode-736061 kubelet[3200]: E0916 11:13:49.980723    3200 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485229980248757,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:13:59 multinode-736061 kubelet[3200]: E0916 11:13:59.982354    3200 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485239981881362,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:13:59 multinode-736061 kubelet[3200]: E0916 11:13:59.982789    3200 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485239981881362,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:14:09 multinode-736061 kubelet[3200]: E0916 11:14:09.986438    3200 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485249985987622,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:14:09 multinode-736061 kubelet[3200]: E0916 11:14:09.986463    3200 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485249985987622,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0916 11:14:08.658460   41235 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19651-3851/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
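A side note on the kubelet log above: the repeated failure to create the KUBE-KUBELET-CANARY chain ("can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)") usually means the ip6table_nat kernel module is not loaded in the guest, not that iptables itself is broken. A minimal diagnostic sketch against this profile, as a hypothetical follow-up on the VM rather than anything the captured run performed:

	# Hypothetical diagnostic inside the guest (not part of the captured run):
	out/minikube-linux-amd64 -p multinode-736061 ssh "lsmod | grep ip6table_nat"
	# If nothing is listed, loading the module should allow the canary chain to be created:
	out/minikube-linux-amd64 -p multinode-736061 ssh "sudo modprobe ip6table_nat"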
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-736061 -n multinode-736061
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-736061 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context multinode-736061 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (468.353µs)
helpers_test.go:263: kubectl --context multinode-736061 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (318.62s)
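Both this failure and the DeleteNode failure below reduce to `fork/exec /usr/local/bin/kubectl: exec format error`, which typically indicates that the kubectl binary on the agent was built for a different architecture than the host (or is truncated), rather than a cluster problem. A minimal check on the agent, as a hedged sketch; the re-download step is an assumed remediation, not something this run performed:

	# Compare the binary's architecture with the host's (hypothetical diagnostic on the agent):
	file /usr/local/bin/kubectl    # expect "ELF 64-bit LSB executable, x86-64" on this amd64 agent
	uname -m                       # x86_64 on ubuntu-20-agent-15
	# If they disagree, replacing kubectl with a linux/amd64 build is the likely fix:
	curl -LO "https://dl.k8s.io/release/$(curl -Ls https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
	sudo install -m 0755 kubectl /usr/local/bin/kubectl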

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (4.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736061 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-736061 node delete m03: (1.578230852s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736061 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:436: (dbg) Non-zero exit: kubectl get nodes: fork/exec /usr/local/bin/kubectl: exec format error (509.032µs)
multinode_test.go:438: failed to run kubectl get nodes. args "kubectl get nodes" : fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-736061 -n multinode-736061
helpers_test.go:244: <<< TestMultiNode/serial/DeleteNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/DeleteNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736061 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-736061 logs -n 25: (1.485291108s)
helpers_test.go:252: TestMultiNode/serial/DeleteNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| cp      | multinode-736061 cp multinode-736061-m02:/home/docker/cp-test.txt                       | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1886615299/001/cp-test_multinode-736061-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-736061 ssh -n                                                                 | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | multinode-736061-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-736061 cp multinode-736061-m02:/home/docker/cp-test.txt                       | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | multinode-736061:/home/docker/cp-test_multinode-736061-m02_multinode-736061.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-736061 ssh -n                                                                 | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | multinode-736061-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-736061 ssh -n multinode-736061 sudo cat                                       | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | /home/docker/cp-test_multinode-736061-m02_multinode-736061.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-736061 cp multinode-736061-m02:/home/docker/cp-test.txt                       | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | multinode-736061-m03:/home/docker/cp-test_multinode-736061-m02_multinode-736061-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-736061 ssh -n                                                                 | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | multinode-736061-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-736061 ssh -n multinode-736061-m03 sudo cat                                   | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | /home/docker/cp-test_multinode-736061-m02_multinode-736061-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-736061 cp testdata/cp-test.txt                                                | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | multinode-736061-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-736061 ssh -n                                                                 | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | multinode-736061-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-736061 cp multinode-736061-m03:/home/docker/cp-test.txt                       | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1886615299/001/cp-test_multinode-736061-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-736061 ssh -n                                                                 | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | multinode-736061-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-736061 cp multinode-736061-m03:/home/docker/cp-test.txt                       | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | multinode-736061:/home/docker/cp-test_multinode-736061-m03_multinode-736061.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-736061 ssh -n                                                                 | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | multinode-736061-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-736061 ssh -n multinode-736061 sudo cat                                       | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | /home/docker/cp-test_multinode-736061-m03_multinode-736061.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-736061 cp multinode-736061-m03:/home/docker/cp-test.txt                       | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | multinode-736061-m02:/home/docker/cp-test_multinode-736061-m03_multinode-736061-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-736061 ssh -n                                                                 | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | multinode-736061-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-736061 ssh -n multinode-736061-m02 sudo cat                                   | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | /home/docker/cp-test_multinode-736061-m03_multinode-736061-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-736061 node stop m03                                                          | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	| node    | multinode-736061 node start                                                             | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-736061                                                                | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC |                     |
	| stop    | -p multinode-736061                                                                     | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC |                     |
	| start   | -p multinode-736061                                                                     | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:10 UTC | 16 Sep 24 11:14 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-736061                                                                | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:14 UTC |                     |
	| node    | multinode-736061 node delete                                                            | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:14 UTC | 16 Sep 24 11:14 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 11:10:53
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 11:10:53.764405   40135 out.go:345] Setting OutFile to fd 1 ...
	I0916 11:10:53.764697   40135 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:10:53.764708   40135 out.go:358] Setting ErrFile to fd 2...
	I0916 11:10:53.764714   40135 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:10:53.764934   40135 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3851/.minikube/bin
	I0916 11:10:53.765527   40135 out.go:352] Setting JSON to false
	I0916 11:10:53.766415   40135 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3204,"bootTime":1726481850,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 11:10:53.766501   40135 start.go:139] virtualization: kvm guest
	I0916 11:10:53.768975   40135 out.go:177] * [multinode-736061] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 11:10:53.770599   40135 notify.go:220] Checking for updates...
	I0916 11:10:53.770619   40135 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 11:10:53.772102   40135 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 11:10:53.773841   40135 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3851/kubeconfig
	I0916 11:10:53.775207   40135 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 11:10:53.776414   40135 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 11:10:53.777635   40135 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 11:10:53.779515   40135 config.go:182] Loaded profile config "multinode-736061": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:10:53.779637   40135 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 11:10:53.780265   40135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 11:10:53.780320   40135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 11:10:53.800988   40135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44813
	I0916 11:10:53.801446   40135 main.go:141] libmachine: () Calling .GetVersion
	I0916 11:10:53.801971   40135 main.go:141] libmachine: Using API Version  1
	I0916 11:10:53.801999   40135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 11:10:53.802338   40135 main.go:141] libmachine: () Calling .GetMachineName
	I0916 11:10:53.802498   40135 main.go:141] libmachine: (multinode-736061) Calling .DriverName
	I0916 11:10:53.837831   40135 out.go:177] * Using the kvm2 driver based on existing profile
	I0916 11:10:53.839032   40135 start.go:297] selected driver: kvm2
	I0916 11:10:53.839047   40135 start.go:901] validating driver "kvm2" against &{Name:multinode-736061 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.1 ClusterName:multinode-736061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.32 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.215 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.60 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingre
ss-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMir
ror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:10:53.839202   40135 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 11:10:53.839496   40135 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:10:53.839555   40135 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19651-3851/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0916 11:10:53.854668   40135 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0916 11:10:53.855622   40135 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 11:10:53.855664   40135 cni.go:84] Creating CNI manager for ""
	I0916 11:10:53.855731   40135 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0916 11:10:53.855806   40135 start.go:340] cluster config:
	{Name:multinode-736061 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-736061 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.32 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.215 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.60 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false ko
ng:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePat
h: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:10:53.856022   40135 iso.go:125] acquiring lock: {Name:mk8165b793e44378487d96b0de120a258f46e187 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:10:53.857966   40135 out.go:177] * Starting "multinode-736061" primary control-plane node in "multinode-736061" cluster
	I0916 11:10:53.859309   40135 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 11:10:53.859342   40135 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0916 11:10:53.859351   40135 cache.go:56] Caching tarball of preloaded images
	I0916 11:10:53.859419   40135 preload.go:172] Found /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 11:10:53.859428   40135 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 11:10:53.859533   40135 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/config.json ...
	I0916 11:10:53.859726   40135 start.go:360] acquireMachinesLock for multinode-736061: {Name:mk413037138532b69f77e412c3960796c09316f1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 11:10:53.859765   40135 start.go:364] duration metric: took 22.859µs to acquireMachinesLock for "multinode-736061"
	I0916 11:10:53.859779   40135 start.go:96] Skipping create...Using existing machine configuration
	I0916 11:10:53.859786   40135 fix.go:54] fixHost starting: 
	I0916 11:10:53.860046   40135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 11:10:53.860077   40135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 11:10:53.874501   40135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37223
	I0916 11:10:53.874913   40135 main.go:141] libmachine: () Calling .GetVersion
	I0916 11:10:53.875410   40135 main.go:141] libmachine: Using API Version  1
	I0916 11:10:53.875431   40135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 11:10:53.875784   40135 main.go:141] libmachine: () Calling .GetMachineName
	I0916 11:10:53.876057   40135 main.go:141] libmachine: (multinode-736061) Calling .DriverName
	I0916 11:10:53.876221   40135 main.go:141] libmachine: (multinode-736061) Calling .GetState
	I0916 11:10:53.877667   40135 fix.go:112] recreateIfNeeded on multinode-736061: state=Running err=<nil>
	W0916 11:10:53.877684   40135 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 11:10:53.880136   40135 out.go:177] * Updating the running kvm2 "multinode-736061" VM ...
	I0916 11:10:53.881210   40135 machine.go:93] provisionDockerMachine start ...
	I0916 11:10:53.881232   40135 main.go:141] libmachine: (multinode-736061) Calling .DriverName
	I0916 11:10:53.881421   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHHostname
	I0916 11:10:53.883804   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:10:53.884294   40135 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:10:53.884322   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:10:53.884407   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHPort
	I0916 11:10:53.884550   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:10:53.884689   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:10:53.884816   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHUsername
	I0916 11:10:53.884984   40135 main.go:141] libmachine: Using SSH client type: native
	I0916 11:10:53.885237   40135 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0916 11:10:53.885252   40135 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 11:10:54.002517   40135 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-736061
	
	I0916 11:10:54.002554   40135 main.go:141] libmachine: (multinode-736061) Calling .GetMachineName
	I0916 11:10:54.002793   40135 buildroot.go:166] provisioning hostname "multinode-736061"
	I0916 11:10:54.002819   40135 main.go:141] libmachine: (multinode-736061) Calling .GetMachineName
	I0916 11:10:54.003040   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHHostname
	I0916 11:10:54.006032   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:10:54.006431   40135 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:10:54.006466   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:10:54.006567   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHPort
	I0916 11:10:54.006771   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:10:54.006940   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:10:54.007101   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHUsername
	I0916 11:10:54.007282   40135 main.go:141] libmachine: Using SSH client type: native
	I0916 11:10:54.007489   40135 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0916 11:10:54.007510   40135 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-736061 && echo "multinode-736061" | sudo tee /etc/hostname
	I0916 11:10:54.134028   40135 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-736061
	
	I0916 11:10:54.134063   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHHostname
	I0916 11:10:54.136916   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:10:54.137328   40135 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:10:54.137354   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:10:54.137561   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHPort
	I0916 11:10:54.137782   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:10:54.137967   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:10:54.138136   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHUsername
	I0916 11:10:54.138312   40135 main.go:141] libmachine: Using SSH client type: native
	I0916 11:10:54.138554   40135 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0916 11:10:54.138581   40135 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-736061' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-736061/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-736061' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 11:10:54.254218   40135 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 11:10:54.254244   40135 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3851/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3851/.minikube}
	I0916 11:10:54.254262   40135 buildroot.go:174] setting up certificates
	I0916 11:10:54.254271   40135 provision.go:84] configureAuth start
	I0916 11:10:54.254279   40135 main.go:141] libmachine: (multinode-736061) Calling .GetMachineName
	I0916 11:10:54.254544   40135 main.go:141] libmachine: (multinode-736061) Calling .GetIP
	I0916 11:10:54.256878   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:10:54.257288   40135 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:10:54.257330   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:10:54.257423   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHHostname
	I0916 11:10:54.259620   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:10:54.259953   40135 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:10:54.259972   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:10:54.260142   40135 provision.go:143] copyHostCerts
	I0916 11:10:54.260180   40135 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem
	I0916 11:10:54.260205   40135 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem, removing ...
	I0916 11:10:54.260213   40135 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem
	I0916 11:10:54.260282   40135 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem (1123 bytes)
	I0916 11:10:54.260354   40135 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem
	I0916 11:10:54.260374   40135 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem, removing ...
	I0916 11:10:54.260383   40135 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem
	I0916 11:10:54.260419   40135 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem (1679 bytes)
	I0916 11:10:54.260483   40135 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem
	I0916 11:10:54.260506   40135 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem, removing ...
	I0916 11:10:54.260513   40135 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem
	I0916 11:10:54.260536   40135 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem (1082 bytes)
	I0916 11:10:54.260618   40135 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem org=jenkins.multinode-736061 san=[127.0.0.1 192.168.39.32 localhost minikube multinode-736061]
	I0916 11:10:54.392345   40135 provision.go:177] copyRemoteCerts
	I0916 11:10:54.392409   40135 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 11:10:54.392437   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHHostname
	I0916 11:10:54.394792   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:10:54.395075   40135 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:10:54.395103   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:10:54.395239   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHPort
	I0916 11:10:54.395432   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:10:54.395580   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHUsername
	I0916 11:10:54.395718   40135 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061/id_rsa Username:docker}
	I0916 11:10:54.480886   40135 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 11:10:54.480971   40135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 11:10:54.507550   40135 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 11:10:54.507629   40135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0916 11:10:54.534283   40135 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 11:10:54.534359   40135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 11:10:54.560933   40135 provision.go:87] duration metric: took 306.650302ms to configureAuth
	I0916 11:10:54.560963   40135 buildroot.go:189] setting minikube options for container-runtime
	I0916 11:10:54.561214   40135 config.go:182] Loaded profile config "multinode-736061": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:10:54.561286   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHHostname
	I0916 11:10:54.564044   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:10:54.564377   40135 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:10:54.564402   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:10:54.564575   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHPort
	I0916 11:10:54.564740   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:10:54.564908   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:10:54.565050   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHUsername
	I0916 11:10:54.565204   40135 main.go:141] libmachine: Using SSH client type: native
	I0916 11:10:54.565427   40135 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0916 11:10:54.565450   40135 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 11:12:25.365214   40135 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 11:12:25.365240   40135 machine.go:96] duration metric: took 1m31.484014406s to provisionDockerMachine
	I0916 11:12:25.365255   40135 start.go:293] postStartSetup for "multinode-736061" (driver="kvm2")
	I0916 11:12:25.365269   40135 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 11:12:25.365291   40135 main.go:141] libmachine: (multinode-736061) Calling .DriverName
	I0916 11:12:25.365801   40135 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 11:12:25.365839   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHHostname
	I0916 11:12:25.369181   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:12:25.369666   40135 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:12:25.369698   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:12:25.369949   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHPort
	I0916 11:12:25.370163   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:12:25.370371   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHUsername
	I0916 11:12:25.370519   40135 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061/id_rsa Username:docker}
	I0916 11:12:25.457301   40135 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 11:12:25.461731   40135 command_runner.go:130] > NAME=Buildroot
	I0916 11:12:25.461752   40135 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0916 11:12:25.461757   40135 command_runner.go:130] > ID=buildroot
	I0916 11:12:25.461762   40135 command_runner.go:130] > VERSION_ID=2023.02.9
	I0916 11:12:25.461767   40135 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0916 11:12:25.461812   40135 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 11:12:25.461826   40135 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3851/.minikube/addons for local assets ...
	I0916 11:12:25.461899   40135 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3851/.minikube/files for local assets ...
	I0916 11:12:25.461981   40135 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem -> 112032.pem in /etc/ssl/certs
	I0916 11:12:25.461992   40135 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem -> /etc/ssl/certs/112032.pem
	I0916 11:12:25.462072   40135 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 11:12:25.472346   40135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem --> /etc/ssl/certs/112032.pem (1708 bytes)
	I0916 11:12:25.497363   40135 start.go:296] duration metric: took 132.094435ms for postStartSetup
	I0916 11:12:25.497437   40135 fix.go:56] duration metric: took 1m31.637627262s for fixHost
	I0916 11:12:25.497463   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHHostname
	I0916 11:12:25.500226   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:12:25.500581   40135 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:12:25.500610   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:12:25.500790   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHPort
	I0916 11:12:25.500971   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:12:25.501144   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:12:25.501372   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHUsername
	I0916 11:12:25.501535   40135 main.go:141] libmachine: Using SSH client type: native
	I0916 11:12:25.501715   40135 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0916 11:12:25.501724   40135 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 11:12:25.609971   40135 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726485145.588914028
	
	I0916 11:12:25.609991   40135 fix.go:216] guest clock: 1726485145.588914028
	I0916 11:12:25.609998   40135 fix.go:229] Guest: 2024-09-16 11:12:25.588914028 +0000 UTC Remote: 2024-09-16 11:12:25.497444489 +0000 UTC m=+91.767542385 (delta=91.469539ms)
	I0916 11:12:25.610017   40135 fix.go:200] guest clock delta is within tolerance: 91.469539ms
	I0916 11:12:25.610022   40135 start.go:83] releasing machines lock for "multinode-736061", held for 1m31.750248345s
	I0916 11:12:25.610039   40135 main.go:141] libmachine: (multinode-736061) Calling .DriverName
	I0916 11:12:25.610285   40135 main.go:141] libmachine: (multinode-736061) Calling .GetIP
	I0916 11:12:25.613333   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:12:25.613834   40135 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:12:25.613871   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:12:25.614019   40135 main.go:141] libmachine: (multinode-736061) Calling .DriverName
	I0916 11:12:25.614475   40135 main.go:141] libmachine: (multinode-736061) Calling .DriverName
	I0916 11:12:25.614637   40135 main.go:141] libmachine: (multinode-736061) Calling .DriverName
	I0916 11:12:25.614712   40135 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 11:12:25.614767   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHHostname
	I0916 11:12:25.614820   40135 ssh_runner.go:195] Run: cat /version.json
	I0916 11:12:25.614838   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHHostname
	I0916 11:12:25.617271   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:12:25.617637   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:12:25.617681   40135 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:12:25.617697   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:12:25.617822   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHPort
	I0916 11:12:25.617976   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:12:25.618123   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHUsername
	I0916 11:12:25.618147   40135 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:12:25.618163   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:12:25.618311   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHPort
	I0916 11:12:25.618338   40135 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061/id_rsa Username:docker}
	I0916 11:12:25.618453   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:12:25.618578   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHUsername
	I0916 11:12:25.618694   40135 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061/id_rsa Username:docker}
	I0916 11:12:25.726440   40135 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0916 11:12:25.727099   40135 command_runner.go:130] > {"iso_version": "v1.34.0-1726415472-19646", "kicbase_version": "v0.0.45-1726358845-19644", "minikube_version": "v1.34.0", "commit": "7dc55c0008a982396eb57879cd4eab23ab96531e"}
	I0916 11:12:25.727256   40135 ssh_runner.go:195] Run: systemctl --version
	I0916 11:12:25.733715   40135 command_runner.go:130] > systemd 252 (252)
	I0916 11:12:25.733759   40135 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0916 11:12:25.733826   40135 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 11:12:25.889015   40135 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 11:12:25.896686   40135 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0916 11:12:25.897147   40135 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 11:12:25.897213   40135 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 11:12:25.906774   40135 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0916 11:12:25.906798   40135 start.go:495] detecting cgroup driver to use...
	I0916 11:12:25.906866   40135 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 11:12:25.924150   40135 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 11:12:25.938696   40135 docker.go:217] disabling cri-docker service (if available) ...
	I0916 11:12:25.938749   40135 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 11:12:25.952927   40135 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 11:12:25.967295   40135 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 11:12:26.111243   40135 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 11:12:26.252238   40135 docker.go:233] disabling docker service ...
	I0916 11:12:26.252310   40135 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 11:12:26.269485   40135 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 11:12:26.283580   40135 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 11:12:26.423452   40135 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 11:12:26.564033   40135 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 11:12:26.578149   40135 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 11:12:26.597842   40135 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0916 11:12:26.597888   40135 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 11:12:26.597941   40135 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:12:26.608772   40135 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 11:12:26.608829   40135 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:12:26.620194   40135 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:12:26.631946   40135 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:12:26.642904   40135 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 11:12:26.653934   40135 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:12:26.664685   40135 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:12:26.676602   40135 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
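(Editor's sketch: taken together, the sed commands above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the drop-in below. This is reconstructed from the commands in this log, not shown in the test output; the [crio.image]/[crio.runtime] section placement is assumed from CRI-O's standard configuration layout.)

    # values set by the sed invocations logged above; section headers are assumed
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]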
	I0916 11:12:26.687924   40135 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 11:12:26.698235   40135 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0916 11:12:26.698315   40135 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 11:12:26.708091   40135 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:12:26.843091   40135 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 11:12:27.073301   40135 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 11:12:27.073360   40135 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 11:12:27.078455   40135 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0916 11:12:27.078472   40135 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0916 11:12:27.078478   40135 command_runner.go:130] > Device: 0,22	Inode: 1304        Links: 1
	I0916 11:12:27.078485   40135 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 11:12:27.078490   40135 command_runner.go:130] > Access: 2024-09-16 11:12:26.940714941 +0000
	I0916 11:12:27.078504   40135 command_runner.go:130] > Modify: 2024-09-16 11:12:26.940714941 +0000
	I0916 11:12:27.078510   40135 command_runner.go:130] > Change: 2024-09-16 11:12:26.940714941 +0000
	I0916 11:12:27.078517   40135 command_runner.go:130] >  Birth: -
	I0916 11:12:27.078806   40135 start.go:563] Will wait 60s for crictl version
	I0916 11:12:27.078852   40135 ssh_runner.go:195] Run: which crictl
	I0916 11:12:27.082760   40135 command_runner.go:130] > /usr/bin/crictl
	I0916 11:12:27.082812   40135 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 11:12:27.121054   40135 command_runner.go:130] > Version:  0.1.0
	I0916 11:12:27.121076   40135 command_runner.go:130] > RuntimeName:  cri-o
	I0916 11:12:27.121081   40135 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0916 11:12:27.121086   40135 command_runner.go:130] > RuntimeApiVersion:  v1
	I0916 11:12:27.121338   40135 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0916 11:12:27.121408   40135 ssh_runner.go:195] Run: crio --version
	I0916 11:12:27.151162   40135 command_runner.go:130] > crio version 1.29.1
	I0916 11:12:27.151185   40135 command_runner.go:130] > Version:        1.29.1
	I0916 11:12:27.151194   40135 command_runner.go:130] > GitCommit:      unknown
	I0916 11:12:27.151201   40135 command_runner.go:130] > GitCommitDate:  unknown
	I0916 11:12:27.151206   40135 command_runner.go:130] > GitTreeState:   clean
	I0916 11:12:27.151214   40135 command_runner.go:130] > BuildDate:      2024-09-15T21:21:56Z
	I0916 11:12:27.151221   40135 command_runner.go:130] > GoVersion:      go1.21.6
	I0916 11:12:27.151227   40135 command_runner.go:130] > Compiler:       gc
	I0916 11:12:27.151233   40135 command_runner.go:130] > Platform:       linux/amd64
	I0916 11:12:27.151239   40135 command_runner.go:130] > Linkmode:       dynamic
	I0916 11:12:27.151249   40135 command_runner.go:130] > BuildTags:      
	I0916 11:12:27.151260   40135 command_runner.go:130] >   containers_image_ostree_stub
	I0916 11:12:27.151266   40135 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0916 11:12:27.151273   40135 command_runner.go:130] >   btrfs_noversion
	I0916 11:12:27.151280   40135 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0916 11:12:27.151289   40135 command_runner.go:130] >   libdm_no_deferred_remove
	I0916 11:12:27.151295   40135 command_runner.go:130] >   seccomp
	I0916 11:12:27.151304   40135 command_runner.go:130] > LDFlags:          unknown
	I0916 11:12:27.151310   40135 command_runner.go:130] > SeccompEnabled:   true
	I0916 11:12:27.151321   40135 command_runner.go:130] > AppArmorEnabled:  false
	I0916 11:12:27.151405   40135 ssh_runner.go:195] Run: crio --version
	I0916 11:12:27.181636   40135 command_runner.go:130] > crio version 1.29.1
	I0916 11:12:27.181664   40135 command_runner.go:130] > Version:        1.29.1
	I0916 11:12:27.181673   40135 command_runner.go:130] > GitCommit:      unknown
	I0916 11:12:27.181679   40135 command_runner.go:130] > GitCommitDate:  unknown
	I0916 11:12:27.181687   40135 command_runner.go:130] > GitTreeState:   clean
	I0916 11:12:27.181696   40135 command_runner.go:130] > BuildDate:      2024-09-15T21:21:56Z
	I0916 11:12:27.181702   40135 command_runner.go:130] > GoVersion:      go1.21.6
	I0916 11:12:27.181708   40135 command_runner.go:130] > Compiler:       gc
	I0916 11:12:27.181715   40135 command_runner.go:130] > Platform:       linux/amd64
	I0916 11:12:27.181722   40135 command_runner.go:130] > Linkmode:       dynamic
	I0916 11:12:27.181728   40135 command_runner.go:130] > BuildTags:      
	I0916 11:12:27.181736   40135 command_runner.go:130] >   containers_image_ostree_stub
	I0916 11:12:27.181742   40135 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0916 11:12:27.181752   40135 command_runner.go:130] >   btrfs_noversion
	I0916 11:12:27.181763   40135 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0916 11:12:27.181770   40135 command_runner.go:130] >   libdm_no_deferred_remove
	I0916 11:12:27.181778   40135 command_runner.go:130] >   seccomp
	I0916 11:12:27.181786   40135 command_runner.go:130] > LDFlags:          unknown
	I0916 11:12:27.181796   40135 command_runner.go:130] > SeccompEnabled:   true
	I0916 11:12:27.181802   40135 command_runner.go:130] > AppArmorEnabled:  false
	I0916 11:12:27.183887   40135 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0916 11:12:27.185243   40135 main.go:141] libmachine: (multinode-736061) Calling .GetIP
	I0916 11:12:27.187794   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:12:27.188123   40135 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:12:27.188146   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:12:27.188367   40135 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0916 11:12:27.192571   40135 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0916 11:12:27.192739   40135 kubeadm.go:883] updating cluster {Name:multinode-736061 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-736061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.32 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.215 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.60 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 11:12:27.192900   40135 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 11:12:27.192958   40135 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:12:27.238779   40135 command_runner.go:130] > {
	I0916 11:12:27.238813   40135 command_runner.go:130] >   "images": [
	I0916 11:12:27.238818   40135 command_runner.go:130] >     {
	I0916 11:12:27.238825   40135 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0916 11:12:27.238830   40135 command_runner.go:130] >       "repoTags": [
	I0916 11:12:27.238836   40135 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0916 11:12:27.238839   40135 command_runner.go:130] >       ],
	I0916 11:12:27.238844   40135 command_runner.go:130] >       "repoDigests": [
	I0916 11:12:27.238852   40135 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0916 11:12:27.238859   40135 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0916 11:12:27.238863   40135 command_runner.go:130] >       ],
	I0916 11:12:27.238870   40135 command_runner.go:130] >       "size": "87190579",
	I0916 11:12:27.238877   40135 command_runner.go:130] >       "uid": null,
	I0916 11:12:27.238884   40135 command_runner.go:130] >       "username": "",
	I0916 11:12:27.238893   40135 command_runner.go:130] >       "spec": null,
	I0916 11:12:27.238907   40135 command_runner.go:130] >       "pinned": false
	I0916 11:12:27.238911   40135 command_runner.go:130] >     },
	I0916 11:12:27.238915   40135 command_runner.go:130] >     {
	I0916 11:12:27.238921   40135 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0916 11:12:27.238926   40135 command_runner.go:130] >       "repoTags": [
	I0916 11:12:27.238931   40135 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0916 11:12:27.238935   40135 command_runner.go:130] >       ],
	I0916 11:12:27.238939   40135 command_runner.go:130] >       "repoDigests": [
	I0916 11:12:27.238947   40135 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0916 11:12:27.238958   40135 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0916 11:12:27.238969   40135 command_runner.go:130] >       ],
	I0916 11:12:27.238976   40135 command_runner.go:130] >       "size": "1363676",
	I0916 11:12:27.238982   40135 command_runner.go:130] >       "uid": null,
	I0916 11:12:27.238991   40135 command_runner.go:130] >       "username": "",
	I0916 11:12:27.239000   40135 command_runner.go:130] >       "spec": null,
	I0916 11:12:27.239006   40135 command_runner.go:130] >       "pinned": false
	I0916 11:12:27.239012   40135 command_runner.go:130] >     },
	I0916 11:12:27.239019   40135 command_runner.go:130] >     {
	I0916 11:12:27.239025   40135 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0916 11:12:27.239029   40135 command_runner.go:130] >       "repoTags": [
	I0916 11:12:27.239034   40135 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0916 11:12:27.239041   40135 command_runner.go:130] >       ],
	I0916 11:12:27.239047   40135 command_runner.go:130] >       "repoDigests": [
	I0916 11:12:27.239063   40135 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0916 11:12:27.239078   40135 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0916 11:12:27.239087   40135 command_runner.go:130] >       ],
	I0916 11:12:27.239093   40135 command_runner.go:130] >       "size": "31470524",
	I0916 11:12:27.239103   40135 command_runner.go:130] >       "uid": null,
	I0916 11:12:27.239109   40135 command_runner.go:130] >       "username": "",
	I0916 11:12:27.239116   40135 command_runner.go:130] >       "spec": null,
	I0916 11:12:27.239121   40135 command_runner.go:130] >       "pinned": false
	I0916 11:12:27.239129   40135 command_runner.go:130] >     },
	I0916 11:12:27.239135   40135 command_runner.go:130] >     {
	I0916 11:12:27.239149   40135 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0916 11:12:27.239158   40135 command_runner.go:130] >       "repoTags": [
	I0916 11:12:27.239168   40135 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0916 11:12:27.239176   40135 command_runner.go:130] >       ],
	I0916 11:12:27.239183   40135 command_runner.go:130] >       "repoDigests": [
	I0916 11:12:27.239196   40135 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0916 11:12:27.239213   40135 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0916 11:12:27.239222   40135 command_runner.go:130] >       ],
	I0916 11:12:27.239229   40135 command_runner.go:130] >       "size": "63273227",
	I0916 11:12:27.239238   40135 command_runner.go:130] >       "uid": null,
	I0916 11:12:27.239245   40135 command_runner.go:130] >       "username": "nonroot",
	I0916 11:12:27.239254   40135 command_runner.go:130] >       "spec": null,
	I0916 11:12:27.239264   40135 command_runner.go:130] >       "pinned": false
	I0916 11:12:27.239272   40135 command_runner.go:130] >     },
	I0916 11:12:27.239277   40135 command_runner.go:130] >     {
	I0916 11:12:27.239286   40135 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0916 11:12:27.239291   40135 command_runner.go:130] >       "repoTags": [
	I0916 11:12:27.239300   40135 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0916 11:12:27.239309   40135 command_runner.go:130] >       ],
	I0916 11:12:27.239316   40135 command_runner.go:130] >       "repoDigests": [
	I0916 11:12:27.239329   40135 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0916 11:12:27.239343   40135 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0916 11:12:27.239351   40135 command_runner.go:130] >       ],
	I0916 11:12:27.239358   40135 command_runner.go:130] >       "size": "149009664",
	I0916 11:12:27.239366   40135 command_runner.go:130] >       "uid": {
	I0916 11:12:27.239370   40135 command_runner.go:130] >         "value": "0"
	I0916 11:12:27.239375   40135 command_runner.go:130] >       },
	I0916 11:12:27.239381   40135 command_runner.go:130] >       "username": "",
	I0916 11:12:27.239390   40135 command_runner.go:130] >       "spec": null,
	I0916 11:12:27.239397   40135 command_runner.go:130] >       "pinned": false
	I0916 11:12:27.239404   40135 command_runner.go:130] >     },
	I0916 11:12:27.239409   40135 command_runner.go:130] >     {
	I0916 11:12:27.239420   40135 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0916 11:12:27.239430   40135 command_runner.go:130] >       "repoTags": [
	I0916 11:12:27.239438   40135 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0916 11:12:27.239447   40135 command_runner.go:130] >       ],
	I0916 11:12:27.239452   40135 command_runner.go:130] >       "repoDigests": [
	I0916 11:12:27.239463   40135 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0916 11:12:27.239475   40135 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0916 11:12:27.239484   40135 command_runner.go:130] >       ],
	I0916 11:12:27.239493   40135 command_runner.go:130] >       "size": "95237600",
	I0916 11:12:27.239502   40135 command_runner.go:130] >       "uid": {
	I0916 11:12:27.239508   40135 command_runner.go:130] >         "value": "0"
	I0916 11:12:27.239516   40135 command_runner.go:130] >       },
	I0916 11:12:27.239524   40135 command_runner.go:130] >       "username": "",
	I0916 11:12:27.239532   40135 command_runner.go:130] >       "spec": null,
	I0916 11:12:27.239538   40135 command_runner.go:130] >       "pinned": false
	I0916 11:12:27.239545   40135 command_runner.go:130] >     },
	I0916 11:12:27.239550   40135 command_runner.go:130] >     {
	I0916 11:12:27.239562   40135 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0916 11:12:27.239571   40135 command_runner.go:130] >       "repoTags": [
	I0916 11:12:27.239580   40135 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0916 11:12:27.239589   40135 command_runner.go:130] >       ],
	I0916 11:12:27.239596   40135 command_runner.go:130] >       "repoDigests": [
	I0916 11:12:27.239611   40135 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0916 11:12:27.239627   40135 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0916 11:12:27.239635   40135 command_runner.go:130] >       ],
	I0916 11:12:27.239639   40135 command_runner.go:130] >       "size": "89437508",
	I0916 11:12:27.239644   40135 command_runner.go:130] >       "uid": {
	I0916 11:12:27.239651   40135 command_runner.go:130] >         "value": "0"
	I0916 11:12:27.239658   40135 command_runner.go:130] >       },
	I0916 11:12:27.239665   40135 command_runner.go:130] >       "username": "",
	I0916 11:12:27.239674   40135 command_runner.go:130] >       "spec": null,
	I0916 11:12:27.239681   40135 command_runner.go:130] >       "pinned": false
	I0916 11:12:27.239689   40135 command_runner.go:130] >     },
	I0916 11:12:27.239695   40135 command_runner.go:130] >     {
	I0916 11:12:27.239709   40135 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0916 11:12:27.239716   40135 command_runner.go:130] >       "repoTags": [
	I0916 11:12:27.239724   40135 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0916 11:12:27.239728   40135 command_runner.go:130] >       ],
	I0916 11:12:27.239735   40135 command_runner.go:130] >       "repoDigests": [
	I0916 11:12:27.239758   40135 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0916 11:12:27.239773   40135 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0916 11:12:27.239779   40135 command_runner.go:130] >       ],
	I0916 11:12:27.239790   40135 command_runner.go:130] >       "size": "92733849",
	I0916 11:12:27.239799   40135 command_runner.go:130] >       "uid": null,
	I0916 11:12:27.239806   40135 command_runner.go:130] >       "username": "",
	I0916 11:12:27.239810   40135 command_runner.go:130] >       "spec": null,
	I0916 11:12:27.239815   40135 command_runner.go:130] >       "pinned": false
	I0916 11:12:27.239822   40135 command_runner.go:130] >     },
	I0916 11:12:27.239826   40135 command_runner.go:130] >     {
	I0916 11:12:27.239836   40135 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0916 11:12:27.239842   40135 command_runner.go:130] >       "repoTags": [
	I0916 11:12:27.239848   40135 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0916 11:12:27.239854   40135 command_runner.go:130] >       ],
	I0916 11:12:27.239860   40135 command_runner.go:130] >       "repoDigests": [
	I0916 11:12:27.239871   40135 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0916 11:12:27.239883   40135 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0916 11:12:27.239889   40135 command_runner.go:130] >       ],
	I0916 11:12:27.239895   40135 command_runner.go:130] >       "size": "68420934",
	I0916 11:12:27.239904   40135 command_runner.go:130] >       "uid": {
	I0916 11:12:27.239910   40135 command_runner.go:130] >         "value": "0"
	I0916 11:12:27.239918   40135 command_runner.go:130] >       },
	I0916 11:12:27.239922   40135 command_runner.go:130] >       "username": "",
	I0916 11:12:27.239928   40135 command_runner.go:130] >       "spec": null,
	I0916 11:12:27.239937   40135 command_runner.go:130] >       "pinned": false
	I0916 11:12:27.239946   40135 command_runner.go:130] >     },
	I0916 11:12:27.239954   40135 command_runner.go:130] >     {
	I0916 11:12:27.239967   40135 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0916 11:12:27.239978   40135 command_runner.go:130] >       "repoTags": [
	I0916 11:12:27.239988   40135 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0916 11:12:27.239997   40135 command_runner.go:130] >       ],
	I0916 11:12:27.240004   40135 command_runner.go:130] >       "repoDigests": [
	I0916 11:12:27.240013   40135 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0916 11:12:27.240027   40135 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0916 11:12:27.240036   40135 command_runner.go:130] >       ],
	I0916 11:12:27.240046   40135 command_runner.go:130] >       "size": "742080",
	I0916 11:12:27.240054   40135 command_runner.go:130] >       "uid": {
	I0916 11:12:27.240063   40135 command_runner.go:130] >         "value": "65535"
	I0916 11:12:27.240071   40135 command_runner.go:130] >       },
	I0916 11:12:27.240079   40135 command_runner.go:130] >       "username": "",
	I0916 11:12:27.240087   40135 command_runner.go:130] >       "spec": null,
	I0916 11:12:27.240091   40135 command_runner.go:130] >       "pinned": true
	I0916 11:12:27.240097   40135 command_runner.go:130] >     }
	I0916 11:12:27.240102   40135 command_runner.go:130] >   ]
	I0916 11:12:27.240109   40135 command_runner.go:130] > }
	I0916 11:12:27.240330   40135 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 11:12:27.240345   40135 crio.go:433] Images already preloaded, skipping extraction
	I0916 11:12:27.240399   40135 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:12:27.285112   40135 command_runner.go:130] > {
	I0916 11:12:27.285150   40135 command_runner.go:130] >   "images": [
	I0916 11:12:27.285157   40135 command_runner.go:130] >     {
	I0916 11:12:27.285170   40135 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0916 11:12:27.285177   40135 command_runner.go:130] >       "repoTags": [
	I0916 11:12:27.285185   40135 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0916 11:12:27.285190   40135 command_runner.go:130] >       ],
	I0916 11:12:27.285197   40135 command_runner.go:130] >       "repoDigests": [
	I0916 11:12:27.285211   40135 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0916 11:12:27.285224   40135 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0916 11:12:27.285229   40135 command_runner.go:130] >       ],
	I0916 11:12:27.285240   40135 command_runner.go:130] >       "size": "87190579",
	I0916 11:12:27.285250   40135 command_runner.go:130] >       "uid": null,
	I0916 11:12:27.285257   40135 command_runner.go:130] >       "username": "",
	I0916 11:12:27.285271   40135 command_runner.go:130] >       "spec": null,
	I0916 11:12:27.285279   40135 command_runner.go:130] >       "pinned": false
	I0916 11:12:27.285283   40135 command_runner.go:130] >     },
	I0916 11:12:27.285288   40135 command_runner.go:130] >     {
	I0916 11:12:27.285301   40135 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0916 11:12:27.285308   40135 command_runner.go:130] >       "repoTags": [
	I0916 11:12:27.285319   40135 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0916 11:12:27.285331   40135 command_runner.go:130] >       ],
	I0916 11:12:27.285341   40135 command_runner.go:130] >       "repoDigests": [
	I0916 11:12:27.285356   40135 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0916 11:12:27.285367   40135 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0916 11:12:27.285374   40135 command_runner.go:130] >       ],
	I0916 11:12:27.285381   40135 command_runner.go:130] >       "size": "1363676",
	I0916 11:12:27.285389   40135 command_runner.go:130] >       "uid": null,
	I0916 11:12:27.285399   40135 command_runner.go:130] >       "username": "",
	I0916 11:12:27.285407   40135 command_runner.go:130] >       "spec": null,
	I0916 11:12:27.285414   40135 command_runner.go:130] >       "pinned": false
	I0916 11:12:27.285423   40135 command_runner.go:130] >     },
	I0916 11:12:27.285428   40135 command_runner.go:130] >     {
	I0916 11:12:27.285441   40135 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0916 11:12:27.285450   40135 command_runner.go:130] >       "repoTags": [
	I0916 11:12:27.285460   40135 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0916 11:12:27.285467   40135 command_runner.go:130] >       ],
	I0916 11:12:27.285472   40135 command_runner.go:130] >       "repoDigests": [
	I0916 11:12:27.285480   40135 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0916 11:12:27.285490   40135 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0916 11:12:27.285496   40135 command_runner.go:130] >       ],
	I0916 11:12:27.285500   40135 command_runner.go:130] >       "size": "31470524",
	I0916 11:12:27.285506   40135 command_runner.go:130] >       "uid": null,
	I0916 11:12:27.285510   40135 command_runner.go:130] >       "username": "",
	I0916 11:12:27.285515   40135 command_runner.go:130] >       "spec": null,
	I0916 11:12:27.285521   40135 command_runner.go:130] >       "pinned": false
	I0916 11:12:27.285524   40135 command_runner.go:130] >     },
	I0916 11:12:27.285528   40135 command_runner.go:130] >     {
	I0916 11:12:27.285534   40135 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0916 11:12:27.285540   40135 command_runner.go:130] >       "repoTags": [
	I0916 11:12:27.285547   40135 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0916 11:12:27.285552   40135 command_runner.go:130] >       ],
	I0916 11:12:27.285556   40135 command_runner.go:130] >       "repoDigests": [
	I0916 11:12:27.285563   40135 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0916 11:12:27.285577   40135 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0916 11:12:27.285582   40135 command_runner.go:130] >       ],
	I0916 11:12:27.285586   40135 command_runner.go:130] >       "size": "63273227",
	I0916 11:12:27.285591   40135 command_runner.go:130] >       "uid": null,
	I0916 11:12:27.285596   40135 command_runner.go:130] >       "username": "nonroot",
	I0916 11:12:27.285602   40135 command_runner.go:130] >       "spec": null,
	I0916 11:12:27.285606   40135 command_runner.go:130] >       "pinned": false
	I0916 11:12:27.285610   40135 command_runner.go:130] >     },
	I0916 11:12:27.285613   40135 command_runner.go:130] >     {
	I0916 11:12:27.285619   40135 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0916 11:12:27.285624   40135 command_runner.go:130] >       "repoTags": [
	I0916 11:12:27.285628   40135 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0916 11:12:27.285631   40135 command_runner.go:130] >       ],
	I0916 11:12:27.285635   40135 command_runner.go:130] >       "repoDigests": [
	I0916 11:12:27.285644   40135 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0916 11:12:27.285651   40135 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0916 11:12:27.285656   40135 command_runner.go:130] >       ],
	I0916 11:12:27.285661   40135 command_runner.go:130] >       "size": "149009664",
	I0916 11:12:27.285664   40135 command_runner.go:130] >       "uid": {
	I0916 11:12:27.285668   40135 command_runner.go:130] >         "value": "0"
	I0916 11:12:27.285671   40135 command_runner.go:130] >       },
	I0916 11:12:27.285675   40135 command_runner.go:130] >       "username": "",
	I0916 11:12:27.285680   40135 command_runner.go:130] >       "spec": null,
	I0916 11:12:27.285685   40135 command_runner.go:130] >       "pinned": false
	I0916 11:12:27.285689   40135 command_runner.go:130] >     },
	I0916 11:12:27.285692   40135 command_runner.go:130] >     {
	I0916 11:12:27.285698   40135 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0916 11:12:27.285704   40135 command_runner.go:130] >       "repoTags": [
	I0916 11:12:27.285709   40135 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0916 11:12:27.285712   40135 command_runner.go:130] >       ],
	I0916 11:12:27.285716   40135 command_runner.go:130] >       "repoDigests": [
	I0916 11:12:27.285723   40135 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0916 11:12:27.285731   40135 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0916 11:12:27.285737   40135 command_runner.go:130] >       ],
	I0916 11:12:27.285741   40135 command_runner.go:130] >       "size": "95237600",
	I0916 11:12:27.285745   40135 command_runner.go:130] >       "uid": {
	I0916 11:12:27.285749   40135 command_runner.go:130] >         "value": "0"
	I0916 11:12:27.285752   40135 command_runner.go:130] >       },
	I0916 11:12:27.285756   40135 command_runner.go:130] >       "username": "",
	I0916 11:12:27.285760   40135 command_runner.go:130] >       "spec": null,
	I0916 11:12:27.285764   40135 command_runner.go:130] >       "pinned": false
	I0916 11:12:27.285767   40135 command_runner.go:130] >     },
	I0916 11:12:27.285771   40135 command_runner.go:130] >     {
	I0916 11:12:27.285777   40135 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0916 11:12:27.285781   40135 command_runner.go:130] >       "repoTags": [
	I0916 11:12:27.285787   40135 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0916 11:12:27.285796   40135 command_runner.go:130] >       ],
	I0916 11:12:27.285800   40135 command_runner.go:130] >       "repoDigests": [
	I0916 11:12:27.285808   40135 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0916 11:12:27.285816   40135 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0916 11:12:27.285821   40135 command_runner.go:130] >       ],
	I0916 11:12:27.285825   40135 command_runner.go:130] >       "size": "89437508",
	I0916 11:12:27.285829   40135 command_runner.go:130] >       "uid": {
	I0916 11:12:27.285835   40135 command_runner.go:130] >         "value": "0"
	I0916 11:12:27.285839   40135 command_runner.go:130] >       },
	I0916 11:12:27.285843   40135 command_runner.go:130] >       "username": "",
	I0916 11:12:27.285847   40135 command_runner.go:130] >       "spec": null,
	I0916 11:12:27.285851   40135 command_runner.go:130] >       "pinned": false
	I0916 11:12:27.285854   40135 command_runner.go:130] >     },
	I0916 11:12:27.285857   40135 command_runner.go:130] >     {
	I0916 11:12:27.285865   40135 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0916 11:12:27.285869   40135 command_runner.go:130] >       "repoTags": [
	I0916 11:12:27.285875   40135 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0916 11:12:27.285878   40135 command_runner.go:130] >       ],
	I0916 11:12:27.285882   40135 command_runner.go:130] >       "repoDigests": [
	I0916 11:12:27.285904   40135 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0916 11:12:27.285914   40135 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0916 11:12:27.285918   40135 command_runner.go:130] >       ],
	I0916 11:12:27.285923   40135 command_runner.go:130] >       "size": "92733849",
	I0916 11:12:27.285926   40135 command_runner.go:130] >       "uid": null,
	I0916 11:12:27.285930   40135 command_runner.go:130] >       "username": "",
	I0916 11:12:27.285934   40135 command_runner.go:130] >       "spec": null,
	I0916 11:12:27.285938   40135 command_runner.go:130] >       "pinned": false
	I0916 11:12:27.285941   40135 command_runner.go:130] >     },
	I0916 11:12:27.285944   40135 command_runner.go:130] >     {
	I0916 11:12:27.285951   40135 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0916 11:12:27.285956   40135 command_runner.go:130] >       "repoTags": [
	I0916 11:12:27.285961   40135 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0916 11:12:27.285964   40135 command_runner.go:130] >       ],
	I0916 11:12:27.285968   40135 command_runner.go:130] >       "repoDigests": [
	I0916 11:12:27.285975   40135 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0916 11:12:27.285984   40135 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0916 11:12:27.285987   40135 command_runner.go:130] >       ],
	I0916 11:12:27.285992   40135 command_runner.go:130] >       "size": "68420934",
	I0916 11:12:27.285998   40135 command_runner.go:130] >       "uid": {
	I0916 11:12:27.286002   40135 command_runner.go:130] >         "value": "0"
	I0916 11:12:27.286005   40135 command_runner.go:130] >       },
	I0916 11:12:27.286009   40135 command_runner.go:130] >       "username": "",
	I0916 11:12:27.286013   40135 command_runner.go:130] >       "spec": null,
	I0916 11:12:27.286017   40135 command_runner.go:130] >       "pinned": false
	I0916 11:12:27.286022   40135 command_runner.go:130] >     },
	I0916 11:12:27.286027   40135 command_runner.go:130] >     {
	I0916 11:12:27.286033   40135 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0916 11:12:27.286040   40135 command_runner.go:130] >       "repoTags": [
	I0916 11:12:27.286044   40135 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0916 11:12:27.286050   40135 command_runner.go:130] >       ],
	I0916 11:12:27.286054   40135 command_runner.go:130] >       "repoDigests": [
	I0916 11:12:27.286061   40135 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0916 11:12:27.286069   40135 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0916 11:12:27.286074   40135 command_runner.go:130] >       ],
	I0916 11:12:27.286080   40135 command_runner.go:130] >       "size": "742080",
	I0916 11:12:27.286084   40135 command_runner.go:130] >       "uid": {
	I0916 11:12:27.286090   40135 command_runner.go:130] >         "value": "65535"
	I0916 11:12:27.286094   40135 command_runner.go:130] >       },
	I0916 11:12:27.286098   40135 command_runner.go:130] >       "username": "",
	I0916 11:12:27.286101   40135 command_runner.go:130] >       "spec": null,
	I0916 11:12:27.286107   40135 command_runner.go:130] >       "pinned": true
	I0916 11:12:27.286111   40135 command_runner.go:130] >     }
	I0916 11:12:27.286114   40135 command_runner.go:130] >   ]
	I0916 11:12:27.286117   40135 command_runner.go:130] > }
	I0916 11:12:27.286227   40135 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 11:12:27.286237   40135 cache_images.go:84] Images are preloaded, skipping loading
	I0916 11:12:27.286244   40135 kubeadm.go:934] updating node { 192.168.39.32 8443 v1.31.1 crio true true} ...
	I0916 11:12:27.286331   40135 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-736061 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.32
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-736061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 11:12:27.286392   40135 ssh_runner.go:195] Run: crio config
	I0916 11:12:27.326001   40135 command_runner.go:130] ! time="2024-09-16 11:12:27.304932753Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0916 11:12:27.332712   40135 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0916 11:12:27.346533   40135 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0916 11:12:27.346557   40135 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0916 11:12:27.346564   40135 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0916 11:12:27.346567   40135 command_runner.go:130] > #
	I0916 11:12:27.346573   40135 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0916 11:12:27.346580   40135 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0916 11:12:27.346585   40135 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0916 11:12:27.346594   40135 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0916 11:12:27.346599   40135 command_runner.go:130] > # reload'.
	I0916 11:12:27.346605   40135 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0916 11:12:27.346611   40135 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0916 11:12:27.346617   40135 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0916 11:12:27.346625   40135 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0916 11:12:27.346629   40135 command_runner.go:130] > [crio]
	I0916 11:12:27.346634   40135 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0916 11:12:27.346641   40135 command_runner.go:130] > # containers images, in this directory.
	I0916 11:12:27.346646   40135 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0916 11:12:27.346655   40135 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0916 11:12:27.346674   40135 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0916 11:12:27.346683   40135 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0916 11:12:27.346690   40135 command_runner.go:130] > # imagestore = ""
	I0916 11:12:27.346696   40135 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0916 11:12:27.346705   40135 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0916 11:12:27.346710   40135 command_runner.go:130] > storage_driver = "overlay"
	I0916 11:12:27.346716   40135 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0916 11:12:27.346723   40135 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0916 11:12:27.346730   40135 command_runner.go:130] > storage_option = [
	I0916 11:12:27.346736   40135 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0916 11:12:27.346742   40135 command_runner.go:130] > ]
	I0916 11:12:27.346748   40135 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0916 11:12:27.346756   40135 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0916 11:12:27.346762   40135 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0916 11:12:27.346769   40135 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0916 11:12:27.346775   40135 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0916 11:12:27.346782   40135 command_runner.go:130] > # always happen on a node reboot
	I0916 11:12:27.346787   40135 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0916 11:12:27.346797   40135 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0916 11:12:27.346805   40135 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0916 11:12:27.346811   40135 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0916 11:12:27.346818   40135 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0916 11:12:27.346825   40135 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0916 11:12:27.346834   40135 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0916 11:12:27.346840   40135 command_runner.go:130] > # internal_wipe = true
	I0916 11:12:27.346849   40135 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0916 11:12:27.346856   40135 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0916 11:12:27.346863   40135 command_runner.go:130] > # internal_repair = false
	I0916 11:12:27.346874   40135 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0916 11:12:27.346883   40135 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0916 11:12:27.346890   40135 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0916 11:12:27.346897   40135 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0916 11:12:27.346904   40135 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0916 11:12:27.346909   40135 command_runner.go:130] > [crio.api]
	I0916 11:12:27.346915   40135 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0916 11:12:27.346921   40135 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0916 11:12:27.346927   40135 command_runner.go:130] > # IP address on which the stream server will listen.
	I0916 11:12:27.346933   40135 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0916 11:12:27.346940   40135 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0916 11:12:27.346947   40135 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0916 11:12:27.346951   40135 command_runner.go:130] > # stream_port = "0"
	I0916 11:12:27.346957   40135 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0916 11:12:27.346964   40135 command_runner.go:130] > # stream_enable_tls = false
	I0916 11:12:27.346970   40135 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0916 11:12:27.346976   40135 command_runner.go:130] > # stream_idle_timeout = ""
	I0916 11:12:27.346982   40135 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0916 11:12:27.346990   40135 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0916 11:12:27.346995   40135 command_runner.go:130] > # minutes.
	I0916 11:12:27.346999   40135 command_runner.go:130] > # stream_tls_cert = ""
	I0916 11:12:27.347007   40135 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0916 11:12:27.347015   40135 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0916 11:12:27.347021   40135 command_runner.go:130] > # stream_tls_key = ""
	I0916 11:12:27.347026   40135 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0916 11:12:27.347034   40135 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0916 11:12:27.347049   40135 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0916 11:12:27.347055   40135 command_runner.go:130] > # stream_tls_ca = ""
	I0916 11:12:27.347065   40135 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0916 11:12:27.347071   40135 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0916 11:12:27.347078   40135 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0916 11:12:27.347085   40135 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0916 11:12:27.347091   40135 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0916 11:12:27.347099   40135 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0916 11:12:27.347105   40135 command_runner.go:130] > [crio.runtime]
	I0916 11:12:27.347111   40135 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0916 11:12:27.347118   40135 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0916 11:12:27.347124   40135 command_runner.go:130] > # "nofile=1024:2048"
	I0916 11:12:27.347130   40135 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0916 11:12:27.347135   40135 command_runner.go:130] > # default_ulimits = [
	I0916 11:12:27.347139   40135 command_runner.go:130] > # ]
	I0916 11:12:27.347144   40135 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0916 11:12:27.347150   40135 command_runner.go:130] > # no_pivot = false
	I0916 11:12:27.347156   40135 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0916 11:12:27.347164   40135 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0916 11:12:27.347171   40135 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0916 11:12:27.347177   40135 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0916 11:12:27.347184   40135 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0916 11:12:27.347194   40135 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0916 11:12:27.347200   40135 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0916 11:12:27.347205   40135 command_runner.go:130] > # Cgroup setting for conmon
	I0916 11:12:27.347214   40135 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0916 11:12:27.347219   40135 command_runner.go:130] > conmon_cgroup = "pod"
	I0916 11:12:27.347225   40135 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0916 11:12:27.347234   40135 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0916 11:12:27.347242   40135 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0916 11:12:27.347247   40135 command_runner.go:130] > conmon_env = [
	I0916 11:12:27.347253   40135 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0916 11:12:27.347258   40135 command_runner.go:130] > ]
	I0916 11:12:27.347263   40135 command_runner.go:130] > # Additional environment variables to set for all the
	I0916 11:12:27.347270   40135 command_runner.go:130] > # containers. These are overridden if set in the
	I0916 11:12:27.347276   40135 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0916 11:12:27.347282   40135 command_runner.go:130] > # default_env = [
	I0916 11:12:27.347285   40135 command_runner.go:130] > # ]
	I0916 11:12:27.347293   40135 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0916 11:12:27.347300   40135 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0916 11:12:27.347306   40135 command_runner.go:130] > # selinux = false
	I0916 11:12:27.347312   40135 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0916 11:12:27.347320   40135 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0916 11:12:27.347328   40135 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0916 11:12:27.347332   40135 command_runner.go:130] > # seccomp_profile = ""
	I0916 11:12:27.347340   40135 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0916 11:12:27.347345   40135 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0916 11:12:27.347353   40135 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0916 11:12:27.347358   40135 command_runner.go:130] > # which might increase security.
	I0916 11:12:27.347363   40135 command_runner.go:130] > # This option is currently deprecated,
	I0916 11:12:27.347370   40135 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0916 11:12:27.347375   40135 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0916 11:12:27.347383   40135 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0916 11:12:27.347391   40135 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0916 11:12:27.347399   40135 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0916 11:12:27.347407   40135 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0916 11:12:27.347414   40135 command_runner.go:130] > # This option supports live configuration reload.
	I0916 11:12:27.347419   40135 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0916 11:12:27.347426   40135 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0916 11:12:27.347430   40135 command_runner.go:130] > # the cgroup blockio controller.
	I0916 11:12:27.347435   40135 command_runner.go:130] > # blockio_config_file = ""
	I0916 11:12:27.347441   40135 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0916 11:12:27.347446   40135 command_runner.go:130] > # blockio parameters.
	I0916 11:12:27.347450   40135 command_runner.go:130] > # blockio_reload = false
	I0916 11:12:27.347458   40135 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0916 11:12:27.347466   40135 command_runner.go:130] > # irqbalance daemon.
	I0916 11:12:27.347470   40135 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0916 11:12:27.347478   40135 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0916 11:12:27.347488   40135 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0916 11:12:27.347497   40135 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0916 11:12:27.347503   40135 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0916 11:12:27.347511   40135 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0916 11:12:27.347517   40135 command_runner.go:130] > # This option supports live configuration reload.
	I0916 11:12:27.347523   40135 command_runner.go:130] > # rdt_config_file = ""
	I0916 11:12:27.347528   40135 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0916 11:12:27.347535   40135 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0916 11:12:27.347550   40135 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0916 11:12:27.347556   40135 command_runner.go:130] > # separate_pull_cgroup = ""
	I0916 11:12:27.347562   40135 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0916 11:12:27.347568   40135 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0916 11:12:27.347574   40135 command_runner.go:130] > # will be added.
	I0916 11:12:27.347578   40135 command_runner.go:130] > # default_capabilities = [
	I0916 11:12:27.347583   40135 command_runner.go:130] > # 	"CHOWN",
	I0916 11:12:27.347588   40135 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0916 11:12:27.347594   40135 command_runner.go:130] > # 	"FSETID",
	I0916 11:12:27.347597   40135 command_runner.go:130] > # 	"FOWNER",
	I0916 11:12:27.347603   40135 command_runner.go:130] > # 	"SETGID",
	I0916 11:12:27.347607   40135 command_runner.go:130] > # 	"SETUID",
	I0916 11:12:27.347613   40135 command_runner.go:130] > # 	"SETPCAP",
	I0916 11:12:27.347617   40135 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0916 11:12:27.347621   40135 command_runner.go:130] > # 	"KILL",
	I0916 11:12:27.347624   40135 command_runner.go:130] > # ]
	I0916 11:12:27.347632   40135 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0916 11:12:27.347640   40135 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0916 11:12:27.347645   40135 command_runner.go:130] > # add_inheritable_capabilities = false
	I0916 11:12:27.347653   40135 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0916 11:12:27.347659   40135 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0916 11:12:27.347665   40135 command_runner.go:130] > default_sysctls = [
	I0916 11:12:27.347669   40135 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0916 11:12:27.347673   40135 command_runner.go:130] > ]
	I0916 11:12:27.347677   40135 command_runner.go:130] > # List of devices on the host that a
	I0916 11:12:27.347684   40135 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0916 11:12:27.347688   40135 command_runner.go:130] > # allowed_devices = [
	I0916 11:12:27.347694   40135 command_runner.go:130] > # 	"/dev/fuse",
	I0916 11:12:27.347697   40135 command_runner.go:130] > # ]
	I0916 11:12:27.347705   40135 command_runner.go:130] > # List of additional devices. specified as
	I0916 11:12:27.347712   40135 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0916 11:12:27.347719   40135 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0916 11:12:27.347724   40135 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0916 11:12:27.347731   40135 command_runner.go:130] > # additional_devices = [
	I0916 11:12:27.347734   40135 command_runner.go:130] > # ]
	I0916 11:12:27.347741   40135 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0916 11:12:27.347747   40135 command_runner.go:130] > # cdi_spec_dirs = [
	I0916 11:12:27.347751   40135 command_runner.go:130] > # 	"/etc/cdi",
	I0916 11:12:27.347757   40135 command_runner.go:130] > # 	"/var/run/cdi",
	I0916 11:12:27.347761   40135 command_runner.go:130] > # ]
	I0916 11:12:27.347769   40135 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0916 11:12:27.347777   40135 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0916 11:12:27.347784   40135 command_runner.go:130] > # Defaults to false.
	I0916 11:12:27.347789   40135 command_runner.go:130] > # device_ownership_from_security_context = false
	I0916 11:12:27.347798   40135 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0916 11:12:27.347806   40135 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0916 11:12:27.347811   40135 command_runner.go:130] > # hooks_dir = [
	I0916 11:12:27.347816   40135 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0916 11:12:27.347821   40135 command_runner.go:130] > # ]
	I0916 11:12:27.347827   40135 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0916 11:12:27.347835   40135 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0916 11:12:27.347840   40135 command_runner.go:130] > # its default mounts from the following two files:
	I0916 11:12:27.347843   40135 command_runner.go:130] > #
	I0916 11:12:27.347851   40135 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0916 11:12:27.347858   40135 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0916 11:12:27.347865   40135 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0916 11:12:27.347868   40135 command_runner.go:130] > #
	I0916 11:12:27.347881   40135 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0916 11:12:27.347887   40135 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0916 11:12:27.347895   40135 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0916 11:12:27.347902   40135 command_runner.go:130] > #      only add mounts it finds in this file.
	I0916 11:12:27.347905   40135 command_runner.go:130] > #
	I0916 11:12:27.347912   40135 command_runner.go:130] > # default_mounts_file = ""
	I0916 11:12:27.347917   40135 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0916 11:12:27.347925   40135 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0916 11:12:27.347931   40135 command_runner.go:130] > pids_limit = 1024
	I0916 11:12:27.347937   40135 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0916 11:12:27.347945   40135 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0916 11:12:27.347954   40135 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0916 11:12:27.347962   40135 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0916 11:12:27.347968   40135 command_runner.go:130] > # log_size_max = -1
	I0916 11:12:27.347975   40135 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0916 11:12:27.347981   40135 command_runner.go:130] > # log_to_journald = false
	I0916 11:12:27.347987   40135 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0916 11:12:27.347994   40135 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0916 11:12:27.347999   40135 command_runner.go:130] > # Path to directory for container attach sockets.
	I0916 11:12:27.348006   40135 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0916 11:12:27.348012   40135 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0916 11:12:27.348018   40135 command_runner.go:130] > # bind_mount_prefix = ""
	I0916 11:12:27.348024   40135 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0916 11:12:27.348030   40135 command_runner.go:130] > # read_only = false
	I0916 11:12:27.348036   40135 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0916 11:12:27.348044   40135 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0916 11:12:27.348050   40135 command_runner.go:130] > # live configuration reload.
	I0916 11:12:27.348054   40135 command_runner.go:130] > # log_level = "info"
	I0916 11:12:27.348062   40135 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0916 11:12:27.348068   40135 command_runner.go:130] > # This option supports live configuration reload.
	I0916 11:12:27.348073   40135 command_runner.go:130] > # log_filter = ""
	I0916 11:12:27.348079   40135 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0916 11:12:27.348087   40135 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0916 11:12:27.348093   40135 command_runner.go:130] > # separated by comma.
	I0916 11:12:27.348100   40135 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0916 11:12:27.348106   40135 command_runner.go:130] > # uid_mappings = ""
	I0916 11:12:27.348112   40135 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0916 11:12:27.348118   40135 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0916 11:12:27.348124   40135 command_runner.go:130] > # separated by comma.
	I0916 11:12:27.348132   40135 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0916 11:12:27.348138   40135 command_runner.go:130] > # gid_mappings = ""
	I0916 11:12:27.348144   40135 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0916 11:12:27.348152   40135 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0916 11:12:27.348158   40135 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0916 11:12:27.348168   40135 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0916 11:12:27.348175   40135 command_runner.go:130] > # minimum_mappable_uid = -1
	I0916 11:12:27.348181   40135 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0916 11:12:27.348189   40135 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0916 11:12:27.348197   40135 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0916 11:12:27.348204   40135 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0916 11:12:27.348210   40135 command_runner.go:130] > # minimum_mappable_gid = -1
	I0916 11:12:27.348216   40135 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0916 11:12:27.348224   40135 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0916 11:12:27.348230   40135 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0916 11:12:27.348237   40135 command_runner.go:130] > # ctr_stop_timeout = 30
	I0916 11:12:27.348243   40135 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0916 11:12:27.348250   40135 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0916 11:12:27.348257   40135 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0916 11:12:27.348262   40135 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0916 11:12:27.348268   40135 command_runner.go:130] > drop_infra_ctr = false
	I0916 11:12:27.348274   40135 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0916 11:12:27.348281   40135 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0916 11:12:27.348288   40135 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0916 11:12:27.348294   40135 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0916 11:12:27.348301   40135 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0916 11:12:27.348308   40135 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0916 11:12:27.348314   40135 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0916 11:12:27.348321   40135 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0916 11:12:27.348324   40135 command_runner.go:130] > # shared_cpuset = ""
	I0916 11:12:27.348330   40135 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0916 11:12:27.348336   40135 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0916 11:12:27.348341   40135 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0916 11:12:27.348349   40135 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0916 11:12:27.348354   40135 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0916 11:12:27.348359   40135 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0916 11:12:27.348368   40135 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0916 11:12:27.348371   40135 command_runner.go:130] > # enable_criu_support = false
	I0916 11:12:27.348377   40135 command_runner.go:130] > # Enable/disable the generation of the container,
	I0916 11:12:27.348385   40135 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0916 11:12:27.348389   40135 command_runner.go:130] > # enable_pod_events = false
	I0916 11:12:27.348397   40135 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0916 11:12:27.348405   40135 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0916 11:12:27.348410   40135 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0916 11:12:27.348416   40135 command_runner.go:130] > # default_runtime = "runc"
	I0916 11:12:27.348421   40135 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0916 11:12:27.348430   40135 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0916 11:12:27.348443   40135 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0916 11:12:27.348450   40135 command_runner.go:130] > # creation as a file is not desired either.
	I0916 11:12:27.348458   40135 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0916 11:12:27.348463   40135 command_runner.go:130] > # the hostname is being managed dynamically.
	I0916 11:12:27.348470   40135 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0916 11:12:27.348473   40135 command_runner.go:130] > # ]
	I0916 11:12:27.348487   40135 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0916 11:12:27.348493   40135 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0916 11:12:27.348501   40135 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0916 11:12:27.348508   40135 command_runner.go:130] > # Each entry in the table should follow the format:
	I0916 11:12:27.348511   40135 command_runner.go:130] > #
	I0916 11:12:27.348516   40135 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0916 11:12:27.348522   40135 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0916 11:12:27.348540   40135 command_runner.go:130] > # runtime_type = "oci"
	I0916 11:12:27.348546   40135 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0916 11:12:27.348551   40135 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0916 11:12:27.348557   40135 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0916 11:12:27.348562   40135 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0916 11:12:27.348568   40135 command_runner.go:130] > # monitor_env = []
	I0916 11:12:27.348573   40135 command_runner.go:130] > # privileged_without_host_devices = false
	I0916 11:12:27.348579   40135 command_runner.go:130] > # allowed_annotations = []
	I0916 11:12:27.348584   40135 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0916 11:12:27.348590   40135 command_runner.go:130] > # Where:
	I0916 11:12:27.348595   40135 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0916 11:12:27.348603   40135 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0916 11:12:27.348612   40135 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0916 11:12:27.348618   40135 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0916 11:12:27.348623   40135 command_runner.go:130] > #   in $PATH.
	I0916 11:12:27.348629   40135 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0916 11:12:27.348636   40135 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0916 11:12:27.348642   40135 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0916 11:12:27.348647   40135 command_runner.go:130] > #   state.
	I0916 11:12:27.348654   40135 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0916 11:12:27.348662   40135 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0916 11:12:27.348670   40135 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0916 11:12:27.348676   40135 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0916 11:12:27.348682   40135 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0916 11:12:27.348690   40135 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0916 11:12:27.348696   40135 command_runner.go:130] > #   The currently recognized values are:
	I0916 11:12:27.348704   40135 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0916 11:12:27.348713   40135 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0916 11:12:27.348721   40135 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0916 11:12:27.348727   40135 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0916 11:12:27.348736   40135 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0916 11:12:27.348744   40135 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0916 11:12:27.348751   40135 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0916 11:12:27.348759   40135 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0916 11:12:27.348766   40135 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0916 11:12:27.348774   40135 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0916 11:12:27.348781   40135 command_runner.go:130] > #   deprecated option "conmon".
	I0916 11:12:27.348788   40135 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0916 11:12:27.348795   40135 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0916 11:12:27.348801   40135 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0916 11:12:27.348808   40135 command_runner.go:130] > #   should be moved to the container's cgroup
	I0916 11:12:27.348814   40135 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the montior.
	I0916 11:12:27.348820   40135 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0916 11:12:27.348827   40135 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0916 11:12:27.348834   40135 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0916 11:12:27.348837   40135 command_runner.go:130] > #
	I0916 11:12:27.348842   40135 command_runner.go:130] > # Using the seccomp notifier feature:
	I0916 11:12:27.348846   40135 command_runner.go:130] > #
	I0916 11:12:27.348852   40135 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0916 11:12:27.348859   40135 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0916 11:12:27.348865   40135 command_runner.go:130] > #
	I0916 11:12:27.348874   40135 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0916 11:12:27.348882   40135 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0916 11:12:27.348886   40135 command_runner.go:130] > #
	I0916 11:12:27.348894   40135 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0916 11:12:27.348898   40135 command_runner.go:130] > # feature.
	I0916 11:12:27.348902   40135 command_runner.go:130] > #
	I0916 11:12:27.348908   40135 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0916 11:12:27.348917   40135 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0916 11:12:27.348925   40135 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0916 11:12:27.348933   40135 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0916 11:12:27.348940   40135 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0916 11:12:27.348949   40135 command_runner.go:130] > #
	I0916 11:12:27.348956   40135 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0916 11:12:27.348964   40135 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0916 11:12:27.348967   40135 command_runner.go:130] > #
	I0916 11:12:27.348974   40135 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0916 11:12:27.348981   40135 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0916 11:12:27.348984   40135 command_runner.go:130] > #
	I0916 11:12:27.348992   40135 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0916 11:12:27.348998   40135 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0916 11:12:27.349003   40135 command_runner.go:130] > # limitation.
	I0916 11:12:27.349008   40135 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0916 11:12:27.349014   40135 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0916 11:12:27.349018   40135 command_runner.go:130] > runtime_type = "oci"
	I0916 11:12:27.349024   40135 command_runner.go:130] > runtime_root = "/run/runc"
	I0916 11:12:27.349028   40135 command_runner.go:130] > runtime_config_path = ""
	I0916 11:12:27.349034   40135 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0916 11:12:27.349038   40135 command_runner.go:130] > monitor_cgroup = "pod"
	I0916 11:12:27.349044   40135 command_runner.go:130] > monitor_exec_cgroup = ""
	I0916 11:12:27.349048   40135 command_runner.go:130] > monitor_env = [
	I0916 11:12:27.349056   40135 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0916 11:12:27.349059   40135 command_runner.go:130] > ]
	I0916 11:12:27.349064   40135 command_runner.go:130] > privileged_without_host_devices = false
	I0916 11:12:27.349084   40135 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0916 11:12:27.349094   40135 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0916 11:12:27.349101   40135 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0916 11:12:27.349110   40135 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0916 11:12:27.349120   40135 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0916 11:12:27.349140   40135 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0916 11:12:27.349157   40135 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0916 11:12:27.349169   40135 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0916 11:12:27.349177   40135 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0916 11:12:27.349187   40135 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0916 11:12:27.349192   40135 command_runner.go:130] > # Example:
	I0916 11:12:27.349198   40135 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0916 11:12:27.349204   40135 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0916 11:12:27.349209   40135 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0916 11:12:27.349216   40135 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0916 11:12:27.349220   40135 command_runner.go:130] > # cpuset = 0
	I0916 11:12:27.349224   40135 command_runner.go:130] > # cpushares = "0-1"
	I0916 11:12:27.349229   40135 command_runner.go:130] > # Where:
	I0916 11:12:27.349234   40135 command_runner.go:130] > # The workload name is workload-type.
	I0916 11:12:27.349242   40135 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0916 11:12:27.349250   40135 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0916 11:12:27.349255   40135 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0916 11:12:27.349265   40135 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0916 11:12:27.349272   40135 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0916 11:12:27.349279   40135 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0916 11:12:27.349286   40135 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0916 11:12:27.349292   40135 command_runner.go:130] > # Default value is set to true
	I0916 11:12:27.349296   40135 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0916 11:12:27.349303   40135 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0916 11:12:27.349308   40135 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0916 11:12:27.349314   40135 command_runner.go:130] > # Default value is set to 'false'
	I0916 11:12:27.349318   40135 command_runner.go:130] > # disable_hostport_mapping = false
	I0916 11:12:27.349324   40135 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0916 11:12:27.349330   40135 command_runner.go:130] > #
	I0916 11:12:27.349336   40135 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0916 11:12:27.349342   40135 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0916 11:12:27.349348   40135 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0916 11:12:27.349354   40135 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0916 11:12:27.349359   40135 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0916 11:12:27.349363   40135 command_runner.go:130] > [crio.image]
	I0916 11:12:27.349368   40135 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0916 11:12:27.349372   40135 command_runner.go:130] > # default_transport = "docker://"
	I0916 11:12:27.349378   40135 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0916 11:12:27.349384   40135 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0916 11:12:27.349387   40135 command_runner.go:130] > # global_auth_file = ""
	I0916 11:12:27.349392   40135 command_runner.go:130] > # The image used to instantiate infra containers.
	I0916 11:12:27.349396   40135 command_runner.go:130] > # This option supports live configuration reload.
	I0916 11:12:27.349400   40135 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0916 11:12:27.349406   40135 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0916 11:12:27.349411   40135 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0916 11:12:27.349415   40135 command_runner.go:130] > # This option supports live configuration reload.
	I0916 11:12:27.349419   40135 command_runner.go:130] > # pause_image_auth_file = ""
	I0916 11:12:27.349424   40135 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0916 11:12:27.349430   40135 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0916 11:12:27.349435   40135 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0916 11:12:27.349441   40135 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0916 11:12:27.349445   40135 command_runner.go:130] > # pause_command = "/pause"
	I0916 11:12:27.349450   40135 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0916 11:12:27.349456   40135 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0916 11:12:27.349461   40135 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0916 11:12:27.349468   40135 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0916 11:12:27.349476   40135 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0916 11:12:27.349482   40135 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0916 11:12:27.349488   40135 command_runner.go:130] > # pinned_images = [
	I0916 11:12:27.349491   40135 command_runner.go:130] > # ]
	I0916 11:12:27.349498   40135 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0916 11:12:27.349506   40135 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0916 11:12:27.349513   40135 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0916 11:12:27.349525   40135 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0916 11:12:27.349533   40135 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0916 11:12:27.349539   40135 command_runner.go:130] > # signature_policy = ""
	I0916 11:12:27.349544   40135 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0916 11:12:27.349553   40135 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0916 11:12:27.349561   40135 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0916 11:12:27.349567   40135 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0916 11:12:27.349575   40135 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0916 11:12:27.349579   40135 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0916 11:12:27.349587   40135 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0916 11:12:27.349595   40135 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0916 11:12:27.349599   40135 command_runner.go:130] > # changing them here.
	I0916 11:12:27.349610   40135 command_runner.go:130] > # insecure_registries = [
	I0916 11:12:27.349613   40135 command_runner.go:130] > # ]
	I0916 11:12:27.349620   40135 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0916 11:12:27.349626   40135 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0916 11:12:27.349630   40135 command_runner.go:130] > # image_volumes = "mkdir"
	I0916 11:12:27.349635   40135 command_runner.go:130] > # Temporary directory to use for storing big files
	I0916 11:12:27.349642   40135 command_runner.go:130] > # big_files_temporary_dir = ""
	I0916 11:12:27.349648   40135 command_runner.go:130] > # The crio.network table containers settings pertaining to the management of
	I0916 11:12:27.349653   40135 command_runner.go:130] > # CNI plugins.
	I0916 11:12:27.349657   40135 command_runner.go:130] > [crio.network]
	I0916 11:12:27.349663   40135 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0916 11:12:27.349670   40135 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0916 11:12:27.349674   40135 command_runner.go:130] > # cni_default_network = ""
	I0916 11:12:27.349688   40135 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0916 11:12:27.349692   40135 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0916 11:12:27.349700   40135 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0916 11:12:27.349706   40135 command_runner.go:130] > # plugin_dirs = [
	I0916 11:12:27.349710   40135 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0916 11:12:27.349716   40135 command_runner.go:130] > # ]
	I0916 11:12:27.349721   40135 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0916 11:12:27.349727   40135 command_runner.go:130] > [crio.metrics]
	I0916 11:12:27.349732   40135 command_runner.go:130] > # Globally enable or disable metrics support.
	I0916 11:12:27.349739   40135 command_runner.go:130] > enable_metrics = true
	I0916 11:12:27.349743   40135 command_runner.go:130] > # Specify enabled metrics collectors.
	I0916 11:12:27.349751   40135 command_runner.go:130] > # Per default all metrics are enabled.
	I0916 11:12:27.349757   40135 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0916 11:12:27.349765   40135 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0916 11:12:27.349772   40135 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0916 11:12:27.349777   40135 command_runner.go:130] > # metrics_collectors = [
	I0916 11:12:27.349782   40135 command_runner.go:130] > # 	"operations",
	I0916 11:12:27.349787   40135 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0916 11:12:27.349793   40135 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0916 11:12:27.349798   40135 command_runner.go:130] > # 	"operations_errors",
	I0916 11:12:27.349804   40135 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0916 11:12:27.349808   40135 command_runner.go:130] > # 	"image_pulls_by_name",
	I0916 11:12:27.349814   40135 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0916 11:12:27.349818   40135 command_runner.go:130] > # 	"image_pulls_failures",
	I0916 11:12:27.349824   40135 command_runner.go:130] > # 	"image_pulls_successes",
	I0916 11:12:27.349828   40135 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0916 11:12:27.349835   40135 command_runner.go:130] > # 	"image_layer_reuse",
	I0916 11:12:27.349839   40135 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0916 11:12:27.349845   40135 command_runner.go:130] > # 	"containers_oom_total",
	I0916 11:12:27.349850   40135 command_runner.go:130] > # 	"containers_oom",
	I0916 11:12:27.349856   40135 command_runner.go:130] > # 	"processes_defunct",
	I0916 11:12:27.349860   40135 command_runner.go:130] > # 	"operations_total",
	I0916 11:12:27.349867   40135 command_runner.go:130] > # 	"operations_latency_seconds",
	I0916 11:12:27.349875   40135 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0916 11:12:27.349882   40135 command_runner.go:130] > # 	"operations_errors_total",
	I0916 11:12:27.349886   40135 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0916 11:12:27.349892   40135 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0916 11:12:27.349897   40135 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0916 11:12:27.349903   40135 command_runner.go:130] > # 	"image_pulls_success_total",
	I0916 11:12:27.349907   40135 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0916 11:12:27.349914   40135 command_runner.go:130] > # 	"containers_oom_count_total",
	I0916 11:12:27.349919   40135 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0916 11:12:27.349925   40135 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0916 11:12:27.349928   40135 command_runner.go:130] > # ]
	I0916 11:12:27.349934   40135 command_runner.go:130] > # The port on which the metrics server will listen.
	I0916 11:12:27.349939   40135 command_runner.go:130] > # metrics_port = 9090
	I0916 11:12:27.349944   40135 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0916 11:12:27.349950   40135 command_runner.go:130] > # metrics_socket = ""
	I0916 11:12:27.349954   40135 command_runner.go:130] > # The certificate for the secure metrics server.
	I0916 11:12:27.349962   40135 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0916 11:12:27.349971   40135 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0916 11:12:27.349977   40135 command_runner.go:130] > # certificate on any modification event.
	I0916 11:12:27.349981   40135 command_runner.go:130] > # metrics_cert = ""
	I0916 11:12:27.349988   40135 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0916 11:12:27.349994   40135 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0916 11:12:27.349999   40135 command_runner.go:130] > # metrics_key = ""
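The metrics settings above show enable_metrics = true with the default port of 9090 left commented out. As a quick cross-check that the exporter is actually serving data, the Prometheus endpoint can be scraped directly on the node; a minimal sketch, assuming the default port and that the server is reachable on localhost (both are assumptions, not taken from this log):

  # Scrape CRI-O's Prometheus endpoint and show a few of its counters.
  curl -s http://127.0.0.1:9090/metrics | grep '^crio_' | head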
	I0916 11:12:27.350005   40135 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0916 11:12:27.350010   40135 command_runner.go:130] > [crio.tracing]
	I0916 11:12:27.350016   40135 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0916 11:12:27.350029   40135 command_runner.go:130] > # enable_tracing = false
	I0916 11:12:27.350034   40135 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0916 11:12:27.350041   40135 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0916 11:12:27.350048   40135 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0916 11:12:27.350054   40135 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
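Tracing stays disabled in this run. If it were needed, CRI-O also reads drop-in files from /etc/crio/crio.conf.d/, so the toggle can live in a small override rather than in the main file. A sketch under the assumption that an OTLP gRPC collector is already listening on the default 0.0.0.0:4317 endpoint (file name and values are illustrative):

  # Hypothetical drop-in enabling OpenTelemetry export; restart CRI-O to apply it.
  printf '%s\n' '[crio.tracing]' 'enable_tracing = true' 'tracing_sampling_rate_per_million = 1000000' \
    | sudo tee /etc/crio/crio.conf.d/20-tracing.conf >/dev/null
  sudo systemctl restart crio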
	I0916 11:12:27.350058   40135 command_runner.go:130] > # CRI-O NRI configuration.
	I0916 11:12:27.350064   40135 command_runner.go:130] > [crio.nri]
	I0916 11:12:27.350068   40135 command_runner.go:130] > # Globally enable or disable NRI.
	I0916 11:12:27.350074   40135 command_runner.go:130] > # enable_nri = false
	I0916 11:12:27.350079   40135 command_runner.go:130] > # NRI socket to listen on.
	I0916 11:12:27.350085   40135 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0916 11:12:27.350090   40135 command_runner.go:130] > # NRI plugin directory to use.
	I0916 11:12:27.350096   40135 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0916 11:12:27.350101   40135 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0916 11:12:27.350108   40135 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0916 11:12:27.350114   40135 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0916 11:12:27.350120   40135 command_runner.go:130] > # nri_disable_connections = false
	I0916 11:12:27.350126   40135 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0916 11:12:27.350132   40135 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0916 11:12:27.350137   40135 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0916 11:12:27.350144   40135 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0916 11:12:27.350150   40135 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0916 11:12:27.350155   40135 command_runner.go:130] > [crio.stats]
	I0916 11:12:27.350161   40135 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0916 11:12:27.350168   40135 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0916 11:12:27.350172   40135 command_runner.go:130] > # stats_collection_period = 0
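Most of the values streamed above are CRI-O's shipped defaults rendered as comments; only a few keys visible here (such as enable_metrics) are actually set. To inspect the configuration the daemon resolves after merging the main file and any drop-ins, the crio binary can print it on the node; a sketch, with the caveat that the exact subcommand behaviour depends on the CRI-O version (1.29.1 on this VM):

  # Print the resolved CRI-O configuration, showing only the non-comment lines.
  sudo crio config 2>/dev/null | grep -v '^#' | head -n 40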
	I0916 11:12:27.350235   40135 cni.go:84] Creating CNI manager for ""
	I0916 11:12:27.350246   40135 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0916 11:12:27.350255   40135 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 11:12:27.350273   40135 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.32 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-736061 NodeName:multinode-736061 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.32"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.32 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 11:12:27.350419   40135 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.32
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-736061"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.32
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.32"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 11:12:27.350474   40135 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 11:12:27.361566   40135 command_runner.go:130] > kubeadm
	I0916 11:12:27.361580   40135 command_runner.go:130] > kubectl
	I0916 11:12:27.361584   40135 command_runner.go:130] > kubelet
	I0916 11:12:27.361736   40135 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 11:12:27.361782   40135 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 11:12:27.372014   40135 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0916 11:12:27.391186   40135 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 11:12:27.408090   40135 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
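The 2157-byte payload copied to /var/tmp/minikube/kubeadm.yaml.new is the rendered configuration shown a few lines up. If one wanted to sanity-check such a file before it is fed to kubeadm, recent kubeadm releases (including the v1.31.1 binaries found on this node) ship a validate subcommand; a sketch using the paths from this log, as an illustrative check rather than something the test run performs:

  # Ask kubeadm to validate the rendered configuration without touching the cluster.
  sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate \
    --config /var/tmp/minikube/kubeadm.yaml.new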
	I0916 11:12:27.425238   40135 ssh_runner.go:195] Run: grep 192.168.39.32	control-plane.minikube.internal$ /etc/hosts
	I0916 11:12:27.429573   40135 command_runner.go:130] > 192.168.39.32	control-plane.minikube.internal
	I0916 11:12:27.429655   40135 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:12:27.566945   40135 ssh_runner.go:195] Run: sudo systemctl start kubelet
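The daemon-reload above makes systemd pick up the freshly copied kubelet.service unit and its 10-kubeadm.conf drop-in before the kubelet is started. A quick way to confirm the merge took effect is to ask systemd for the combined unit definition; a small sketch to run on the node (purely illustrative, not part of the logged flow):

  # Show the kubelet unit together with its drop-ins, then check it came up.
  systemctl cat kubelet | head -n 20
  systemctl is-active kubelet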
	I0916 11:12:27.581910   40135 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061 for IP: 192.168.39.32
	I0916 11:12:27.581936   40135 certs.go:194] generating shared ca certs ...
	I0916 11:12:27.581957   40135 certs.go:226] acquiring lock for ca certs: {Name:mkc5eb18b90c7d501da61b378999fd65b785fd5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:12:27.582115   40135 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key
	I0916 11:12:27.582167   40135 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key
	I0916 11:12:27.582177   40135 certs.go:256] generating profile certs ...
	I0916 11:12:27.582249   40135 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/client.key
	I0916 11:12:27.582305   40135 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/apiserver.key.7afb17c7
	I0916 11:12:27.582343   40135 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/proxy-client.key
	I0916 11:12:27.582354   40135 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 11:12:27.582365   40135 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 11:12:27.582378   40135 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 11:12:27.582390   40135 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 11:12:27.582400   40135 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 11:12:27.582410   40135 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 11:12:27.582423   40135 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 11:12:27.582436   40135 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 11:12:27.582483   40135 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem (1338 bytes)
	W0916 11:12:27.582509   40135 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203_empty.pem, impossibly tiny 0 bytes
	I0916 11:12:27.582518   40135 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 11:12:27.582550   40135 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem (1082 bytes)
	I0916 11:12:27.582574   40135 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem (1123 bytes)
	I0916 11:12:27.582595   40135 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem (1679 bytes)
	I0916 11:12:27.582631   40135 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem (1708 bytes)
	I0916 11:12:27.582655   40135 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:12:27.582667   40135 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem -> /usr/share/ca-certificates/11203.pem
	I0916 11:12:27.582679   40135 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem -> /usr/share/ca-certificates/112032.pem
	I0916 11:12:27.583263   40135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 11:12:27.609531   40135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 11:12:27.634944   40135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 11:12:27.660493   40135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 11:12:27.685235   40135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0916 11:12:27.708765   40135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 11:12:27.733626   40135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 11:12:27.757830   40135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 11:12:27.782527   40135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 11:12:27.806733   40135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem --> /usr/share/ca-certificates/11203.pem (1338 bytes)
	I0916 11:12:27.831538   40135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem --> /usr/share/ca-certificates/112032.pem (1708 bytes)
	I0916 11:12:27.856224   40135 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 11:12:27.873368   40135 ssh_runner.go:195] Run: openssl version
	I0916 11:12:27.879163   40135 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0916 11:12:27.879396   40135 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11203.pem && ln -fs /usr/share/ca-certificates/11203.pem /etc/ssl/certs/11203.pem"
	I0916 11:12:27.890038   40135 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11203.pem
	I0916 11:12:27.894595   40135 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11203.pem
	I0916 11:12:27.894654   40135 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11203.pem
	I0916 11:12:27.894716   40135 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11203.pem
	I0916 11:12:27.919619   40135 command_runner.go:130] > 51391683
	I0916 11:12:27.920420   40135 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11203.pem /etc/ssl/certs/51391683.0"
	I0916 11:12:27.932003   40135 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112032.pem && ln -fs /usr/share/ca-certificates/112032.pem /etc/ssl/certs/112032.pem"
	I0916 11:12:27.943754   40135 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112032.pem
	I0916 11:12:27.948079   40135 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112032.pem
	I0916 11:12:27.948103   40135 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112032.pem
	I0916 11:12:27.948147   40135 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112032.pem
	I0916 11:12:27.953662   40135 command_runner.go:130] > 3ec20f2e
	I0916 11:12:27.953740   40135 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112032.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 11:12:27.963952   40135 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 11:12:27.975088   40135 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:12:27.979448   40135 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 16 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:12:27.979467   40135 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:12:27.979508   40135 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:12:27.984970   40135 command_runner.go:130] > b5213941
	I0916 11:12:27.985201   40135 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
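Each `openssl x509 -hash -noout` call above prints the certificate's subject hash (51391683, 3ec20f2e, b5213941), and the matching `ln -fs ... /etc/ssl/certs/<hash>.0` gives OpenSSL the lookup name it expects when searching the system CA directory. The same convention can be reproduced for any CA file; a sketch using the minikubeCA path from this log:

  # Compute the subject hash and create the <hash>.0 symlink OpenSSL resolves CAs by.
  CERT=/usr/share/ca-certificates/minikubeCA.pem
  HASH=$(openssl x509 -hash -noout -in "$CERT")
  sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"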
	I0916 11:12:27.995006   40135 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 11:12:27.999529   40135 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 11:12:27.999557   40135 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0916 11:12:27.999566   40135 command_runner.go:130] > Device: 253,1	Inode: 2101800     Links: 1
	I0916 11:12:27.999605   40135 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 11:12:27.999620   40135 command_runner.go:130] > Access: 2024-09-16 11:05:45.018725064 +0000
	I0916 11:12:27.999631   40135 command_runner.go:130] > Modify: 2024-09-16 11:05:45.018725064 +0000
	I0916 11:12:27.999639   40135 command_runner.go:130] > Change: 2024-09-16 11:05:45.018725064 +0000
	I0916 11:12:27.999648   40135 command_runner.go:130] >  Birth: 2024-09-16 11:05:45.018725064 +0000
	I0916 11:12:27.999698   40135 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0916 11:12:28.005429   40135 command_runner.go:130] > Certificate will not expire
	I0916 11:12:28.005492   40135 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0916 11:12:28.010927   40135 command_runner.go:130] > Certificate will not expire
	I0916 11:12:28.011069   40135 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0916 11:12:28.016675   40135 command_runner.go:130] > Certificate will not expire
	I0916 11:12:28.016733   40135 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0916 11:12:28.022268   40135 command_runner.go:130] > Certificate will not expire
	I0916 11:12:28.022386   40135 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0916 11:12:28.027951   40135 command_runner.go:130] > Certificate will not expire
	I0916 11:12:28.028023   40135 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0916 11:12:28.033400   40135 command_runner.go:130] > Certificate will not expire
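Every `-checkend 86400` invocation above exits 0 and prints "Certificate will not expire" because each certificate remains valid for at least the next 86400 seconds (24 hours); a non-zero exit would flag a certificate expiring within that window. The same check can be applied across the whole certificate directory; a sketch assuming the /var/lib/minikube/certs layout seen in this log:

  # 0 = valid for at least another 24h, 1 = expiring or already expired.
  sudo find /var/lib/minikube/certs -name '*.crt' -print \
    -exec openssl x509 -noout -checkend 86400 -in {} \;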
	I0916 11:12:28.033473   40135 kubeadm.go:392] StartCluster: {Name:multinode-736061 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:multinode-736061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.32 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.215 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.60 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:12:28.033571   40135 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 11:12:28.033610   40135 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 11:12:28.072849   40135 command_runner.go:130] > 840a587a0926e82c37a05beb15b8a2038534eaa736ce1cfed1c0eecaa13a5cdd
	I0916 11:12:28.072892   40135 command_runner.go:130] > 02223ab1824986d0b8986be277df48e11a9e023715892ea637bf405f10af7198
	I0916 11:12:28.072902   40135 command_runner.go:130] > 7a89ff755837adc838b2681df4fdb3f0cd2733af338156c540403f460fc59bc0
	I0916 11:12:28.072914   40135 command_runner.go:130] > f8c55edbe2173ff25f684104100b9d6d5b8d80563dadb06b4c24bb06b2bf68ee
	I0916 11:12:28.072924   40135 command_runner.go:130] > b76d5d4ad419a79a8aa910e3cea3029c12e6a90c7750317c5fbe1c40b43aa762
	I0916 11:12:28.072933   40135 command_runner.go:130] > 769a75ad1934a20389b16c528015b4fd90be4bf92209ba2509e2cdf2e57e2a24
	I0916 11:12:28.072942   40135 command_runner.go:130] > d53f9aec7bc35845a9064135b7cf6154a190d2b15edab056cfd8296575fb25ba
	I0916 11:12:28.072951   40135 command_runner.go:130] > ed73e9089f633c6b3f26009f325ee6cfdd62ef60de41cbb639ec4aa8b02b84a7
	I0916 11:12:28.072976   40135 cri.go:89] found id: "840a587a0926e82c37a05beb15b8a2038534eaa736ce1cfed1c0eecaa13a5cdd"
	I0916 11:12:28.072988   40135 cri.go:89] found id: "02223ab1824986d0b8986be277df48e11a9e023715892ea637bf405f10af7198"
	I0916 11:12:28.072993   40135 cri.go:89] found id: "7a89ff755837adc838b2681df4fdb3f0cd2733af338156c540403f460fc59bc0"
	I0916 11:12:28.072998   40135 cri.go:89] found id: "f8c55edbe2173ff25f684104100b9d6d5b8d80563dadb06b4c24bb06b2bf68ee"
	I0916 11:12:28.073002   40135 cri.go:89] found id: "b76d5d4ad419a79a8aa910e3cea3029c12e6a90c7750317c5fbe1c40b43aa762"
	I0916 11:12:28.073007   40135 cri.go:89] found id: "769a75ad1934a20389b16c528015b4fd90be4bf92209ba2509e2cdf2e57e2a24"
	I0916 11:12:28.073010   40135 cri.go:89] found id: "d53f9aec7bc35845a9064135b7cf6154a190d2b15edab056cfd8296575fb25ba"
	I0916 11:12:28.073014   40135 cri.go:89] found id: "ed73e9089f633c6b3f26009f325ee6cfdd62ef60de41cbb639ec4aa8b02b84a7"
	I0916 11:12:28.073018   40135 cri.go:89] found id: ""
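The eight IDs above come from `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system`, which returns bare container IDs for both running and exited containers in that namespace. The same filter can produce something more readable when debugging by hand; a sketch assuming crictl and jq are available on the node:

  # List kube-system containers with short ID, name and state instead of bare IDs.
  sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system -o json \
    | jq -r '.containers[] | "\(.id[0:13])  \(.metadata.name)  \(.state)"'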
	I0916 11:12:28.073069   40135 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Sep 16 11:14:13 multinode-736061 crio[2989]: time="2024-09-16 11:14:13.045460451Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485253045395203,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f7eacbd5-c5a4-43e5-bf66-82f6864dd15e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 11:14:13 multinode-736061 crio[2989]: time="2024-09-16 11:14:13.048589563Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=030223b4-02b0-4dac-beb6-dc47667c05a9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:14:13 multinode-736061 crio[2989]: time="2024-09-16 11:14:13.048650084Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=030223b4-02b0-4dac-beb6-dc47667c05a9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:14:13 multinode-736061 crio[2989]: time="2024-09-16 11:14:13.048953026Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:522d3b85a45483f18f7f58bd8b3e8d13be8e39fc300735f74f42f329fc76b41c,PodSandboxId:c27596adc976965a1472bc763ada489b272e6b61d2f34ac0643f3897280bfd19,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726485188158372438,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-g9fqk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0dd08783-fcfd-441f-8bda-c82c0c15173e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34160c655e5ab02a4a54bd049c7fbf32b5f1f714b64212932d9c2c16e2d4bf25,PodSandboxId:d6609b6804e2135dcae87f33c158bf648f918c21cfda5ee80259df9d474184ae,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726485154742693212,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qb4tq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 933f0749-7868-4e96-9b8e-67005545bbc5,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35a7839cd57d08487c669b6c86350b4a2f5f3b241a502106911bebe57fc8af36,PodSandboxId:78066c652dd8faaf3adaf02ffc316c8c72ab996c03a8aa8fe4284f1c3783c373,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726485154656393416,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nlhl2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ea84b9d-f364-4e26-8dc8-44c3b4d92417,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87a99d0015cbca7c7d38632c9cf9a1a9a00ce88cbe573c612f87fc9ca64ae309,PodSandboxId:b06a4343bbdd3a9bb94de977a2f0df301ae567da38dde74613b8372b9badaa2e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726485154505906722,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e515ac2-bcf5-4a0f-a3af-b4ee4e03a534,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d81e17eebccf4b84f735e1cafbb3e77e8e76abd8f4d62edf876aa5d55fd0b0d,PodSandboxId:fcfacdd69a46c5dab45cd0a3aa1fa9de4c957a839c412471e98012ee586cb38c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726485154436680778,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ftj9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa72720f-1c4a-46a2-a733-f411ccb6f628,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e7284c90c8c712246e8ddc247d88f7825f37f3e3c689acda276e3052c41850d,PodSandboxId:d9afb21537018b32c40d57ef0e3594c924f50d2bf3259377c7789677aadd1cc2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726485150640534512,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de66983060c1e167c6b9498eb8b0a025,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae1251600e6e87614f76ce47d26bb537d329e9c3726f16afd53b5917cabf6526,PodSandboxId:cd4168d0828d2e6cf60f35609bcafa8f36efd5728470fe9b5a5927f930be6967,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726485150608915249,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69d3e8c6e76d0bc1af3482326f7904d1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fa850b5495ffd4ec6303d58bd055944e4fd2d6bd860b6b70273e719497bdf2d,PodSandboxId:f4286a53710f269af6ca4f92b1992be516f9dedc66f945f30e7b0b95cdb2b85e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726485150554479561,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efede0e1597c8cbe70740f3169f7ec4a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:126fd7058d64d004c8ecb83905bc9e04f7dcd51ba92c467d2ec0175a0aedb8d4,PodSandboxId:113acd43d732ed38df48b77f813934192a2a1cfab0f438e8905c93feade283e7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726485150539003440,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94d3338940ee73a61a5075650d027904,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84517e6af45b4aaa0074acc2c9f529a29e3e476ef98b8a9b414655341b967c3b,PodSandboxId:779060032a611374116cc1e94df9c7e4a6c1443abec99f7abdef9464025a6169,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726484826321999428,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-g9fqk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0dd08783-fcfd-441f-8bda-c82c0c15173e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:840a587a0926e82c37a05beb15b8a2038534eaa736ce1cfed1c0eecaa13a5cdd,PodSandboxId:19286465f900afb5c6da745e2d4672e76d0bd87c33d561cb8eff8479eb72b5c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726484771766267901,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nlhl2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ea84b9d-f364-4e26-8dc8-44c3b4d92417,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02223ab1824986d0b8986be277df48e11a9e023715892ea637bf405f10af7198,PodSandboxId:01381d4d113d1f5aa2f6b3834d0194f5b4909599f84a8b29647fabd8f0fa5f7c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726484771695970386,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 5e515ac2-bcf5-4a0f-a3af-b4ee4e03a534,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a89ff755837adc838b2681df4fdb3f0cd2733af338156c540403f460fc59bc0,PodSandboxId:bd141ffff1a91e8b17d63ca2b8898889ab336bfde15ecdc79964aafaa1123465,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726484759715057078,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qb4tq,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 933f0749-7868-4e96-9b8e-67005545bbc5,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8c55edbe2173ff25f684104100b9d6d5b8d80563dadb06b4c24bb06b2bf68ee,PodSandboxId:cc5264d1c4b520dfff86dfc83596919ff23f9336b58339ea2a10f4492ba02b12,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726484759520373663,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ftj9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa72720f-1c4a-46a2-a733
-f411ccb6f628,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b76d5d4ad419a79a8aa910e3cea3029c12e6a90c7750317c5fbe1c40b43aa762,PodSandboxId:f771edf6fcef2eb32fd9b47472910c255ef2ee1f910f415da1ca1b8287fece1e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726484748620399557,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de66983060c1e167c6b9498eb8b0a025,}
,Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:769a75ad1934a20389b16c528015b4fd90be4bf92209ba2509e2cdf2e57e2a24,PodSandboxId:6237db42cfa9d23e63f16ff0ffa961fcbc1f9f25138e7c7fc11442d28aeecaff,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726484748618867302,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69d3e8c6e76d0bc1af3482326f7904d1,},Annotations:map[string]string{io.kubernetes.
container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d53f9aec7bc35845a9064135b7cf6154a190d2b15edab056cfd8296575fb25ba,PodSandboxId:c1754b1d745471f13f360eb1605866caa0a1b248d76224212a537736d73a9f20,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726484748609890980,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94d3338940ee73a61a5075650d027904,},Annotations:map[string]string{io
.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed73e9089f633c6b3f26009f325ee6cfdd62ef60de41cbb639ec4aa8b02b84a7,PodSandboxId:06f23871be821ecc20f321f0e711ab8e47f7a0308e547d26ab3365eaa22b0b27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726484748471628064,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efede0e1597c8cbe70740f3169f7ec4a,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=030223b4-02b0-4dac-beb6-dc47667c05a9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:14:13 multinode-736061 crio[2989]: time="2024-09-16 11:14:13.091682436Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0d0b6ef9-efdd-4d2e-b3f1-e547e9252867 name=/runtime.v1.RuntimeService/Version
	Sep 16 11:14:13 multinode-736061 crio[2989]: time="2024-09-16 11:14:13.091778847Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0d0b6ef9-efdd-4d2e-b3f1-e547e9252867 name=/runtime.v1.RuntimeService/Version
	Sep 16 11:14:13 multinode-736061 crio[2989]: time="2024-09-16 11:14:13.093444941Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e3b3490b-5c21-4e95-aecd-5d989995534f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 11:14:13 multinode-736061 crio[2989]: time="2024-09-16 11:14:13.093822774Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485253093800241,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e3b3490b-5c21-4e95-aecd-5d989995534f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 11:14:13 multinode-736061 crio[2989]: time="2024-09-16 11:14:13.094262917Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a4977795-c5c4-47e2-906f-2e1f6daa562f name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:14:13 multinode-736061 crio[2989]: time="2024-09-16 11:14:13.094381436Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a4977795-c5c4-47e2-906f-2e1f6daa562f name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:14:13 multinode-736061 crio[2989]: time="2024-09-16 11:14:13.094736973Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:522d3b85a45483f18f7f58bd8b3e8d13be8e39fc300735f74f42f329fc76b41c,PodSandboxId:c27596adc976965a1472bc763ada489b272e6b61d2f34ac0643f3897280bfd19,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726485188158372438,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-g9fqk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0dd08783-fcfd-441f-8bda-c82c0c15173e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34160c655e5ab02a4a54bd049c7fbf32b5f1f714b64212932d9c2c16e2d4bf25,PodSandboxId:d6609b6804e2135dcae87f33c158bf648f918c21cfda5ee80259df9d474184ae,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726485154742693212,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qb4tq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 933f0749-7868-4e96-9b8e-67005545bbc5,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35a7839cd57d08487c669b6c86350b4a2f5f3b241a502106911bebe57fc8af36,PodSandboxId:78066c652dd8faaf3adaf02ffc316c8c72ab996c03a8aa8fe4284f1c3783c373,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726485154656393416,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nlhl2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ea84b9d-f364-4e26-8dc8-44c3b4d92417,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87a99d0015cbca7c7d38632c9cf9a1a9a00ce88cbe573c612f87fc9ca64ae309,PodSandboxId:b06a4343bbdd3a9bb94de977a2f0df301ae567da38dde74613b8372b9badaa2e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726485154505906722,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e515ac2-bcf5-4a0f-a3af-b4ee4e03a534,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d81e17eebccf4b84f735e1cafbb3e77e8e76abd8f4d62edf876aa5d55fd0b0d,PodSandboxId:fcfacdd69a46c5dab45cd0a3aa1fa9de4c957a839c412471e98012ee586cb38c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726485154436680778,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ftj9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa72720f-1c4a-46a2-a733-f411ccb6f628,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e7284c90c8c712246e8ddc247d88f7825f37f3e3c689acda276e3052c41850d,PodSandboxId:d9afb21537018b32c40d57ef0e3594c924f50d2bf3259377c7789677aadd1cc2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726485150640534512,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de66983060c1e167c6b9498eb8b0a025,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae1251600e6e87614f76ce47d26bb537d329e9c3726f16afd53b5917cabf6526,PodSandboxId:cd4168d0828d2e6cf60f35609bcafa8f36efd5728470fe9b5a5927f930be6967,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726485150608915249,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69d3e8c6e76d0bc1af3482326f7904d1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fa850b5495ffd4ec6303d58bd055944e4fd2d6bd860b6b70273e719497bdf2d,PodSandboxId:f4286a53710f269af6ca4f92b1992be516f9dedc66f945f30e7b0b95cdb2b85e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726485150554479561,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efede0e1597c8cbe70740f3169f7ec4a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:126fd7058d64d004c8ecb83905bc9e04f7dcd51ba92c467d2ec0175a0aedb8d4,PodSandboxId:113acd43d732ed38df48b77f813934192a2a1cfab0f438e8905c93feade283e7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726485150539003440,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94d3338940ee73a61a5075650d027904,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84517e6af45b4aaa0074acc2c9f529a29e3e476ef98b8a9b414655341b967c3b,PodSandboxId:779060032a611374116cc1e94df9c7e4a6c1443abec99f7abdef9464025a6169,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726484826321999428,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-g9fqk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0dd08783-fcfd-441f-8bda-c82c0c15173e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:840a587a0926e82c37a05beb15b8a2038534eaa736ce1cfed1c0eecaa13a5cdd,PodSandboxId:19286465f900afb5c6da745e2d4672e76d0bd87c33d561cb8eff8479eb72b5c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726484771766267901,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nlhl2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ea84b9d-f364-4e26-8dc8-44c3b4d92417,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02223ab1824986d0b8986be277df48e11a9e023715892ea637bf405f10af7198,PodSandboxId:01381d4d113d1f5aa2f6b3834d0194f5b4909599f84a8b29647fabd8f0fa5f7c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726484771695970386,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 5e515ac2-bcf5-4a0f-a3af-b4ee4e03a534,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a89ff755837adc838b2681df4fdb3f0cd2733af338156c540403f460fc59bc0,PodSandboxId:bd141ffff1a91e8b17d63ca2b8898889ab336bfde15ecdc79964aafaa1123465,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726484759715057078,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qb4tq,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 933f0749-7868-4e96-9b8e-67005545bbc5,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8c55edbe2173ff25f684104100b9d6d5b8d80563dadb06b4c24bb06b2bf68ee,PodSandboxId:cc5264d1c4b520dfff86dfc83596919ff23f9336b58339ea2a10f4492ba02b12,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726484759520373663,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ftj9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa72720f-1c4a-46a2-a733
-f411ccb6f628,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b76d5d4ad419a79a8aa910e3cea3029c12e6a90c7750317c5fbe1c40b43aa762,PodSandboxId:f771edf6fcef2eb32fd9b47472910c255ef2ee1f910f415da1ca1b8287fece1e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726484748620399557,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de66983060c1e167c6b9498eb8b0a025,}
,Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:769a75ad1934a20389b16c528015b4fd90be4bf92209ba2509e2cdf2e57e2a24,PodSandboxId:6237db42cfa9d23e63f16ff0ffa961fcbc1f9f25138e7c7fc11442d28aeecaff,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726484748618867302,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69d3e8c6e76d0bc1af3482326f7904d1,},Annotations:map[string]string{io.kubernetes.
container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d53f9aec7bc35845a9064135b7cf6154a190d2b15edab056cfd8296575fb25ba,PodSandboxId:c1754b1d745471f13f360eb1605866caa0a1b248d76224212a537736d73a9f20,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726484748609890980,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94d3338940ee73a61a5075650d027904,},Annotations:map[string]string{io
.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed73e9089f633c6b3f26009f325ee6cfdd62ef60de41cbb639ec4aa8b02b84a7,PodSandboxId:06f23871be821ecc20f321f0e711ab8e47f7a0308e547d26ab3365eaa22b0b27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726484748471628064,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efede0e1597c8cbe70740f3169f7ec4a,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a4977795-c5c4-47e2-906f-2e1f6daa562f name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:14:13 multinode-736061 crio[2989]: time="2024-09-16 11:14:13.136907065Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c8b241ce-f0ae-4df6-a1d3-92e7a38696df name=/runtime.v1.RuntimeService/Version
	Sep 16 11:14:13 multinode-736061 crio[2989]: time="2024-09-16 11:14:13.136996736Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c8b241ce-f0ae-4df6-a1d3-92e7a38696df name=/runtime.v1.RuntimeService/Version
	Sep 16 11:14:13 multinode-736061 crio[2989]: time="2024-09-16 11:14:13.138596603Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=97a9ebce-ba6d-4b65-b45d-e6a216e5e47d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 11:14:13 multinode-736061 crio[2989]: time="2024-09-16 11:14:13.139055168Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485253139027217,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=97a9ebce-ba6d-4b65-b45d-e6a216e5e47d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 11:14:13 multinode-736061 crio[2989]: time="2024-09-16 11:14:13.139710340Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1bf30220-68fe-4db1-906d-74e1f0701fdb name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:14:13 multinode-736061 crio[2989]: time="2024-09-16 11:14:13.139769173Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1bf30220-68fe-4db1-906d-74e1f0701fdb name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:14:13 multinode-736061 crio[2989]: time="2024-09-16 11:14:13.140093109Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:522d3b85a45483f18f7f58bd8b3e8d13be8e39fc300735f74f42f329fc76b41c,PodSandboxId:c27596adc976965a1472bc763ada489b272e6b61d2f34ac0643f3897280bfd19,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726485188158372438,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-g9fqk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0dd08783-fcfd-441f-8bda-c82c0c15173e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34160c655e5ab02a4a54bd049c7fbf32b5f1f714b64212932d9c2c16e2d4bf25,PodSandboxId:d6609b6804e2135dcae87f33c158bf648f918c21cfda5ee80259df9d474184ae,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726485154742693212,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qb4tq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 933f0749-7868-4e96-9b8e-67005545bbc5,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35a7839cd57d08487c669b6c86350b4a2f5f3b241a502106911bebe57fc8af36,PodSandboxId:78066c652dd8faaf3adaf02ffc316c8c72ab996c03a8aa8fe4284f1c3783c373,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726485154656393416,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nlhl2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ea84b9d-f364-4e26-8dc8-44c3b4d92417,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87a99d0015cbca7c7d38632c9cf9a1a9a00ce88cbe573c612f87fc9ca64ae309,PodSandboxId:b06a4343bbdd3a9bb94de977a2f0df301ae567da38dde74613b8372b9badaa2e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726485154505906722,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e515ac2-bcf5-4a0f-a3af-b4ee4e03a534,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d81e17eebccf4b84f735e1cafbb3e77e8e76abd8f4d62edf876aa5d55fd0b0d,PodSandboxId:fcfacdd69a46c5dab45cd0a3aa1fa9de4c957a839c412471e98012ee586cb38c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726485154436680778,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ftj9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa72720f-1c4a-46a2-a733-f411ccb6f628,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e7284c90c8c712246e8ddc247d88f7825f37f3e3c689acda276e3052c41850d,PodSandboxId:d9afb21537018b32c40d57ef0e3594c924f50d2bf3259377c7789677aadd1cc2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726485150640534512,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de66983060c1e167c6b9498eb8b0a025,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae1251600e6e87614f76ce47d26bb537d329e9c3726f16afd53b5917cabf6526,PodSandboxId:cd4168d0828d2e6cf60f35609bcafa8f36efd5728470fe9b5a5927f930be6967,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726485150608915249,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69d3e8c6e76d0bc1af3482326f7904d1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fa850b5495ffd4ec6303d58bd055944e4fd2d6bd860b6b70273e719497bdf2d,PodSandboxId:f4286a53710f269af6ca4f92b1992be516f9dedc66f945f30e7b0b95cdb2b85e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726485150554479561,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efede0e1597c8cbe70740f3169f7ec4a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:126fd7058d64d004c8ecb83905bc9e04f7dcd51ba92c467d2ec0175a0aedb8d4,PodSandboxId:113acd43d732ed38df48b77f813934192a2a1cfab0f438e8905c93feade283e7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726485150539003440,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94d3338940ee73a61a5075650d027904,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84517e6af45b4aaa0074acc2c9f529a29e3e476ef98b8a9b414655341b967c3b,PodSandboxId:779060032a611374116cc1e94df9c7e4a6c1443abec99f7abdef9464025a6169,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726484826321999428,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-g9fqk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0dd08783-fcfd-441f-8bda-c82c0c15173e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:840a587a0926e82c37a05beb15b8a2038534eaa736ce1cfed1c0eecaa13a5cdd,PodSandboxId:19286465f900afb5c6da745e2d4672e76d0bd87c33d561cb8eff8479eb72b5c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726484771766267901,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nlhl2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ea84b9d-f364-4e26-8dc8-44c3b4d92417,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02223ab1824986d0b8986be277df48e11a9e023715892ea637bf405f10af7198,PodSandboxId:01381d4d113d1f5aa2f6b3834d0194f5b4909599f84a8b29647fabd8f0fa5f7c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726484771695970386,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 5e515ac2-bcf5-4a0f-a3af-b4ee4e03a534,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a89ff755837adc838b2681df4fdb3f0cd2733af338156c540403f460fc59bc0,PodSandboxId:bd141ffff1a91e8b17d63ca2b8898889ab336bfde15ecdc79964aafaa1123465,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726484759715057078,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qb4tq,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 933f0749-7868-4e96-9b8e-67005545bbc5,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8c55edbe2173ff25f684104100b9d6d5b8d80563dadb06b4c24bb06b2bf68ee,PodSandboxId:cc5264d1c4b520dfff86dfc83596919ff23f9336b58339ea2a10f4492ba02b12,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726484759520373663,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ftj9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa72720f-1c4a-46a2-a733
-f411ccb6f628,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b76d5d4ad419a79a8aa910e3cea3029c12e6a90c7750317c5fbe1c40b43aa762,PodSandboxId:f771edf6fcef2eb32fd9b47472910c255ef2ee1f910f415da1ca1b8287fece1e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726484748620399557,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de66983060c1e167c6b9498eb8b0a025,}
,Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:769a75ad1934a20389b16c528015b4fd90be4bf92209ba2509e2cdf2e57e2a24,PodSandboxId:6237db42cfa9d23e63f16ff0ffa961fcbc1f9f25138e7c7fc11442d28aeecaff,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726484748618867302,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69d3e8c6e76d0bc1af3482326f7904d1,},Annotations:map[string]string{io.kubernetes.
container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d53f9aec7bc35845a9064135b7cf6154a190d2b15edab056cfd8296575fb25ba,PodSandboxId:c1754b1d745471f13f360eb1605866caa0a1b248d76224212a537736d73a9f20,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726484748609890980,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94d3338940ee73a61a5075650d027904,},Annotations:map[string]string{io
.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed73e9089f633c6b3f26009f325ee6cfdd62ef60de41cbb639ec4aa8b02b84a7,PodSandboxId:06f23871be821ecc20f321f0e711ab8e47f7a0308e547d26ab3365eaa22b0b27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726484748471628064,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efede0e1597c8cbe70740f3169f7ec4a,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1bf30220-68fe-4db1-906d-74e1f0701fdb name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:14:13 multinode-736061 crio[2989]: time="2024-09-16 11:14:13.187922215Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fe3d681d-fa84-4965-8de1-439e01119a4c name=/runtime.v1.RuntimeService/Version
	Sep 16 11:14:13 multinode-736061 crio[2989]: time="2024-09-16 11:14:13.188022109Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fe3d681d-fa84-4965-8de1-439e01119a4c name=/runtime.v1.RuntimeService/Version
	Sep 16 11:14:13 multinode-736061 crio[2989]: time="2024-09-16 11:14:13.189464015Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c9732518-b1b1-4fdf-9c75-fdadbdc63fc0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 11:14:13 multinode-736061 crio[2989]: time="2024-09-16 11:14:13.189848191Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485253189827563,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c9732518-b1b1-4fdf-9c75-fdadbdc63fc0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 11:14:13 multinode-736061 crio[2989]: time="2024-09-16 11:14:13.190736656Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=28589711-8e84-4772-ae22-4cf40dff9ebc name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:14:13 multinode-736061 crio[2989]: time="2024-09-16 11:14:13.190810899Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=28589711-8e84-4772-ae22-4cf40dff9ebc name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:14:13 multinode-736061 crio[2989]: time="2024-09-16 11:14:13.191141716Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:522d3b85a45483f18f7f58bd8b3e8d13be8e39fc300735f74f42f329fc76b41c,PodSandboxId:c27596adc976965a1472bc763ada489b272e6b61d2f34ac0643f3897280bfd19,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726485188158372438,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-g9fqk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0dd08783-fcfd-441f-8bda-c82c0c15173e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34160c655e5ab02a4a54bd049c7fbf32b5f1f714b64212932d9c2c16e2d4bf25,PodSandboxId:d6609b6804e2135dcae87f33c158bf648f918c21cfda5ee80259df9d474184ae,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726485154742693212,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qb4tq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 933f0749-7868-4e96-9b8e-67005545bbc5,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35a7839cd57d08487c669b6c86350b4a2f5f3b241a502106911bebe57fc8af36,PodSandboxId:78066c652dd8faaf3adaf02ffc316c8c72ab996c03a8aa8fe4284f1c3783c373,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726485154656393416,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nlhl2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ea84b9d-f364-4e26-8dc8-44c3b4d92417,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87a99d0015cbca7c7d38632c9cf9a1a9a00ce88cbe573c612f87fc9ca64ae309,PodSandboxId:b06a4343bbdd3a9bb94de977a2f0df301ae567da38dde74613b8372b9badaa2e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726485154505906722,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e515ac2-bcf5-4a0f-a3af-b4ee4e03a534,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d81e17eebccf4b84f735e1cafbb3e77e8e76abd8f4d62edf876aa5d55fd0b0d,PodSandboxId:fcfacdd69a46c5dab45cd0a3aa1fa9de4c957a839c412471e98012ee586cb38c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726485154436680778,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ftj9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa72720f-1c4a-46a2-a733-f411ccb6f628,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e7284c90c8c712246e8ddc247d88f7825f37f3e3c689acda276e3052c41850d,PodSandboxId:d9afb21537018b32c40d57ef0e3594c924f50d2bf3259377c7789677aadd1cc2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726485150640534512,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de66983060c1e167c6b9498eb8b0a025,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae1251600e6e87614f76ce47d26bb537d329e9c3726f16afd53b5917cabf6526,PodSandboxId:cd4168d0828d2e6cf60f35609bcafa8f36efd5728470fe9b5a5927f930be6967,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726485150608915249,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69d3e8c6e76d0bc1af3482326f7904d1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fa850b5495ffd4ec6303d58bd055944e4fd2d6bd860b6b70273e719497bdf2d,PodSandboxId:f4286a53710f269af6ca4f92b1992be516f9dedc66f945f30e7b0b95cdb2b85e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726485150554479561,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efede0e1597c8cbe70740f3169f7ec4a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:126fd7058d64d004c8ecb83905bc9e04f7dcd51ba92c467d2ec0175a0aedb8d4,PodSandboxId:113acd43d732ed38df48b77f813934192a2a1cfab0f438e8905c93feade283e7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726485150539003440,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94d3338940ee73a61a5075650d027904,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84517e6af45b4aaa0074acc2c9f529a29e3e476ef98b8a9b414655341b967c3b,PodSandboxId:779060032a611374116cc1e94df9c7e4a6c1443abec99f7abdef9464025a6169,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726484826321999428,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-g9fqk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0dd08783-fcfd-441f-8bda-c82c0c15173e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:840a587a0926e82c37a05beb15b8a2038534eaa736ce1cfed1c0eecaa13a5cdd,PodSandboxId:19286465f900afb5c6da745e2d4672e76d0bd87c33d561cb8eff8479eb72b5c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726484771766267901,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nlhl2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ea84b9d-f364-4e26-8dc8-44c3b4d92417,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02223ab1824986d0b8986be277df48e11a9e023715892ea637bf405f10af7198,PodSandboxId:01381d4d113d1f5aa2f6b3834d0194f5b4909599f84a8b29647fabd8f0fa5f7c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726484771695970386,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 5e515ac2-bcf5-4a0f-a3af-b4ee4e03a534,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a89ff755837adc838b2681df4fdb3f0cd2733af338156c540403f460fc59bc0,PodSandboxId:bd141ffff1a91e8b17d63ca2b8898889ab336bfde15ecdc79964aafaa1123465,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726484759715057078,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qb4tq,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 933f0749-7868-4e96-9b8e-67005545bbc5,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8c55edbe2173ff25f684104100b9d6d5b8d80563dadb06b4c24bb06b2bf68ee,PodSandboxId:cc5264d1c4b520dfff86dfc83596919ff23f9336b58339ea2a10f4492ba02b12,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726484759520373663,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ftj9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa72720f-1c4a-46a2-a733
-f411ccb6f628,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b76d5d4ad419a79a8aa910e3cea3029c12e6a90c7750317c5fbe1c40b43aa762,PodSandboxId:f771edf6fcef2eb32fd9b47472910c255ef2ee1f910f415da1ca1b8287fece1e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726484748620399557,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de66983060c1e167c6b9498eb8b0a025,}
,Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:769a75ad1934a20389b16c528015b4fd90be4bf92209ba2509e2cdf2e57e2a24,PodSandboxId:6237db42cfa9d23e63f16ff0ffa961fcbc1f9f25138e7c7fc11442d28aeecaff,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726484748618867302,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69d3e8c6e76d0bc1af3482326f7904d1,},Annotations:map[string]string{io.kubernetes.
container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d53f9aec7bc35845a9064135b7cf6154a190d2b15edab056cfd8296575fb25ba,PodSandboxId:c1754b1d745471f13f360eb1605866caa0a1b248d76224212a537736d73a9f20,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726484748609890980,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94d3338940ee73a61a5075650d027904,},Annotations:map[string]string{io
.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed73e9089f633c6b3f26009f325ee6cfdd62ef60de41cbb639ec4aa8b02b84a7,PodSandboxId:06f23871be821ecc20f321f0e711ab8e47f7a0308e547d26ab3365eaa22b0b27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726484748471628064,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efede0e1597c8cbe70740f3169f7ec4a,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=28589711-8e84-4772-ae22-4cf40dff9ebc name=/runtime.v1.RuntimeService/ListContainers
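	Note: the entries above are CRI-O's debug-level traces of CRI RPCs (Version, ImageFsInfo, ListContainers) issued against the runtime; the repeated ListContainersResponse dumps are successive polls of the same container set. A roughly equivalent listing can be obtained on the node itself (a sketch, assuming crictl is present in the minikube VM and CRI-O listens on its default socket; run via minikube ssh -p multinode-736061):
	
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a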
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	522d3b85a4548       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   c27596adc9769       busybox-7dff88458-g9fqk
	34160c655e5ab       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      About a minute ago   Running             kindnet-cni               1                   d6609b6804e21       kindnet-qb4tq
	35a7839cd57d0       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      About a minute ago   Running             coredns                   1                   78066c652dd8f       coredns-7c65d6cfc9-nlhl2
	87a99d0015cbc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   b06a4343bbdd3       storage-provisioner
	2d81e17eebccf       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      About a minute ago   Running             kube-proxy                1                   fcfacdd69a46c       kube-proxy-ftj9p
	2e7284c90c8c7       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      About a minute ago   Running             kube-scheduler            1                   d9afb21537018       kube-scheduler-multinode-736061
	ae1251600e6e8       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      About a minute ago   Running             etcd                      1                   cd4168d0828d2       etcd-multinode-736061
	8fa850b5495ff       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      About a minute ago   Running             kube-apiserver            1                   f4286a53710f2       kube-apiserver-multinode-736061
	126fd7058d64d       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      About a minute ago   Running             kube-controller-manager   1                   113acd43d732e       kube-controller-manager-multinode-736061
	84517e6af45b4       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   779060032a611       busybox-7dff88458-g9fqk
	840a587a0926e       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      8 minutes ago        Exited              coredns                   0                   19286465f900a       coredns-7c65d6cfc9-nlhl2
	02223ab182498       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago        Exited              storage-provisioner       0                   01381d4d113d1       storage-provisioner
	7a89ff755837a       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      8 minutes ago        Exited              kindnet-cni               0                   bd141ffff1a91       kindnet-qb4tq
	f8c55edbe2173       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      8 minutes ago        Exited              kube-proxy                0                   cc5264d1c4b52       kube-proxy-ftj9p
	b76d5d4ad419a       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      8 minutes ago        Exited              kube-scheduler            0                   f771edf6fcef2       kube-scheduler-multinode-736061
	769a75ad1934a       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      8 minutes ago        Exited              etcd                      0                   6237db42cfa9d       etcd-multinode-736061
	d53f9aec7bc35       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      8 minutes ago        Exited              kube-controller-manager   0                   c1754b1d74547       kube-controller-manager-multinode-736061
	ed73e9089f633       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      8 minutes ago        Exited              kube-apiserver            0                   06f23871be821       kube-apiserver-multinode-736061
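	Note: in this table the ATTEMPT 1 / Running rows are the recreated container instances and the ATTEMPT 0 / Exited rows are the earlier ones; the ATTEMPT column matches the io.kubernetes.container.restartCount annotation in the ListContainers output above. The corresponding pod-level view can be taken from the cluster (a sketch, assuming the kubectl context carries the profile name, as elsewhere in this report):
	
	kubectl --context multinode-736061 -n kube-system get pods -o wide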
	
	
	==> coredns [35a7839cd57d08487c669b6c86350b4a2f5f3b241a502106911bebe57fc8af36] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:40656 - 6477 "HINFO IN 2586289926805624417.1154026984614338138. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.028767921s
	
	
	==> coredns [840a587a0926e82c37a05beb15b8a2038534eaa736ce1cfed1c0eecaa13a5cdd] <==
	[INFO] 10.244.0.3:48472 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001859185s
	[INFO] 10.244.0.3:58999 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000160969s
	[INFO] 10.244.0.3:35408 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00007258s
	[INFO] 10.244.0.3:41914 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001221958s
	[INFO] 10.244.0.3:51441 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000075035s
	[INFO] 10.244.0.3:54367 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000064081s
	[INFO] 10.244.0.3:51073 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000061874s
	[INFO] 10.244.1.2:38827 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130826s
	[INFO] 10.244.1.2:49788 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142283s
	[INFO] 10.244.1.2:43407 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000083078s
	[INFO] 10.244.1.2:35506 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000123825s
	[INFO] 10.244.0.3:35311 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00008958s
	[INFO] 10.244.0.3:44801 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000055108s
	[INFO] 10.244.0.3:45405 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000039898s
	[INFO] 10.244.0.3:53790 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000037364s
	[INFO] 10.244.1.2:44863 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136337s
	[INFO] 10.244.1.2:38345 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000494388s
	[INFO] 10.244.1.2:36190 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000247796s
	[INFO] 10.244.1.2:38755 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000120111s
	[INFO] 10.244.0.3:58238 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129373s
	[INFO] 10.244.0.3:55519 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000102337s
	[INFO] 10.244.0.3:60945 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000061359s
	[INFO] 10.244.0.3:52747 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00010905s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-736061
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-736061
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=multinode-736061
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T11_05_54_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 11:05:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-736061
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 11:14:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 11:12:33 +0000   Mon, 16 Sep 2024 11:05:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 11:12:33 +0000   Mon, 16 Sep 2024 11:05:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 11:12:33 +0000   Mon, 16 Sep 2024 11:05:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 11:12:33 +0000   Mon, 16 Sep 2024 11:06:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.32
	  Hostname:    multinode-736061
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 60fe80618d4f42e281d4c50393e9d89e
	  System UUID:                60fe8061-8d4f-42e2-81d4-c50393e9d89e
	  Boot ID:                    d046d280-229f-4e9a-8a6c-1986374da911
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-g9fqk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m9s
	  kube-system                 coredns-7c65d6cfc9-nlhl2                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m14s
	  kube-system                 etcd-multinode-736061                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m20s
	  kube-system                 kindnet-qb4tq                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m15s
	  kube-system                 kube-apiserver-multinode-736061             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m20s
	  kube-system                 kube-controller-manager-multinode-736061    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m20s
	  kube-system                 kube-proxy-ftj9p                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m15s
	  kube-system                 kube-scheduler-multinode-736061             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m20s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m13s                  kube-proxy       
	  Normal  Starting                 98s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  8m26s (x8 over 8m26s)  kubelet          Node multinode-736061 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m26s (x8 over 8m26s)  kubelet          Node multinode-736061 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m26s (x7 over 8m26s)  kubelet          Node multinode-736061 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m20s                  kubelet          Node multinode-736061 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  8m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    8m20s                  kubelet          Node multinode-736061 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m20s                  kubelet          Node multinode-736061 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m20s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m15s                  node-controller  Node multinode-736061 event: Registered Node multinode-736061 in Controller
	  Normal  NodeReady                8m2s                   kubelet          Node multinode-736061 status is now: NodeReady
	  Normal  Starting                 104s                   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  104s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  103s (x8 over 104s)    kubelet          Node multinode-736061 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    103s (x8 over 104s)    kubelet          Node multinode-736061 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     103s (x7 over 104s)    kubelet          Node multinode-736061 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           97s                    node-controller  Node multinode-736061 event: Registered Node multinode-736061 in Controller
	
	
	Name:               multinode-736061-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-736061-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=multinode-736061
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T11_13_11_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 11:13:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-736061-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 11:14:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 11:13:40 +0000   Mon, 16 Sep 2024 11:13:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 11:13:40 +0000   Mon, 16 Sep 2024 11:13:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 11:13:40 +0000   Mon, 16 Sep 2024 11:13:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 11:13:40 +0000   Mon, 16 Sep 2024 11:13:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.215
	  Hostname:    multinode-736061-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d4fe337504134150bccd557919449b29
	  System UUID:                d4fe3375-0413-4150-bccd-557919449b29
	  Boot ID:                    d98e6a6c-e943-4dd6-9c7a-051fe2e4235b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-7dvrx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 kindnet-xlrxb              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m31s
	  kube-system                 kube-proxy-8h6jp           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 7m25s                  kube-proxy  
	  Normal  Starting                 58s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  7m31s (x2 over 7m31s)  kubelet     Node multinode-736061-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m31s (x2 over 7m31s)  kubelet     Node multinode-736061-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m31s (x2 over 7m31s)  kubelet     Node multinode-736061-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m31s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m12s                  kubelet     Node multinode-736061-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  63s (x2 over 63s)      kubelet     Node multinode-736061-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    63s (x2 over 63s)      kubelet     Node multinode-736061-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     63s (x2 over 63s)      kubelet     Node multinode-736061-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  63s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                45s                    kubelet     Node multinode-736061-m02 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.065798] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064029] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.188943] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.125437] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.281577] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +3.899790] systemd-fstab-generator[747]: Ignoring "noauto" option for root device
	[  +3.897000] systemd-fstab-generator[878]: Ignoring "noauto" option for root device
	[  +0.059824] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.997335] systemd-fstab-generator[1219]: Ignoring "noauto" option for root device
	[  +0.078309] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.139976] systemd-fstab-generator[1321]: Ignoring "noauto" option for root device
	[  +0.076513] kauditd_printk_skb: 18 callbacks suppressed
	[Sep16 11:06] kauditd_printk_skb: 69 callbacks suppressed
	[Sep16 11:07] kauditd_printk_skb: 12 callbacks suppressed
	[Sep16 11:12] systemd-fstab-generator[2913]: Ignoring "noauto" option for root device
	[  +0.148062] systemd-fstab-generator[2925]: Ignoring "noauto" option for root device
	[  +0.171344] systemd-fstab-generator[2940]: Ignoring "noauto" option for root device
	[  +0.138643] systemd-fstab-generator[2952]: Ignoring "noauto" option for root device
	[  +0.279343] systemd-fstab-generator[2980]: Ignoring "noauto" option for root device
	[  +0.718595] systemd-fstab-generator[3070]: Ignoring "noauto" option for root device
	[  +2.178122] systemd-fstab-generator[3193]: Ignoring "noauto" option for root device
	[  +4.699068] kauditd_printk_skb: 184 callbacks suppressed
	[ +11.680556] systemd-fstab-generator[4044]: Ignoring "noauto" option for root device
	[  +0.106179] kauditd_printk_skb: 34 callbacks suppressed
	[Sep16 11:13] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [769a75ad1934a20389b16c528015b4fd90be4bf92209ba2509e2cdf2e57e2a24] <==
	{"level":"info","ts":"2024-09-16T11:05:49.392766Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:05:49.393463Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T11:06:03.777149Z","caller":"traceutil/trace.go:171","msg":"trace[927915415] transaction","detail":"{read_only:false; response_revision:407; number_of_response:1; }","duration":"125.996547ms","start":"2024-09-16T11:06:03.651108Z","end":"2024-09-16T11:06:03.777104Z","steps":["trace[927915415] 'process raft request'  (duration: 125.663993ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T11:06:42.434928Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"155.290318ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7316539574759162275 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-736061-m02.17f5b4c7bf86ac19\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-736061-m02.17f5b4c7bf86ac19\" value_size:642 lease:7316539574759161296 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-09-16T11:06:42.435173Z","caller":"traceutil/trace.go:171","msg":"trace[736335181] transaction","detail":"{read_only:false; response_revision:467; number_of_response:1; }","duration":"242.745028ms","start":"2024-09-16T11:06:42.192402Z","end":"2024-09-16T11:06:42.435147Z","steps":["trace[736335181] 'process raft request'  (duration: 86.752839ms)","trace[736335181] 'compare'  (duration: 155.030741ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-16T11:06:42.435488Z","caller":"traceutil/trace.go:171","msg":"trace[1491776336] transaction","detail":"{read_only:false; response_revision:468; number_of_response:1; }","duration":"164.53116ms","start":"2024-09-16T11:06:42.270945Z","end":"2024-09-16T11:06:42.435476Z","steps":["trace[1491776336] 'process raft request'  (duration: 164.128437ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T11:07:36.191017Z","caller":"traceutil/trace.go:171","msg":"trace[1370350330] linearizableReadLoop","detail":"{readStateIndex:632; appliedIndex:631; }","duration":"135.211812ms","start":"2024-09-16T11:07:36.055773Z","end":"2024-09-16T11:07:36.190985Z","steps":["trace[1370350330] 'read index received'  (duration: 127.332155ms)","trace[1370350330] 'applied index is now lower than readState.Index'  (duration: 7.878564ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-16T11:07:36.191190Z","caller":"traceutil/trace.go:171","msg":"trace[1606896706] transaction","detail":"{read_only:false; response_revision:598; number_of_response:1; }","duration":"230.440734ms","start":"2024-09-16T11:07:35.960732Z","end":"2024-09-16T11:07:36.191172Z","steps":["trace[1606896706] 'process raft request'  (duration: 222.394697ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T11:07:36.191504Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"135.712787ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-736061-m03\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T11:07:36.191575Z","caller":"traceutil/trace.go:171","msg":"trace[641878152] range","detail":"{range_begin:/registry/minions/multinode-736061-m03; range_end:; response_count:0; response_revision:598; }","duration":"135.807158ms","start":"2024-09-16T11:07:36.055751Z","end":"2024-09-16T11:07:36.191558Z","steps":["trace[641878152] 'agreement among raft nodes before linearized reading'  (duration: 135.656463ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T11:07:43.320131Z","caller":"traceutil/trace.go:171","msg":"trace[1026367264] linearizableReadLoop","detail":"{readStateIndex:678; appliedIndex:677; }","duration":"256.510329ms","start":"2024-09-16T11:07:43.063604Z","end":"2024-09-16T11:07:43.320115Z","steps":["trace[1026367264] 'read index received'  (duration: 208.747621ms)","trace[1026367264] 'applied index is now lower than readState.Index'  (duration: 47.76201ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-16T11:07:43.320580Z","caller":"traceutil/trace.go:171","msg":"trace[845413732] transaction","detail":"{read_only:false; response_revision:640; number_of_response:1; }","duration":"283.063625ms","start":"2024-09-16T11:07:43.037497Z","end":"2024-09-16T11:07:43.320560Z","steps":["trace[845413732] 'process raft request'  (duration: 234.904981ms)","trace[845413732] 'compare'  (duration: 47.473062ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-16T11:07:43.320947Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"257.339861ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-736061-m03\" ","response":"range_response_count:1 size:2893"}
	{"level":"info","ts":"2024-09-16T11:07:43.321022Z","caller":"traceutil/trace.go:171","msg":"trace[1372162398] range","detail":"{range_begin:/registry/minions/multinode-736061-m03; range_end:; response_count:1; response_revision:640; }","duration":"257.429414ms","start":"2024-09-16T11:07:43.063585Z","end":"2024-09-16T11:07:43.321014Z","steps":["trace[1372162398] 'agreement among raft nodes before linearized reading'  (duration: 257.097073ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T11:08:32.848686Z","caller":"traceutil/trace.go:171","msg":"trace[1433849770] transaction","detail":"{read_only:false; response_revision:728; number_of_response:1; }","duration":"176.13666ms","start":"2024-09-16T11:08:32.672526Z","end":"2024-09-16T11:08:32.848663Z","steps":["trace[1433849770] 'process raft request'  (duration: 175.720453ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T11:10:54.687328Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-16T11:10:54.687457Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-736061","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.32:2380"],"advertise-client-urls":["https://192.168.39.32:2379"]}
	{"level":"warn","ts":"2024-09-16T11:10:54.687629Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.32:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T11:10:54.687676Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.32:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T11:10:54.689450Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T11:10:54.689531Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-16T11:10:54.770633Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"d4c05646b7156589","current-leader-member-id":"d4c05646b7156589"}
	{"level":"info","ts":"2024-09-16T11:10:54.773137Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.32:2380"}
	{"level":"info","ts":"2024-09-16T11:10:54.773277Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.32:2380"}
	{"level":"info","ts":"2024-09-16T11:10:54.773343Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-736061","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.32:2380"],"advertise-client-urls":["https://192.168.39.32:2379"]}
	
	
	==> etcd [ae1251600e6e87614f76ce47d26bb537d329e9c3726f16afd53b5917cabf6526] <==
	{"level":"info","ts":"2024-09-16T11:12:31.076410Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68bdcbcbc4b793bb","local-member-id":"d4c05646b7156589","added-peer-id":"d4c05646b7156589","added-peer-peer-urls":["https://192.168.39.32:2380"]}
	{"level":"info","ts":"2024-09-16T11:12:31.076610Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68bdcbcbc4b793bb","local-member-id":"d4c05646b7156589","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:12:31.076674Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:12:31.083484Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:12:31.096736Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T11:12:31.097022Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"d4c05646b7156589","initial-advertise-peer-urls":["https://192.168.39.32:2380"],"listen-peer-urls":["https://192.168.39.32:2380"],"advertise-client-urls":["https://192.168.39.32:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.32:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T11:12:31.097067Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T11:12:31.097111Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.32:2380"}
	{"level":"info","ts":"2024-09-16T11:12:31.097134Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.32:2380"}
	{"level":"info","ts":"2024-09-16T11:12:32.130362Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c05646b7156589 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-16T11:12:32.130461Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c05646b7156589 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-16T11:12:32.130485Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c05646b7156589 received MsgPreVoteResp from d4c05646b7156589 at term 2"}
	{"level":"info","ts":"2024-09-16T11:12:32.130501Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c05646b7156589 became candidate at term 3"}
	{"level":"info","ts":"2024-09-16T11:12:32.130507Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c05646b7156589 received MsgVoteResp from d4c05646b7156589 at term 3"}
	{"level":"info","ts":"2024-09-16T11:12:32.130515Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c05646b7156589 became leader at term 3"}
	{"level":"info","ts":"2024-09-16T11:12:32.130532Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d4c05646b7156589 elected leader d4c05646b7156589 at term 3"}
	{"level":"info","ts":"2024-09-16T11:12:32.136512Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"d4c05646b7156589","local-member-attributes":"{Name:multinode-736061 ClientURLs:[https://192.168.39.32:2379]}","request-path":"/0/members/d4c05646b7156589/attributes","cluster-id":"68bdcbcbc4b793bb","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T11:12:32.136525Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T11:12:32.136756Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T11:12:32.137155Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T11:12:32.137197Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T11:12:32.137926Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:12:32.137926Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:12:32.138897Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.32:2379"}
	{"level":"info","ts":"2024-09-16T11:12:32.139181Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 11:14:13 up 8 min,  0 users,  load average: 0.70, 0.51, 0.25
	Linux multinode-736061 5.10.207 #1 SMP Sun Sep 15 20:39:46 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [34160c655e5ab02a4a54bd049c7fbf32b5f1f714b64212932d9c2c16e2d4bf25] <==
	I0916 11:13:25.685796       1 main.go:322] Node multinode-736061-m03 has CIDR [10.244.4.0/24] 
	I0916 11:13:35.682408       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0916 11:13:35.682574       1 main.go:299] handling current node
	I0916 11:13:35.682632       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0916 11:13:35.682657       1 main.go:322] Node multinode-736061-m02 has CIDR [10.244.1.0/24] 
	I0916 11:13:35.683172       1 main.go:295] Handling node with IPs: map[192.168.39.60:{}]
	I0916 11:13:35.683223       1 main.go:322] Node multinode-736061-m03 has CIDR [10.244.4.0/24] 
	I0916 11:13:45.685747       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0916 11:13:45.685810       1 main.go:299] handling current node
	I0916 11:13:45.685832       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0916 11:13:45.685842       1 main.go:322] Node multinode-736061-m02 has CIDR [10.244.1.0/24] 
	I0916 11:13:45.686196       1 main.go:295] Handling node with IPs: map[192.168.39.60:{}]
	I0916 11:13:45.686237       1 main.go:322] Node multinode-736061-m03 has CIDR [10.244.4.0/24] 
	I0916 11:13:55.681649       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0916 11:13:55.681784       1 main.go:299] handling current node
	I0916 11:13:55.681816       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0916 11:13:55.681835       1 main.go:322] Node multinode-736061-m02 has CIDR [10.244.1.0/24] 
	I0916 11:13:55.681969       1 main.go:295] Handling node with IPs: map[192.168.39.60:{}]
	I0916 11:13:55.681991       1 main.go:322] Node multinode-736061-m03 has CIDR [10.244.2.0/24] 
	I0916 11:14:05.688020       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0916 11:14:05.688131       1 main.go:299] handling current node
	I0916 11:14:05.688166       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0916 11:14:05.688184       1 main.go:322] Node multinode-736061-m02 has CIDR [10.244.1.0/24] 
	I0916 11:14:05.688461       1 main.go:295] Handling node with IPs: map[192.168.39.60:{}]
	I0916 11:14:05.688502       1 main.go:322] Node multinode-736061-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [7a89ff755837adc838b2681df4fdb3f0cd2733af338156c540403f460fc59bc0] <==
	I0916 11:10:10.885622       1 main.go:322] Node multinode-736061-m03 has CIDR [10.244.4.0/24] 
	I0916 11:10:20.882088       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0916 11:10:20.882177       1 main.go:322] Node multinode-736061-m02 has CIDR [10.244.1.0/24] 
	I0916 11:10:20.882351       1 main.go:295] Handling node with IPs: map[192.168.39.60:{}]
	I0916 11:10:20.882379       1 main.go:322] Node multinode-736061-m03 has CIDR [10.244.4.0/24] 
	I0916 11:10:20.882438       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0916 11:10:20.882445       1 main.go:299] handling current node
	I0916 11:10:30.882343       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0916 11:10:30.882485       1 main.go:299] handling current node
	I0916 11:10:30.882519       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0916 11:10:30.882538       1 main.go:322] Node multinode-736061-m02 has CIDR [10.244.1.0/24] 
	I0916 11:10:30.882705       1 main.go:295] Handling node with IPs: map[192.168.39.60:{}]
	I0916 11:10:30.882730       1 main.go:322] Node multinode-736061-m03 has CIDR [10.244.4.0/24] 
	I0916 11:10:40.881843       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0916 11:10:40.881966       1 main.go:299] handling current node
	I0916 11:10:40.881993       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0916 11:10:40.882011       1 main.go:322] Node multinode-736061-m02 has CIDR [10.244.1.0/24] 
	I0916 11:10:40.882162       1 main.go:295] Handling node with IPs: map[192.168.39.60:{}]
	I0916 11:10:40.882241       1 main.go:322] Node multinode-736061-m03 has CIDR [10.244.4.0/24] 
	I0916 11:10:50.885456       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0916 11:10:50.885505       1 main.go:299] handling current node
	I0916 11:10:50.885524       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0916 11:10:50.885530       1 main.go:322] Node multinode-736061-m02 has CIDR [10.244.1.0/24] 
	I0916 11:10:50.885705       1 main.go:295] Handling node with IPs: map[192.168.39.60:{}]
	I0916 11:10:50.885712       1 main.go:322] Node multinode-736061-m03 has CIDR [10.244.4.0/24] 
	
	
	==> kube-apiserver [8fa850b5495ffd4ec6303d58bd055944e4fd2d6bd860b6b70273e719497bdf2d] <==
	I0916 11:12:33.498192       1 shared_informer.go:320] Caches are synced for configmaps
	I0916 11:12:33.501874       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0916 11:12:33.508959       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0916 11:12:33.509043       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0916 11:12:33.509776       1 aggregator.go:171] initial CRD sync complete...
	I0916 11:12:33.509828       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 11:12:33.509857       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 11:12:33.546526       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0916 11:12:33.568509       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0916 11:12:33.568599       1 policy_source.go:224] refreshing policies
	I0916 11:12:33.589155       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0916 11:12:33.590889       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0916 11:12:33.590927       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0916 11:12:33.591376       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0916 11:12:33.596733       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0916 11:12:33.620595       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 11:12:33.621748       1 cache.go:39] Caches are synced for autoregister controller
	I0916 11:12:34.423228       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0916 11:12:35.891543       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 11:12:36.022725       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 11:12:36.049167       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 11:12:36.129506       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 11:12:36.139653       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 11:12:37.024276       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 11:12:37.124173       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [ed73e9089f633c6b3f26009f325ee6cfdd62ef60de41cbb639ec4aa8b02b84a7] <==
	W0916 11:10:54.717805       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0916 11:10:54.721617       1 controller.go:131] Unable to remove endpoints from kubernetes service: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	I0916 11:10:54.721803       1 dynamic_cafile_content.go:174] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	W0916 11:10:54.722189       1 logging.go:55] [core] [Channel #8 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I0916 11:10:54.722608       1 storage_flowcontrol.go:186] APF bootstrap ensurer is exiting
	I0916 11:10:54.722692       1 cluster_authentication_trust_controller.go:466] Shutting down cluster_authentication_trust_controller controller
	I0916 11:10:54.722807       1 autoregister_controller.go:168] Shutting down autoregister controller
	I0916 11:10:54.722839       1 apiservice_controller.go:134] Shutting down APIServiceRegistrationController
	I0916 11:10:54.722854       1 crdregistration_controller.go:145] Shutting down crd-autoregister controller
	I0916 11:10:54.722888       1 apiapproval_controller.go:201] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
	I0916 11:10:54.722907       1 nonstructuralschema_controller.go:207] Shutting down NonStructuralSchemaConditionController
	I0916 11:10:54.722935       1 establishing_controller.go:92] Shutting down EstablishingController
	I0916 11:10:54.722948       1 naming_controller.go:305] Shutting down NamingConditionController
	I0916 11:10:54.722980       1 controller.go:120] Shutting down OpenAPI V3 controller
	I0916 11:10:54.722994       1 controller.go:170] Shutting down OpenAPI controller
	I0916 11:10:54.723024       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I0916 11:10:54.723033       1 remote_available_controller.go:427] Shutting down RemoteAvailability controller
	I0916 11:10:54.723049       1 customresource_discovery_controller.go:328] Shutting down DiscoveryController
	I0916 11:10:54.723078       1 apf_controller.go:389] Shutting down API Priority and Fairness config worker
	I0916 11:10:54.723096       1 local_available_controller.go:172] Shutting down LocalAvailability controller
	I0916 11:10:54.723124       1 controller.go:132] Ending legacy_token_tracking_controller
	I0916 11:10:54.723131       1 controller.go:133] Shutting down legacy_token_tracking_controller
	I0916 11:10:54.723263       1 crd_finalizer.go:281] Shutting down CRDFinalizer
	I0916 11:10:54.723385       1 system_namespaces_controller.go:76] Shutting down system namespaces controller
	I0916 11:10:54.723607       1 dynamic_cafile_content.go:174] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	
	
	==> kube-controller-manager [126fd7058d64d004c8ecb83905bc9e04f7dcd51ba92c467d2ec0175a0aedb8d4] <==
	I0916 11:13:46.690796       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:13:46.714603       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:13:46.929467       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-736061-m02"
	I0916 11:13:46.930549       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:13:47.907188       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-736061-m03\" does not exist"
	I0916 11:13:47.909491       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-736061-m02"
	I0916 11:13:47.928347       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-736061-m03" podCIDRs=["10.244.2.0/24"]
	I0916 11:13:47.928434       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	E0916 11:13:47.943698       1 range_allocator.go:427] "Failed to update node PodCIDR after multiple attempts" err="failed to patch node CIDR: Node \"multinode-736061-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.3.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-736061-m03" podCIDRs=["10.244.3.0/24"]
	E0916 11:13:47.943787       1 range_allocator.go:433] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"multinode-736061-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.3.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-736061-m03"
	E0916 11:13:47.943838       1 range_allocator.go:246] "Unhandled Error" err="error syncing 'multinode-736061-m03': failed to patch node CIDR: Node \"multinode-736061-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.3.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I0916 11:13:47.943877       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:13:47.949840       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:13:47.952982       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:13:48.292993       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:13:51.924112       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:13:58.208795       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:14:06.228519       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:14:06.228610       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-736061-m02"
	I0916 11:14:06.246940       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:14:06.870268       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:14:10.875842       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:14:10.892575       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:14:11.443344       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-736061-m02"
	I0916 11:14:11.443755       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	
	
	==> kube-controller-manager [d53f9aec7bc35845a9064135b7cf6154a190d2b15edab056cfd8296575fb25ba] <==
	I0916 11:08:27.068836       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:08:27.299944       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-736061-m02"
	I0916 11:08:27.299986       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:08:28.498604       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-736061-m03\" does not exist"
	I0916 11:08:28.499795       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-736061-m02"
	I0916 11:08:28.530214       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-736061-m03" podCIDRs=["10.244.4.0/24"]
	I0916 11:08:28.530257       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:08:28.530321       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:08:28.812678       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:08:29.131881       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:08:33.111007       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:08:38.696548       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:08:47.199430       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:08:47.199515       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-736061-m02"
	I0916 11:08:47.211278       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:08:48.081832       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:09:28.097328       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m02"
	I0916 11:09:28.097948       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-736061-m03"
	I0916 11:09:28.128518       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m02"
	I0916 11:09:28.176986       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="12.051461ms"
	I0916 11:09:28.177686       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="101.301µs"
	I0916 11:09:33.174860       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:09:33.196257       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m02"
	I0916 11:09:33.196479       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:09:43.270263       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	
	
	==> kube-proxy [2d81e17eebccf4b84f735e1cafbb3e77e8e76abd8f4d62edf876aa5d55fd0b0d] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0916 11:12:34.892799       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0916 11:12:34.920138       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.32"]
	E0916 11:12:34.920279       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 11:12:34.987651       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0916 11:12:34.987713       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0916 11:12:34.987739       1 server_linux.go:169] "Using iptables Proxier"
	I0916 11:12:34.996924       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 11:12:34.997221       1 server.go:483] "Version info" version="v1.31.1"
	I0916 11:12:34.997234       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 11:12:35.007220       1 config.go:199] "Starting service config controller"
	I0916 11:12:35.029098       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 11:12:35.025409       1 config.go:105] "Starting endpoint slice config controller"
	I0916 11:12:35.029156       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 11:12:35.029162       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 11:12:35.026457       1 config.go:328] "Starting node config controller"
	I0916 11:12:35.029234       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 11:12:35.130341       1 shared_informer.go:320] Caches are synced for node config
	I0916 11:12:35.130407       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [f8c55edbe2173ff25f684104100b9d6d5b8d80563dadb06b4c24bb06b2bf68ee] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0916 11:05:59.852422       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0916 11:05:59.886836       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.32"]
	E0916 11:05:59.886976       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 11:05:59.944125       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0916 11:05:59.944160       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0916 11:05:59.944181       1 server_linux.go:169] "Using iptables Proxier"
	I0916 11:05:59.947733       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 11:05:59.948149       1 server.go:483] "Version info" version="v1.31.1"
	I0916 11:05:59.948393       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 11:05:59.949794       1 config.go:199] "Starting service config controller"
	I0916 11:05:59.949862       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 11:05:59.950230       1 config.go:105] "Starting endpoint slice config controller"
	I0916 11:05:59.950374       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 11:05:59.950923       1 config.go:328] "Starting node config controller"
	I0916 11:05:59.952219       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 11:06:00.050768       1 shared_informer.go:320] Caches are synced for service config
	I0916 11:06:00.050862       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 11:06:00.052567       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [2e7284c90c8c712246e8ddc247d88f7825f37f3e3c689acda276e3052c41850d] <==
	I0916 11:12:31.748594       1 serving.go:386] Generated self-signed cert in-memory
	W0916 11:12:33.440575       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0916 11:12:33.440623       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0916 11:12:33.440633       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 11:12:33.440641       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 11:12:33.526991       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0916 11:12:33.527040       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 11:12:33.536502       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0916 11:12:33.536670       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 11:12:33.540976       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0916 11:12:33.544844       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0916 11:12:33.638485       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [b76d5d4ad419a79a8aa910e3cea3029c12e6a90c7750317c5fbe1c40b43aa762] <==
	E0916 11:05:52.226438       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:05:52.286013       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 11:05:52.286065       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:05:52.292630       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 11:05:52.292712       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:05:52.303069       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 11:05:52.303177       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:05:52.308000       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 11:05:52.308078       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0916 11:05:52.326647       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 11:05:52.326746       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:05:52.367616       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 11:05:52.367800       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 11:05:52.407350       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 11:05:52.407398       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:05:52.423030       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 11:05:52.423081       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 11:05:52.501395       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 11:05:52.501587       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:05:52.597443       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 11:05:52.597573       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:05:52.652519       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 11:05:52.652625       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0916 11:05:55.090829       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0916 11:10:54.693272       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 16 11:12:39 multinode-736061 kubelet[3200]: E0916 11:12:39.954487    3200 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485159953757144,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:12:39 multinode-736061 kubelet[3200]: E0916 11:12:39.954768    3200 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485159953757144,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:12:49 multinode-736061 kubelet[3200]: E0916 11:12:49.957520    3200 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485169957064652,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:12:49 multinode-736061 kubelet[3200]: E0916 11:12:49.957552    3200 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485169957064652,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:12:59 multinode-736061 kubelet[3200]: E0916 11:12:59.962896    3200 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485179961170320,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:12:59 multinode-736061 kubelet[3200]: E0916 11:12:59.962920    3200 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485179961170320,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:13:09 multinode-736061 kubelet[3200]: E0916 11:13:09.964855    3200 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485189964245911,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:13:09 multinode-736061 kubelet[3200]: E0916 11:13:09.965529    3200 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485189964245911,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:13:19 multinode-736061 kubelet[3200]: E0916 11:13:19.970568    3200 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485199969707731,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:13:19 multinode-736061 kubelet[3200]: E0916 11:13:19.970611    3200 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485199969707731,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:13:29 multinode-736061 kubelet[3200]: E0916 11:13:29.921439    3200 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 16 11:13:29 multinode-736061 kubelet[3200]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 16 11:13:29 multinode-736061 kubelet[3200]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 16 11:13:29 multinode-736061 kubelet[3200]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 16 11:13:29 multinode-736061 kubelet[3200]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 16 11:13:29 multinode-736061 kubelet[3200]: E0916 11:13:29.972711    3200 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485209972226101,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:13:29 multinode-736061 kubelet[3200]: E0916 11:13:29.972898    3200 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485209972226101,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:13:39 multinode-736061 kubelet[3200]: E0916 11:13:39.976917    3200 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485219975946051,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:13:39 multinode-736061 kubelet[3200]: E0916 11:13:39.977478    3200 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485219975946051,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:13:49 multinode-736061 kubelet[3200]: E0916 11:13:49.980692    3200 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485229980248757,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:13:49 multinode-736061 kubelet[3200]: E0916 11:13:49.980723    3200 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485229980248757,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:13:59 multinode-736061 kubelet[3200]: E0916 11:13:59.982354    3200 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485239981881362,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:13:59 multinode-736061 kubelet[3200]: E0916 11:13:59.982789    3200 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485239981881362,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:14:09 multinode-736061 kubelet[3200]: E0916 11:14:09.986438    3200 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485249985987622,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:14:09 multinode-736061 kubelet[3200]: E0916 11:14:09.986463    3200 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485249985987622,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0916 11:14:12.765191   41486 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19651-3851/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
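The "token too long" error above is Go's bufio.Scanner hitting its default 64 KiB per-token limit while reading lastStart.txt; a single log line longer than that aborts the read. A minimal sketch (the file name is reused from the message above purely for illustration, not taken from the test code) of how the error arises and how enlarging the scanner buffer avoids it:

    package main

    import (
        "bufio"
        "fmt"
        "os"
    )

    func main() {
        f, err := os.Open("lastStart.txt") // stand-in for the real log path in the message above
        if err != nil {
            fmt.Println(err)
            return
        }
        defer f.Close()

        sc := bufio.NewScanner(f)
        // Without this call, any line longer than bufio.MaxScanTokenSize (64 KiB)
        // stops the scan and sc.Err() reports "bufio.Scanner: token too long".
        sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
        for sc.Scan() {
            _ = sc.Text() // process one log line
        }
        if err := sc.Err(); err != nil {
            fmt.Println("scan error:", err)
        }
    }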
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-736061 -n multinode-736061
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-736061 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context multinode-736061 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (452.162µs)
helpers_test.go:263: kubectl --context multinode-736061 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestMultiNode/serial/DeleteNode (4.07s)
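Every kubectl invocation in this run fails the same way, "fork/exec /usr/local/bin/kubectl: exec format error", which is what os/exec returns when the kernel refuses to execute a binary built for a different architecture than the host; kubectl never actually runs. A minimal sketch, not part of the test suite, showing how that error surfaces from Go:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // With a binary built for a different architecture (e.g. an arm64 kubectl
        // on an amd64 host), CombinedOutput fails before kubectl ever runs, with:
        //   fork/exec /usr/local/bin/kubectl: exec format error
        out, err := exec.Command("/usr/local/bin/kubectl", "version", "--client").CombinedOutput()
        if err != nil {
            fmt.Printf("kubectl failed: %v\n", err)
        }
        fmt.Print(string(out))
    }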

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (141.42s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736061 stop
E0916 11:14:31.345286   11203 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:15:08.820900   11203 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-736061 stop: exit status 82 (2m0.456308509s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-736061-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-736061 stop": exit status 82
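In this run "out/minikube-linux-amd64 -p multinode-736061 stop" exited with status 82 after printing the GUEST_STOP_TIMEOUT reason shown in the stderr block above. A minimal sketch (binary path and profile name copied from this run) of how a caller can read that exit code when shelling out from Go:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-736061", "stop")
        out, err := cmd.CombinedOutput()
        var exitErr *exec.ExitError
        if errors.As(err, &exitErr) {
            // In the failing run this would print 82, the code reported alongside
            // minikube's GUEST_STOP_TIMEOUT message.
            fmt.Printf("minikube stop exited with code %d\n", exitErr.ExitCode())
        } else if err != nil {
            fmt.Println("could not run minikube:", err)
        }
        fmt.Print(string(out))
    }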
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736061 status
E0916 11:16:28.278150   11203 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-736061 status: exit status 3 (18.87988017s)

                                                
                                                
-- stdout --
	multinode-736061
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-736061-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0916 11:16:33.849499   41947 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.215:22: connect: no route to host
	E0916 11:16:33.849532   41947 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.215:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-736061 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-736061 -n multinode-736061
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736061 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-736061 logs -n 25: (1.489610856s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-736061 ssh -n                                                                 | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | multinode-736061-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-736061 cp multinode-736061-m02:/home/docker/cp-test.txt                       | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | multinode-736061:/home/docker/cp-test_multinode-736061-m02_multinode-736061.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-736061 ssh -n                                                                 | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | multinode-736061-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-736061 ssh -n multinode-736061 sudo cat                                       | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | /home/docker/cp-test_multinode-736061-m02_multinode-736061.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-736061 cp multinode-736061-m02:/home/docker/cp-test.txt                       | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | multinode-736061-m03:/home/docker/cp-test_multinode-736061-m02_multinode-736061-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-736061 ssh -n                                                                 | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | multinode-736061-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-736061 ssh -n multinode-736061-m03 sudo cat                                   | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | /home/docker/cp-test_multinode-736061-m02_multinode-736061-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-736061 cp testdata/cp-test.txt                                                | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | multinode-736061-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-736061 ssh -n                                                                 | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | multinode-736061-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-736061 cp multinode-736061-m03:/home/docker/cp-test.txt                       | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1886615299/001/cp-test_multinode-736061-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-736061 ssh -n                                                                 | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | multinode-736061-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-736061 cp multinode-736061-m03:/home/docker/cp-test.txt                       | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | multinode-736061:/home/docker/cp-test_multinode-736061-m03_multinode-736061.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-736061 ssh -n                                                                 | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | multinode-736061-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-736061 ssh -n multinode-736061 sudo cat                                       | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | /home/docker/cp-test_multinode-736061-m03_multinode-736061.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-736061 cp multinode-736061-m03:/home/docker/cp-test.txt                       | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | multinode-736061-m02:/home/docker/cp-test_multinode-736061-m03_multinode-736061-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-736061 ssh -n                                                                 | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | multinode-736061-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-736061 ssh -n multinode-736061-m02 sudo cat                                   | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | /home/docker/cp-test_multinode-736061-m03_multinode-736061-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-736061 node stop m03                                                          | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	| node    | multinode-736061 node start                                                             | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-736061                                                                | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC |                     |
	| stop    | -p multinode-736061                                                                     | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC |                     |
	| start   | -p multinode-736061                                                                     | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:10 UTC | 16 Sep 24 11:14 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-736061                                                                | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:14 UTC |                     |
	| node    | multinode-736061 node delete                                                            | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:14 UTC | 16 Sep 24 11:14 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-736061 stop                                                                   | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:14 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 11:10:53
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 11:10:53.764405   40135 out.go:345] Setting OutFile to fd 1 ...
	I0916 11:10:53.764697   40135 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:10:53.764708   40135 out.go:358] Setting ErrFile to fd 2...
	I0916 11:10:53.764714   40135 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:10:53.764934   40135 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3851/.minikube/bin
	I0916 11:10:53.765527   40135 out.go:352] Setting JSON to false
	I0916 11:10:53.766415   40135 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3204,"bootTime":1726481850,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 11:10:53.766501   40135 start.go:139] virtualization: kvm guest
	I0916 11:10:53.768975   40135 out.go:177] * [multinode-736061] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 11:10:53.770599   40135 notify.go:220] Checking for updates...
	I0916 11:10:53.770619   40135 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 11:10:53.772102   40135 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 11:10:53.773841   40135 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3851/kubeconfig
	I0916 11:10:53.775207   40135 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 11:10:53.776414   40135 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 11:10:53.777635   40135 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 11:10:53.779515   40135 config.go:182] Loaded profile config "multinode-736061": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:10:53.779637   40135 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 11:10:53.780265   40135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 11:10:53.780320   40135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 11:10:53.800988   40135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44813
	I0916 11:10:53.801446   40135 main.go:141] libmachine: () Calling .GetVersion
	I0916 11:10:53.801971   40135 main.go:141] libmachine: Using API Version  1
	I0916 11:10:53.801999   40135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 11:10:53.802338   40135 main.go:141] libmachine: () Calling .GetMachineName
	I0916 11:10:53.802498   40135 main.go:141] libmachine: (multinode-736061) Calling .DriverName
	I0916 11:10:53.837831   40135 out.go:177] * Using the kvm2 driver based on existing profile
	I0916 11:10:53.839032   40135 start.go:297] selected driver: kvm2
	I0916 11:10:53.839047   40135 start.go:901] validating driver "kvm2" against &{Name:multinode-736061 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-736061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.32 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.215 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.60 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:10:53.839202   40135 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 11:10:53.839496   40135 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:10:53.839555   40135 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19651-3851/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0916 11:10:53.854668   40135 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0916 11:10:53.855622   40135 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 11:10:53.855664   40135 cni.go:84] Creating CNI manager for ""
	I0916 11:10:53.855731   40135 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0916 11:10:53.855806   40135 start.go:340] cluster config:
	{Name:multinode-736061 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-736061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.32 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.215 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.60 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:10:53.856022   40135 iso.go:125] acquiring lock: {Name:mk8165b793e44378487d96b0de120a258f46e187 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:10:53.857966   40135 out.go:177] * Starting "multinode-736061" primary control-plane node in "multinode-736061" cluster
	I0916 11:10:53.859309   40135 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 11:10:53.859342   40135 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0916 11:10:53.859351   40135 cache.go:56] Caching tarball of preloaded images
	I0916 11:10:53.859419   40135 preload.go:172] Found /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 11:10:53.859428   40135 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 11:10:53.859533   40135 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/config.json ...
	I0916 11:10:53.859726   40135 start.go:360] acquireMachinesLock for multinode-736061: {Name:mk413037138532b69f77e412c3960796c09316f1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 11:10:53.859765   40135 start.go:364] duration metric: took 22.859µs to acquireMachinesLock for "multinode-736061"
	I0916 11:10:53.859779   40135 start.go:96] Skipping create...Using existing machine configuration
	I0916 11:10:53.859786   40135 fix.go:54] fixHost starting: 
	I0916 11:10:53.860046   40135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 11:10:53.860077   40135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 11:10:53.874501   40135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37223
	I0916 11:10:53.874913   40135 main.go:141] libmachine: () Calling .GetVersion
	I0916 11:10:53.875410   40135 main.go:141] libmachine: Using API Version  1
	I0916 11:10:53.875431   40135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 11:10:53.875784   40135 main.go:141] libmachine: () Calling .GetMachineName
	I0916 11:10:53.876057   40135 main.go:141] libmachine: (multinode-736061) Calling .DriverName
	I0916 11:10:53.876221   40135 main.go:141] libmachine: (multinode-736061) Calling .GetState
	I0916 11:10:53.877667   40135 fix.go:112] recreateIfNeeded on multinode-736061: state=Running err=<nil>
	W0916 11:10:53.877684   40135 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 11:10:53.880136   40135 out.go:177] * Updating the running kvm2 "multinode-736061" VM ...
	I0916 11:10:53.881210   40135 machine.go:93] provisionDockerMachine start ...
	I0916 11:10:53.881232   40135 main.go:141] libmachine: (multinode-736061) Calling .DriverName
	I0916 11:10:53.881421   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHHostname
	I0916 11:10:53.883804   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:10:53.884294   40135 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:10:53.884322   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:10:53.884407   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHPort
	I0916 11:10:53.884550   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:10:53.884689   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:10:53.884816   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHUsername
	I0916 11:10:53.884984   40135 main.go:141] libmachine: Using SSH client type: native
	I0916 11:10:53.885237   40135 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0916 11:10:53.885252   40135 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 11:10:54.002517   40135 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-736061
	
	I0916 11:10:54.002554   40135 main.go:141] libmachine: (multinode-736061) Calling .GetMachineName
	I0916 11:10:54.002793   40135 buildroot.go:166] provisioning hostname "multinode-736061"
	I0916 11:10:54.002819   40135 main.go:141] libmachine: (multinode-736061) Calling .GetMachineName
	I0916 11:10:54.003040   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHHostname
	I0916 11:10:54.006032   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:10:54.006431   40135 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:10:54.006466   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:10:54.006567   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHPort
	I0916 11:10:54.006771   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:10:54.006940   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:10:54.007101   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHUsername
	I0916 11:10:54.007282   40135 main.go:141] libmachine: Using SSH client type: native
	I0916 11:10:54.007489   40135 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0916 11:10:54.007510   40135 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-736061 && echo "multinode-736061" | sudo tee /etc/hostname
	I0916 11:10:54.134028   40135 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-736061
	
	I0916 11:10:54.134063   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHHostname
	I0916 11:10:54.136916   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:10:54.137328   40135 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:10:54.137354   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:10:54.137561   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHPort
	I0916 11:10:54.137782   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:10:54.137967   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:10:54.138136   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHUsername
	I0916 11:10:54.138312   40135 main.go:141] libmachine: Using SSH client type: native
	I0916 11:10:54.138554   40135 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0916 11:10:54.138581   40135 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-736061' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-736061/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-736061' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 11:10:54.254218   40135 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 11:10:54.254244   40135 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3851/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3851/.minikube}
	I0916 11:10:54.254262   40135 buildroot.go:174] setting up certificates
	I0916 11:10:54.254271   40135 provision.go:84] configureAuth start
	I0916 11:10:54.254279   40135 main.go:141] libmachine: (multinode-736061) Calling .GetMachineName
	I0916 11:10:54.254544   40135 main.go:141] libmachine: (multinode-736061) Calling .GetIP
	I0916 11:10:54.256878   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:10:54.257288   40135 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:10:54.257330   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:10:54.257423   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHHostname
	I0916 11:10:54.259620   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:10:54.259953   40135 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:10:54.259972   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:10:54.260142   40135 provision.go:143] copyHostCerts
	I0916 11:10:54.260180   40135 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem
	I0916 11:10:54.260205   40135 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem, removing ...
	I0916 11:10:54.260213   40135 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem
	I0916 11:10:54.260282   40135 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem (1123 bytes)
	I0916 11:10:54.260354   40135 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem
	I0916 11:10:54.260374   40135 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem, removing ...
	I0916 11:10:54.260383   40135 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem
	I0916 11:10:54.260419   40135 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem (1679 bytes)
	I0916 11:10:54.260483   40135 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem
	I0916 11:10:54.260506   40135 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem, removing ...
	I0916 11:10:54.260513   40135 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem
	I0916 11:10:54.260536   40135 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem (1082 bytes)
	I0916 11:10:54.260618   40135 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem org=jenkins.multinode-736061 san=[127.0.0.1 192.168.39.32 localhost minikube multinode-736061]
	I0916 11:10:54.392345   40135 provision.go:177] copyRemoteCerts
	I0916 11:10:54.392409   40135 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 11:10:54.392437   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHHostname
	I0916 11:10:54.394792   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:10:54.395075   40135 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:10:54.395103   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:10:54.395239   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHPort
	I0916 11:10:54.395432   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:10:54.395580   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHUsername
	I0916 11:10:54.395718   40135 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061/id_rsa Username:docker}
	I0916 11:10:54.480886   40135 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 11:10:54.480971   40135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 11:10:54.507550   40135 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 11:10:54.507629   40135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0916 11:10:54.534283   40135 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 11:10:54.534359   40135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 11:10:54.560933   40135 provision.go:87] duration metric: took 306.650302ms to configureAuth
	I0916 11:10:54.560963   40135 buildroot.go:189] setting minikube options for container-runtime
	I0916 11:10:54.561214   40135 config.go:182] Loaded profile config "multinode-736061": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:10:54.561286   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHHostname
	I0916 11:10:54.564044   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:10:54.564377   40135 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:10:54.564402   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:10:54.564575   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHPort
	I0916 11:10:54.564740   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:10:54.564908   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:10:54.565050   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHUsername
	I0916 11:10:54.565204   40135 main.go:141] libmachine: Using SSH client type: native
	I0916 11:10:54.565427   40135 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0916 11:10:54.565450   40135 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 11:12:25.365214   40135 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 11:12:25.365240   40135 machine.go:96] duration metric: took 1m31.484014406s to provisionDockerMachine
	I0916 11:12:25.365255   40135 start.go:293] postStartSetup for "multinode-736061" (driver="kvm2")
	I0916 11:12:25.365269   40135 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 11:12:25.365291   40135 main.go:141] libmachine: (multinode-736061) Calling .DriverName
	I0916 11:12:25.365801   40135 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 11:12:25.365839   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHHostname
	I0916 11:12:25.369181   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:12:25.369666   40135 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:12:25.369698   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:12:25.369949   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHPort
	I0916 11:12:25.370163   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:12:25.370371   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHUsername
	I0916 11:12:25.370519   40135 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061/id_rsa Username:docker}
	I0916 11:12:25.457301   40135 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 11:12:25.461731   40135 command_runner.go:130] > NAME=Buildroot
	I0916 11:12:25.461752   40135 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0916 11:12:25.461757   40135 command_runner.go:130] > ID=buildroot
	I0916 11:12:25.461762   40135 command_runner.go:130] > VERSION_ID=2023.02.9
	I0916 11:12:25.461767   40135 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0916 11:12:25.461812   40135 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 11:12:25.461826   40135 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3851/.minikube/addons for local assets ...
	I0916 11:12:25.461899   40135 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3851/.minikube/files for local assets ...
	I0916 11:12:25.461981   40135 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem -> 112032.pem in /etc/ssl/certs
	I0916 11:12:25.461992   40135 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem -> /etc/ssl/certs/112032.pem
	I0916 11:12:25.462072   40135 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 11:12:25.472346   40135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem --> /etc/ssl/certs/112032.pem (1708 bytes)
	I0916 11:12:25.497363   40135 start.go:296] duration metric: took 132.094435ms for postStartSetup
	I0916 11:12:25.497437   40135 fix.go:56] duration metric: took 1m31.637627262s for fixHost
	I0916 11:12:25.497463   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHHostname
	I0916 11:12:25.500226   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:12:25.500581   40135 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:12:25.500610   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:12:25.500790   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHPort
	I0916 11:12:25.500971   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:12:25.501144   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:12:25.501372   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHUsername
	I0916 11:12:25.501535   40135 main.go:141] libmachine: Using SSH client type: native
	I0916 11:12:25.501715   40135 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0916 11:12:25.501724   40135 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 11:12:25.609971   40135 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726485145.588914028
	
	I0916 11:12:25.609991   40135 fix.go:216] guest clock: 1726485145.588914028
	I0916 11:12:25.609998   40135 fix.go:229] Guest: 2024-09-16 11:12:25.588914028 +0000 UTC Remote: 2024-09-16 11:12:25.497444489 +0000 UTC m=+91.767542385 (delta=91.469539ms)
	I0916 11:12:25.610017   40135 fix.go:200] guest clock delta is within tolerance: 91.469539ms
	I0916 11:12:25.610022   40135 start.go:83] releasing machines lock for "multinode-736061", held for 1m31.750248345s
	I0916 11:12:25.610039   40135 main.go:141] libmachine: (multinode-736061) Calling .DriverName
	I0916 11:12:25.610285   40135 main.go:141] libmachine: (multinode-736061) Calling .GetIP
	I0916 11:12:25.613333   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:12:25.613834   40135 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:12:25.613871   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:12:25.614019   40135 main.go:141] libmachine: (multinode-736061) Calling .DriverName
	I0916 11:12:25.614475   40135 main.go:141] libmachine: (multinode-736061) Calling .DriverName
	I0916 11:12:25.614637   40135 main.go:141] libmachine: (multinode-736061) Calling .DriverName
	I0916 11:12:25.614712   40135 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 11:12:25.614767   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHHostname
	I0916 11:12:25.614820   40135 ssh_runner.go:195] Run: cat /version.json
	I0916 11:12:25.614838   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHHostname
	I0916 11:12:25.617271   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:12:25.617637   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:12:25.617681   40135 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:12:25.617697   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:12:25.617822   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHPort
	I0916 11:12:25.617976   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:12:25.618123   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHUsername
	I0916 11:12:25.618147   40135 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:12:25.618163   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:12:25.618311   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHPort
	I0916 11:12:25.618338   40135 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061/id_rsa Username:docker}
	I0916 11:12:25.618453   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:12:25.618578   40135 main.go:141] libmachine: (multinode-736061) Calling .GetSSHUsername
	I0916 11:12:25.618694   40135 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061/id_rsa Username:docker}
	I0916 11:12:25.726440   40135 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0916 11:12:25.727099   40135 command_runner.go:130] > {"iso_version": "v1.34.0-1726415472-19646", "kicbase_version": "v0.0.45-1726358845-19644", "minikube_version": "v1.34.0", "commit": "7dc55c0008a982396eb57879cd4eab23ab96531e"}
	I0916 11:12:25.727256   40135 ssh_runner.go:195] Run: systemctl --version
	I0916 11:12:25.733715   40135 command_runner.go:130] > systemd 252 (252)
	I0916 11:12:25.733759   40135 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0916 11:12:25.733826   40135 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 11:12:25.889015   40135 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 11:12:25.896686   40135 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0916 11:12:25.897147   40135 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 11:12:25.897213   40135 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 11:12:25.906774   40135 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0916 11:12:25.906798   40135 start.go:495] detecting cgroup driver to use...
	I0916 11:12:25.906866   40135 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 11:12:25.924150   40135 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 11:12:25.938696   40135 docker.go:217] disabling cri-docker service (if available) ...
	I0916 11:12:25.938749   40135 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 11:12:25.952927   40135 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 11:12:25.967295   40135 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 11:12:26.111243   40135 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 11:12:26.252238   40135 docker.go:233] disabling docker service ...
	I0916 11:12:26.252310   40135 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 11:12:26.269485   40135 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 11:12:26.283580   40135 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 11:12:26.423452   40135 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 11:12:26.564033   40135 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 11:12:26.578149   40135 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 11:12:26.597842   40135 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0916 11:12:26.597888   40135 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 11:12:26.597941   40135 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:12:26.608772   40135 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 11:12:26.608829   40135 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:12:26.620194   40135 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:12:26.631946   40135 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:12:26.642904   40135 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 11:12:26.653934   40135 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:12:26.664685   40135 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:12:26.676602   40135 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:12:26.687924   40135 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 11:12:26.698235   40135 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0916 11:12:26.698315   40135 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 11:12:26.708091   40135 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:12:26.843091   40135 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 11:12:27.073301   40135 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 11:12:27.073360   40135 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 11:12:27.078455   40135 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0916 11:12:27.078472   40135 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0916 11:12:27.078478   40135 command_runner.go:130] > Device: 0,22	Inode: 1304        Links: 1
	I0916 11:12:27.078485   40135 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 11:12:27.078490   40135 command_runner.go:130] > Access: 2024-09-16 11:12:26.940714941 +0000
	I0916 11:12:27.078504   40135 command_runner.go:130] > Modify: 2024-09-16 11:12:26.940714941 +0000
	I0916 11:12:27.078510   40135 command_runner.go:130] > Change: 2024-09-16 11:12:26.940714941 +0000
	I0916 11:12:27.078517   40135 command_runner.go:130] >  Birth: -
	I0916 11:12:27.078806   40135 start.go:563] Will wait 60s for crictl version
	I0916 11:12:27.078852   40135 ssh_runner.go:195] Run: which crictl
	I0916 11:12:27.082760   40135 command_runner.go:130] > /usr/bin/crictl
	I0916 11:12:27.082812   40135 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 11:12:27.121054   40135 command_runner.go:130] > Version:  0.1.0
	I0916 11:12:27.121076   40135 command_runner.go:130] > RuntimeName:  cri-o
	I0916 11:12:27.121081   40135 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0916 11:12:27.121086   40135 command_runner.go:130] > RuntimeApiVersion:  v1
	I0916 11:12:27.121338   40135 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0916 11:12:27.121408   40135 ssh_runner.go:195] Run: crio --version
	I0916 11:12:27.151162   40135 command_runner.go:130] > crio version 1.29.1
	I0916 11:12:27.151185   40135 command_runner.go:130] > Version:        1.29.1
	I0916 11:12:27.151194   40135 command_runner.go:130] > GitCommit:      unknown
	I0916 11:12:27.151201   40135 command_runner.go:130] > GitCommitDate:  unknown
	I0916 11:12:27.151206   40135 command_runner.go:130] > GitTreeState:   clean
	I0916 11:12:27.151214   40135 command_runner.go:130] > BuildDate:      2024-09-15T21:21:56Z
	I0916 11:12:27.151221   40135 command_runner.go:130] > GoVersion:      go1.21.6
	I0916 11:12:27.151227   40135 command_runner.go:130] > Compiler:       gc
	I0916 11:12:27.151233   40135 command_runner.go:130] > Platform:       linux/amd64
	I0916 11:12:27.151239   40135 command_runner.go:130] > Linkmode:       dynamic
	I0916 11:12:27.151249   40135 command_runner.go:130] > BuildTags:      
	I0916 11:12:27.151260   40135 command_runner.go:130] >   containers_image_ostree_stub
	I0916 11:12:27.151266   40135 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0916 11:12:27.151273   40135 command_runner.go:130] >   btrfs_noversion
	I0916 11:12:27.151280   40135 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0916 11:12:27.151289   40135 command_runner.go:130] >   libdm_no_deferred_remove
	I0916 11:12:27.151295   40135 command_runner.go:130] >   seccomp
	I0916 11:12:27.151304   40135 command_runner.go:130] > LDFlags:          unknown
	I0916 11:12:27.151310   40135 command_runner.go:130] > SeccompEnabled:   true
	I0916 11:12:27.151321   40135 command_runner.go:130] > AppArmorEnabled:  false
	I0916 11:12:27.151405   40135 ssh_runner.go:195] Run: crio --version
	I0916 11:12:27.181636   40135 command_runner.go:130] > crio version 1.29.1
	I0916 11:12:27.181664   40135 command_runner.go:130] > Version:        1.29.1
	I0916 11:12:27.181673   40135 command_runner.go:130] > GitCommit:      unknown
	I0916 11:12:27.181679   40135 command_runner.go:130] > GitCommitDate:  unknown
	I0916 11:12:27.181687   40135 command_runner.go:130] > GitTreeState:   clean
	I0916 11:12:27.181696   40135 command_runner.go:130] > BuildDate:      2024-09-15T21:21:56Z
	I0916 11:12:27.181702   40135 command_runner.go:130] > GoVersion:      go1.21.6
	I0916 11:12:27.181708   40135 command_runner.go:130] > Compiler:       gc
	I0916 11:12:27.181715   40135 command_runner.go:130] > Platform:       linux/amd64
	I0916 11:12:27.181722   40135 command_runner.go:130] > Linkmode:       dynamic
	I0916 11:12:27.181728   40135 command_runner.go:130] > BuildTags:      
	I0916 11:12:27.181736   40135 command_runner.go:130] >   containers_image_ostree_stub
	I0916 11:12:27.181742   40135 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0916 11:12:27.181752   40135 command_runner.go:130] >   btrfs_noversion
	I0916 11:12:27.181763   40135 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0916 11:12:27.181770   40135 command_runner.go:130] >   libdm_no_deferred_remove
	I0916 11:12:27.181778   40135 command_runner.go:130] >   seccomp
	I0916 11:12:27.181786   40135 command_runner.go:130] > LDFlags:          unknown
	I0916 11:12:27.181796   40135 command_runner.go:130] > SeccompEnabled:   true
	I0916 11:12:27.181802   40135 command_runner.go:130] > AppArmorEnabled:  false
	I0916 11:12:27.183887   40135 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0916 11:12:27.185243   40135 main.go:141] libmachine: (multinode-736061) Calling .GetIP
	I0916 11:12:27.187794   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:12:27.188123   40135 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:12:27.188146   40135 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:12:27.188367   40135 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0916 11:12:27.192571   40135 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0916 11:12:27.192739   40135 kubeadm.go:883] updating cluster {Name:multinode-736061 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-736061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.32 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.215 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.60 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 11:12:27.192900   40135 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 11:12:27.192958   40135 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:12:27.238779   40135 command_runner.go:130] > {
	I0916 11:12:27.238813   40135 command_runner.go:130] >   "images": [
	I0916 11:12:27.238818   40135 command_runner.go:130] >     {
	I0916 11:12:27.238825   40135 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0916 11:12:27.238830   40135 command_runner.go:130] >       "repoTags": [
	I0916 11:12:27.238836   40135 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0916 11:12:27.238839   40135 command_runner.go:130] >       ],
	I0916 11:12:27.238844   40135 command_runner.go:130] >       "repoDigests": [
	I0916 11:12:27.238852   40135 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0916 11:12:27.238859   40135 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0916 11:12:27.238863   40135 command_runner.go:130] >       ],
	I0916 11:12:27.238870   40135 command_runner.go:130] >       "size": "87190579",
	I0916 11:12:27.238877   40135 command_runner.go:130] >       "uid": null,
	I0916 11:12:27.238884   40135 command_runner.go:130] >       "username": "",
	I0916 11:12:27.238893   40135 command_runner.go:130] >       "spec": null,
	I0916 11:12:27.238907   40135 command_runner.go:130] >       "pinned": false
	I0916 11:12:27.238911   40135 command_runner.go:130] >     },
	I0916 11:12:27.238915   40135 command_runner.go:130] >     {
	I0916 11:12:27.238921   40135 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0916 11:12:27.238926   40135 command_runner.go:130] >       "repoTags": [
	I0916 11:12:27.238931   40135 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0916 11:12:27.238935   40135 command_runner.go:130] >       ],
	I0916 11:12:27.238939   40135 command_runner.go:130] >       "repoDigests": [
	I0916 11:12:27.238947   40135 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0916 11:12:27.238958   40135 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0916 11:12:27.238969   40135 command_runner.go:130] >       ],
	I0916 11:12:27.238976   40135 command_runner.go:130] >       "size": "1363676",
	I0916 11:12:27.238982   40135 command_runner.go:130] >       "uid": null,
	I0916 11:12:27.238991   40135 command_runner.go:130] >       "username": "",
	I0916 11:12:27.239000   40135 command_runner.go:130] >       "spec": null,
	I0916 11:12:27.239006   40135 command_runner.go:130] >       "pinned": false
	I0916 11:12:27.239012   40135 command_runner.go:130] >     },
	I0916 11:12:27.239019   40135 command_runner.go:130] >     {
	I0916 11:12:27.239025   40135 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0916 11:12:27.239029   40135 command_runner.go:130] >       "repoTags": [
	I0916 11:12:27.239034   40135 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0916 11:12:27.239041   40135 command_runner.go:130] >       ],
	I0916 11:12:27.239047   40135 command_runner.go:130] >       "repoDigests": [
	I0916 11:12:27.239063   40135 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0916 11:12:27.239078   40135 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0916 11:12:27.239087   40135 command_runner.go:130] >       ],
	I0916 11:12:27.239093   40135 command_runner.go:130] >       "size": "31470524",
	I0916 11:12:27.239103   40135 command_runner.go:130] >       "uid": null,
	I0916 11:12:27.239109   40135 command_runner.go:130] >       "username": "",
	I0916 11:12:27.239116   40135 command_runner.go:130] >       "spec": null,
	I0916 11:12:27.239121   40135 command_runner.go:130] >       "pinned": false
	I0916 11:12:27.239129   40135 command_runner.go:130] >     },
	I0916 11:12:27.239135   40135 command_runner.go:130] >     {
	I0916 11:12:27.239149   40135 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0916 11:12:27.239158   40135 command_runner.go:130] >       "repoTags": [
	I0916 11:12:27.239168   40135 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0916 11:12:27.239176   40135 command_runner.go:130] >       ],
	I0916 11:12:27.239183   40135 command_runner.go:130] >       "repoDigests": [
	I0916 11:12:27.239196   40135 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0916 11:12:27.239213   40135 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0916 11:12:27.239222   40135 command_runner.go:130] >       ],
	I0916 11:12:27.239229   40135 command_runner.go:130] >       "size": "63273227",
	I0916 11:12:27.239238   40135 command_runner.go:130] >       "uid": null,
	I0916 11:12:27.239245   40135 command_runner.go:130] >       "username": "nonroot",
	I0916 11:12:27.239254   40135 command_runner.go:130] >       "spec": null,
	I0916 11:12:27.239264   40135 command_runner.go:130] >       "pinned": false
	I0916 11:12:27.239272   40135 command_runner.go:130] >     },
	I0916 11:12:27.239277   40135 command_runner.go:130] >     {
	I0916 11:12:27.239286   40135 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0916 11:12:27.239291   40135 command_runner.go:130] >       "repoTags": [
	I0916 11:12:27.239300   40135 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0916 11:12:27.239309   40135 command_runner.go:130] >       ],
	I0916 11:12:27.239316   40135 command_runner.go:130] >       "repoDigests": [
	I0916 11:12:27.239329   40135 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0916 11:12:27.239343   40135 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0916 11:12:27.239351   40135 command_runner.go:130] >       ],
	I0916 11:12:27.239358   40135 command_runner.go:130] >       "size": "149009664",
	I0916 11:12:27.239366   40135 command_runner.go:130] >       "uid": {
	I0916 11:12:27.239370   40135 command_runner.go:130] >         "value": "0"
	I0916 11:12:27.239375   40135 command_runner.go:130] >       },
	I0916 11:12:27.239381   40135 command_runner.go:130] >       "username": "",
	I0916 11:12:27.239390   40135 command_runner.go:130] >       "spec": null,
	I0916 11:12:27.239397   40135 command_runner.go:130] >       "pinned": false
	I0916 11:12:27.239404   40135 command_runner.go:130] >     },
	I0916 11:12:27.239409   40135 command_runner.go:130] >     {
	I0916 11:12:27.239420   40135 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0916 11:12:27.239430   40135 command_runner.go:130] >       "repoTags": [
	I0916 11:12:27.239438   40135 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0916 11:12:27.239447   40135 command_runner.go:130] >       ],
	I0916 11:12:27.239452   40135 command_runner.go:130] >       "repoDigests": [
	I0916 11:12:27.239463   40135 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0916 11:12:27.239475   40135 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0916 11:12:27.239484   40135 command_runner.go:130] >       ],
	I0916 11:12:27.239493   40135 command_runner.go:130] >       "size": "95237600",
	I0916 11:12:27.239502   40135 command_runner.go:130] >       "uid": {
	I0916 11:12:27.239508   40135 command_runner.go:130] >         "value": "0"
	I0916 11:12:27.239516   40135 command_runner.go:130] >       },
	I0916 11:12:27.239524   40135 command_runner.go:130] >       "username": "",
	I0916 11:12:27.239532   40135 command_runner.go:130] >       "spec": null,
	I0916 11:12:27.239538   40135 command_runner.go:130] >       "pinned": false
	I0916 11:12:27.239545   40135 command_runner.go:130] >     },
	I0916 11:12:27.239550   40135 command_runner.go:130] >     {
	I0916 11:12:27.239562   40135 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0916 11:12:27.239571   40135 command_runner.go:130] >       "repoTags": [
	I0916 11:12:27.239580   40135 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0916 11:12:27.239589   40135 command_runner.go:130] >       ],
	I0916 11:12:27.239596   40135 command_runner.go:130] >       "repoDigests": [
	I0916 11:12:27.239611   40135 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0916 11:12:27.239627   40135 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0916 11:12:27.239635   40135 command_runner.go:130] >       ],
	I0916 11:12:27.239639   40135 command_runner.go:130] >       "size": "89437508",
	I0916 11:12:27.239644   40135 command_runner.go:130] >       "uid": {
	I0916 11:12:27.239651   40135 command_runner.go:130] >         "value": "0"
	I0916 11:12:27.239658   40135 command_runner.go:130] >       },
	I0916 11:12:27.239665   40135 command_runner.go:130] >       "username": "",
	I0916 11:12:27.239674   40135 command_runner.go:130] >       "spec": null,
	I0916 11:12:27.239681   40135 command_runner.go:130] >       "pinned": false
	I0916 11:12:27.239689   40135 command_runner.go:130] >     },
	I0916 11:12:27.239695   40135 command_runner.go:130] >     {
	I0916 11:12:27.239709   40135 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0916 11:12:27.239716   40135 command_runner.go:130] >       "repoTags": [
	I0916 11:12:27.239724   40135 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0916 11:12:27.239728   40135 command_runner.go:130] >       ],
	I0916 11:12:27.239735   40135 command_runner.go:130] >       "repoDigests": [
	I0916 11:12:27.239758   40135 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0916 11:12:27.239773   40135 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0916 11:12:27.239779   40135 command_runner.go:130] >       ],
	I0916 11:12:27.239790   40135 command_runner.go:130] >       "size": "92733849",
	I0916 11:12:27.239799   40135 command_runner.go:130] >       "uid": null,
	I0916 11:12:27.239806   40135 command_runner.go:130] >       "username": "",
	I0916 11:12:27.239810   40135 command_runner.go:130] >       "spec": null,
	I0916 11:12:27.239815   40135 command_runner.go:130] >       "pinned": false
	I0916 11:12:27.239822   40135 command_runner.go:130] >     },
	I0916 11:12:27.239826   40135 command_runner.go:130] >     {
	I0916 11:12:27.239836   40135 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0916 11:12:27.239842   40135 command_runner.go:130] >       "repoTags": [
	I0916 11:12:27.239848   40135 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0916 11:12:27.239854   40135 command_runner.go:130] >       ],
	I0916 11:12:27.239860   40135 command_runner.go:130] >       "repoDigests": [
	I0916 11:12:27.239871   40135 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0916 11:12:27.239883   40135 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0916 11:12:27.239889   40135 command_runner.go:130] >       ],
	I0916 11:12:27.239895   40135 command_runner.go:130] >       "size": "68420934",
	I0916 11:12:27.239904   40135 command_runner.go:130] >       "uid": {
	I0916 11:12:27.239910   40135 command_runner.go:130] >         "value": "0"
	I0916 11:12:27.239918   40135 command_runner.go:130] >       },
	I0916 11:12:27.239922   40135 command_runner.go:130] >       "username": "",
	I0916 11:12:27.239928   40135 command_runner.go:130] >       "spec": null,
	I0916 11:12:27.239937   40135 command_runner.go:130] >       "pinned": false
	I0916 11:12:27.239946   40135 command_runner.go:130] >     },
	I0916 11:12:27.239954   40135 command_runner.go:130] >     {
	I0916 11:12:27.239967   40135 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0916 11:12:27.239978   40135 command_runner.go:130] >       "repoTags": [
	I0916 11:12:27.239988   40135 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0916 11:12:27.239997   40135 command_runner.go:130] >       ],
	I0916 11:12:27.240004   40135 command_runner.go:130] >       "repoDigests": [
	I0916 11:12:27.240013   40135 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0916 11:12:27.240027   40135 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0916 11:12:27.240036   40135 command_runner.go:130] >       ],
	I0916 11:12:27.240046   40135 command_runner.go:130] >       "size": "742080",
	I0916 11:12:27.240054   40135 command_runner.go:130] >       "uid": {
	I0916 11:12:27.240063   40135 command_runner.go:130] >         "value": "65535"
	I0916 11:12:27.240071   40135 command_runner.go:130] >       },
	I0916 11:12:27.240079   40135 command_runner.go:130] >       "username": "",
	I0916 11:12:27.240087   40135 command_runner.go:130] >       "spec": null,
	I0916 11:12:27.240091   40135 command_runner.go:130] >       "pinned": true
	I0916 11:12:27.240097   40135 command_runner.go:130] >     }
	I0916 11:12:27.240102   40135 command_runner.go:130] >   ]
	I0916 11:12:27.240109   40135 command_runner.go:130] > }
	I0916 11:12:27.240330   40135 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 11:12:27.240345   40135 crio.go:433] Images already preloaded, skipping extraction
	I0916 11:12:27.240399   40135 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:12:27.285112   40135 command_runner.go:130] > {
	I0916 11:12:27.285150   40135 command_runner.go:130] >   "images": [
	I0916 11:12:27.285157   40135 command_runner.go:130] >     {
	I0916 11:12:27.285170   40135 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0916 11:12:27.285177   40135 command_runner.go:130] >       "repoTags": [
	I0916 11:12:27.285185   40135 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0916 11:12:27.285190   40135 command_runner.go:130] >       ],
	I0916 11:12:27.285197   40135 command_runner.go:130] >       "repoDigests": [
	I0916 11:12:27.285211   40135 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0916 11:12:27.285224   40135 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0916 11:12:27.285229   40135 command_runner.go:130] >       ],
	I0916 11:12:27.285240   40135 command_runner.go:130] >       "size": "87190579",
	I0916 11:12:27.285250   40135 command_runner.go:130] >       "uid": null,
	I0916 11:12:27.285257   40135 command_runner.go:130] >       "username": "",
	I0916 11:12:27.285271   40135 command_runner.go:130] >       "spec": null,
	I0916 11:12:27.285279   40135 command_runner.go:130] >       "pinned": false
	I0916 11:12:27.285283   40135 command_runner.go:130] >     },
	I0916 11:12:27.285288   40135 command_runner.go:130] >     {
	I0916 11:12:27.285301   40135 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0916 11:12:27.285308   40135 command_runner.go:130] >       "repoTags": [
	I0916 11:12:27.285319   40135 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0916 11:12:27.285331   40135 command_runner.go:130] >       ],
	I0916 11:12:27.285341   40135 command_runner.go:130] >       "repoDigests": [
	I0916 11:12:27.285356   40135 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0916 11:12:27.285367   40135 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0916 11:12:27.285374   40135 command_runner.go:130] >       ],
	I0916 11:12:27.285381   40135 command_runner.go:130] >       "size": "1363676",
	I0916 11:12:27.285389   40135 command_runner.go:130] >       "uid": null,
	I0916 11:12:27.285399   40135 command_runner.go:130] >       "username": "",
	I0916 11:12:27.285407   40135 command_runner.go:130] >       "spec": null,
	I0916 11:12:27.285414   40135 command_runner.go:130] >       "pinned": false
	I0916 11:12:27.285423   40135 command_runner.go:130] >     },
	I0916 11:12:27.285428   40135 command_runner.go:130] >     {
	I0916 11:12:27.285441   40135 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0916 11:12:27.285450   40135 command_runner.go:130] >       "repoTags": [
	I0916 11:12:27.285460   40135 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0916 11:12:27.285467   40135 command_runner.go:130] >       ],
	I0916 11:12:27.285472   40135 command_runner.go:130] >       "repoDigests": [
	I0916 11:12:27.285480   40135 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0916 11:12:27.285490   40135 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0916 11:12:27.285496   40135 command_runner.go:130] >       ],
	I0916 11:12:27.285500   40135 command_runner.go:130] >       "size": "31470524",
	I0916 11:12:27.285506   40135 command_runner.go:130] >       "uid": null,
	I0916 11:12:27.285510   40135 command_runner.go:130] >       "username": "",
	I0916 11:12:27.285515   40135 command_runner.go:130] >       "spec": null,
	I0916 11:12:27.285521   40135 command_runner.go:130] >       "pinned": false
	I0916 11:12:27.285524   40135 command_runner.go:130] >     },
	I0916 11:12:27.285528   40135 command_runner.go:130] >     {
	I0916 11:12:27.285534   40135 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0916 11:12:27.285540   40135 command_runner.go:130] >       "repoTags": [
	I0916 11:12:27.285547   40135 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0916 11:12:27.285552   40135 command_runner.go:130] >       ],
	I0916 11:12:27.285556   40135 command_runner.go:130] >       "repoDigests": [
	I0916 11:12:27.285563   40135 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0916 11:12:27.285577   40135 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0916 11:12:27.285582   40135 command_runner.go:130] >       ],
	I0916 11:12:27.285586   40135 command_runner.go:130] >       "size": "63273227",
	I0916 11:12:27.285591   40135 command_runner.go:130] >       "uid": null,
	I0916 11:12:27.285596   40135 command_runner.go:130] >       "username": "nonroot",
	I0916 11:12:27.285602   40135 command_runner.go:130] >       "spec": null,
	I0916 11:12:27.285606   40135 command_runner.go:130] >       "pinned": false
	I0916 11:12:27.285610   40135 command_runner.go:130] >     },
	I0916 11:12:27.285613   40135 command_runner.go:130] >     {
	I0916 11:12:27.285619   40135 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0916 11:12:27.285624   40135 command_runner.go:130] >       "repoTags": [
	I0916 11:12:27.285628   40135 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0916 11:12:27.285631   40135 command_runner.go:130] >       ],
	I0916 11:12:27.285635   40135 command_runner.go:130] >       "repoDigests": [
	I0916 11:12:27.285644   40135 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0916 11:12:27.285651   40135 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0916 11:12:27.285656   40135 command_runner.go:130] >       ],
	I0916 11:12:27.285661   40135 command_runner.go:130] >       "size": "149009664",
	I0916 11:12:27.285664   40135 command_runner.go:130] >       "uid": {
	I0916 11:12:27.285668   40135 command_runner.go:130] >         "value": "0"
	I0916 11:12:27.285671   40135 command_runner.go:130] >       },
	I0916 11:12:27.285675   40135 command_runner.go:130] >       "username": "",
	I0916 11:12:27.285680   40135 command_runner.go:130] >       "spec": null,
	I0916 11:12:27.285685   40135 command_runner.go:130] >       "pinned": false
	I0916 11:12:27.285689   40135 command_runner.go:130] >     },
	I0916 11:12:27.285692   40135 command_runner.go:130] >     {
	I0916 11:12:27.285698   40135 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0916 11:12:27.285704   40135 command_runner.go:130] >       "repoTags": [
	I0916 11:12:27.285709   40135 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0916 11:12:27.285712   40135 command_runner.go:130] >       ],
	I0916 11:12:27.285716   40135 command_runner.go:130] >       "repoDigests": [
	I0916 11:12:27.285723   40135 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0916 11:12:27.285731   40135 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0916 11:12:27.285737   40135 command_runner.go:130] >       ],
	I0916 11:12:27.285741   40135 command_runner.go:130] >       "size": "95237600",
	I0916 11:12:27.285745   40135 command_runner.go:130] >       "uid": {
	I0916 11:12:27.285749   40135 command_runner.go:130] >         "value": "0"
	I0916 11:12:27.285752   40135 command_runner.go:130] >       },
	I0916 11:12:27.285756   40135 command_runner.go:130] >       "username": "",
	I0916 11:12:27.285760   40135 command_runner.go:130] >       "spec": null,
	I0916 11:12:27.285764   40135 command_runner.go:130] >       "pinned": false
	I0916 11:12:27.285767   40135 command_runner.go:130] >     },
	I0916 11:12:27.285771   40135 command_runner.go:130] >     {
	I0916 11:12:27.285777   40135 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0916 11:12:27.285781   40135 command_runner.go:130] >       "repoTags": [
	I0916 11:12:27.285787   40135 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0916 11:12:27.285796   40135 command_runner.go:130] >       ],
	I0916 11:12:27.285800   40135 command_runner.go:130] >       "repoDigests": [
	I0916 11:12:27.285808   40135 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0916 11:12:27.285816   40135 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0916 11:12:27.285821   40135 command_runner.go:130] >       ],
	I0916 11:12:27.285825   40135 command_runner.go:130] >       "size": "89437508",
	I0916 11:12:27.285829   40135 command_runner.go:130] >       "uid": {
	I0916 11:12:27.285835   40135 command_runner.go:130] >         "value": "0"
	I0916 11:12:27.285839   40135 command_runner.go:130] >       },
	I0916 11:12:27.285843   40135 command_runner.go:130] >       "username": "",
	I0916 11:12:27.285847   40135 command_runner.go:130] >       "spec": null,
	I0916 11:12:27.285851   40135 command_runner.go:130] >       "pinned": false
	I0916 11:12:27.285854   40135 command_runner.go:130] >     },
	I0916 11:12:27.285857   40135 command_runner.go:130] >     {
	I0916 11:12:27.285865   40135 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0916 11:12:27.285869   40135 command_runner.go:130] >       "repoTags": [
	I0916 11:12:27.285875   40135 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0916 11:12:27.285878   40135 command_runner.go:130] >       ],
	I0916 11:12:27.285882   40135 command_runner.go:130] >       "repoDigests": [
	I0916 11:12:27.285904   40135 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0916 11:12:27.285914   40135 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0916 11:12:27.285918   40135 command_runner.go:130] >       ],
	I0916 11:12:27.285923   40135 command_runner.go:130] >       "size": "92733849",
	I0916 11:12:27.285926   40135 command_runner.go:130] >       "uid": null,
	I0916 11:12:27.285930   40135 command_runner.go:130] >       "username": "",
	I0916 11:12:27.285934   40135 command_runner.go:130] >       "spec": null,
	I0916 11:12:27.285938   40135 command_runner.go:130] >       "pinned": false
	I0916 11:12:27.285941   40135 command_runner.go:130] >     },
	I0916 11:12:27.285944   40135 command_runner.go:130] >     {
	I0916 11:12:27.285951   40135 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0916 11:12:27.285956   40135 command_runner.go:130] >       "repoTags": [
	I0916 11:12:27.285961   40135 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0916 11:12:27.285964   40135 command_runner.go:130] >       ],
	I0916 11:12:27.285968   40135 command_runner.go:130] >       "repoDigests": [
	I0916 11:12:27.285975   40135 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0916 11:12:27.285984   40135 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0916 11:12:27.285987   40135 command_runner.go:130] >       ],
	I0916 11:12:27.285992   40135 command_runner.go:130] >       "size": "68420934",
	I0916 11:12:27.285998   40135 command_runner.go:130] >       "uid": {
	I0916 11:12:27.286002   40135 command_runner.go:130] >         "value": "0"
	I0916 11:12:27.286005   40135 command_runner.go:130] >       },
	I0916 11:12:27.286009   40135 command_runner.go:130] >       "username": "",
	I0916 11:12:27.286013   40135 command_runner.go:130] >       "spec": null,
	I0916 11:12:27.286017   40135 command_runner.go:130] >       "pinned": false
	I0916 11:12:27.286022   40135 command_runner.go:130] >     },
	I0916 11:12:27.286027   40135 command_runner.go:130] >     {
	I0916 11:12:27.286033   40135 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0916 11:12:27.286040   40135 command_runner.go:130] >       "repoTags": [
	I0916 11:12:27.286044   40135 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0916 11:12:27.286050   40135 command_runner.go:130] >       ],
	I0916 11:12:27.286054   40135 command_runner.go:130] >       "repoDigests": [
	I0916 11:12:27.286061   40135 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0916 11:12:27.286069   40135 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0916 11:12:27.286074   40135 command_runner.go:130] >       ],
	I0916 11:12:27.286080   40135 command_runner.go:130] >       "size": "742080",
	I0916 11:12:27.286084   40135 command_runner.go:130] >       "uid": {
	I0916 11:12:27.286090   40135 command_runner.go:130] >         "value": "65535"
	I0916 11:12:27.286094   40135 command_runner.go:130] >       },
	I0916 11:12:27.286098   40135 command_runner.go:130] >       "username": "",
	I0916 11:12:27.286101   40135 command_runner.go:130] >       "spec": null,
	I0916 11:12:27.286107   40135 command_runner.go:130] >       "pinned": true
	I0916 11:12:27.286111   40135 command_runner.go:130] >     }
	I0916 11:12:27.286114   40135 command_runner.go:130] >   ]
	I0916 11:12:27.286117   40135 command_runner.go:130] > }
	I0916 11:12:27.286227   40135 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 11:12:27.286237   40135 cache_images.go:84] Images are preloaded, skipping loading
	I0916 11:12:27.286244   40135 kubeadm.go:934] updating node { 192.168.39.32 8443 v1.31.1 crio true true} ...
	I0916 11:12:27.286331   40135 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-736061 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.32
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-736061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 11:12:27.286392   40135 ssh_runner.go:195] Run: crio config
	I0916 11:12:27.326001   40135 command_runner.go:130] ! time="2024-09-16 11:12:27.304932753Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0916 11:12:27.332712   40135 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0916 11:12:27.346533   40135 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0916 11:12:27.346557   40135 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0916 11:12:27.346564   40135 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0916 11:12:27.346567   40135 command_runner.go:130] > #
	I0916 11:12:27.346573   40135 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0916 11:12:27.346580   40135 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0916 11:12:27.346585   40135 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0916 11:12:27.346594   40135 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0916 11:12:27.346599   40135 command_runner.go:130] > # reload'.
	I0916 11:12:27.346605   40135 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0916 11:12:27.346611   40135 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0916 11:12:27.346617   40135 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0916 11:12:27.346625   40135 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0916 11:12:27.346629   40135 command_runner.go:130] > [crio]
	I0916 11:12:27.346634   40135 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0916 11:12:27.346641   40135 command_runner.go:130] > # containers images, in this directory.
	I0916 11:12:27.346646   40135 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0916 11:12:27.346655   40135 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0916 11:12:27.346674   40135 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0916 11:12:27.346683   40135 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0916 11:12:27.346690   40135 command_runner.go:130] > # imagestore = ""
	I0916 11:12:27.346696   40135 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0916 11:12:27.346705   40135 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0916 11:12:27.346710   40135 command_runner.go:130] > storage_driver = "overlay"
	I0916 11:12:27.346716   40135 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0916 11:12:27.346723   40135 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0916 11:12:27.346730   40135 command_runner.go:130] > storage_option = [
	I0916 11:12:27.346736   40135 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0916 11:12:27.346742   40135 command_runner.go:130] > ]
	I0916 11:12:27.346748   40135 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0916 11:12:27.346756   40135 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0916 11:12:27.346762   40135 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0916 11:12:27.346769   40135 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0916 11:12:27.346775   40135 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0916 11:12:27.346782   40135 command_runner.go:130] > # always happen on a node reboot
	I0916 11:12:27.346787   40135 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0916 11:12:27.346797   40135 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0916 11:12:27.346805   40135 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0916 11:12:27.346811   40135 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0916 11:12:27.346818   40135 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0916 11:12:27.346825   40135 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0916 11:12:27.346834   40135 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0916 11:12:27.346840   40135 command_runner.go:130] > # internal_wipe = true
	I0916 11:12:27.346849   40135 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0916 11:12:27.346856   40135 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0916 11:12:27.346863   40135 command_runner.go:130] > # internal_repair = false
	I0916 11:12:27.346874   40135 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0916 11:12:27.346883   40135 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0916 11:12:27.346890   40135 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0916 11:12:27.346897   40135 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0916 11:12:27.346904   40135 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0916 11:12:27.346909   40135 command_runner.go:130] > [crio.api]
	I0916 11:12:27.346915   40135 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0916 11:12:27.346921   40135 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0916 11:12:27.346927   40135 command_runner.go:130] > # IP address on which the stream server will listen.
	I0916 11:12:27.346933   40135 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0916 11:12:27.346940   40135 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0916 11:12:27.346947   40135 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0916 11:12:27.346951   40135 command_runner.go:130] > # stream_port = "0"
	I0916 11:12:27.346957   40135 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0916 11:12:27.346964   40135 command_runner.go:130] > # stream_enable_tls = false
	I0916 11:12:27.346970   40135 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0916 11:12:27.346976   40135 command_runner.go:130] > # stream_idle_timeout = ""
	I0916 11:12:27.346982   40135 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0916 11:12:27.346990   40135 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0916 11:12:27.346995   40135 command_runner.go:130] > # minutes.
	I0916 11:12:27.346999   40135 command_runner.go:130] > # stream_tls_cert = ""
	I0916 11:12:27.347007   40135 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0916 11:12:27.347015   40135 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0916 11:12:27.347021   40135 command_runner.go:130] > # stream_tls_key = ""
	I0916 11:12:27.347026   40135 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0916 11:12:27.347034   40135 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0916 11:12:27.347049   40135 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0916 11:12:27.347055   40135 command_runner.go:130] > # stream_tls_ca = ""
	I0916 11:12:27.347065   40135 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0916 11:12:27.347071   40135 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0916 11:12:27.347078   40135 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0916 11:12:27.347085   40135 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0916 11:12:27.347091   40135 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0916 11:12:27.347099   40135 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0916 11:12:27.347105   40135 command_runner.go:130] > [crio.runtime]
	I0916 11:12:27.347111   40135 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0916 11:12:27.347118   40135 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0916 11:12:27.347124   40135 command_runner.go:130] > # "nofile=1024:2048"
	I0916 11:12:27.347130   40135 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0916 11:12:27.347135   40135 command_runner.go:130] > # default_ulimits = [
	I0916 11:12:27.347139   40135 command_runner.go:130] > # ]
	I0916 11:12:27.347144   40135 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0916 11:12:27.347150   40135 command_runner.go:130] > # no_pivot = false
	I0916 11:12:27.347156   40135 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0916 11:12:27.347164   40135 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0916 11:12:27.347171   40135 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0916 11:12:27.347177   40135 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0916 11:12:27.347184   40135 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0916 11:12:27.347194   40135 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0916 11:12:27.347200   40135 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0916 11:12:27.347205   40135 command_runner.go:130] > # Cgroup setting for conmon
	I0916 11:12:27.347214   40135 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0916 11:12:27.347219   40135 command_runner.go:130] > conmon_cgroup = "pod"
	I0916 11:12:27.347225   40135 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0916 11:12:27.347234   40135 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0916 11:12:27.347242   40135 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0916 11:12:27.347247   40135 command_runner.go:130] > conmon_env = [
	I0916 11:12:27.347253   40135 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0916 11:12:27.347258   40135 command_runner.go:130] > ]
	I0916 11:12:27.347263   40135 command_runner.go:130] > # Additional environment variables to set for all the
	I0916 11:12:27.347270   40135 command_runner.go:130] > # containers. These are overridden if set in the
	I0916 11:12:27.347276   40135 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0916 11:12:27.347282   40135 command_runner.go:130] > # default_env = [
	I0916 11:12:27.347285   40135 command_runner.go:130] > # ]
	I0916 11:12:27.347293   40135 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0916 11:12:27.347300   40135 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0916 11:12:27.347306   40135 command_runner.go:130] > # selinux = false
	I0916 11:12:27.347312   40135 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0916 11:12:27.347320   40135 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0916 11:12:27.347328   40135 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0916 11:12:27.347332   40135 command_runner.go:130] > # seccomp_profile = ""
	I0916 11:12:27.347340   40135 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0916 11:12:27.347345   40135 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0916 11:12:27.347353   40135 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0916 11:12:27.347358   40135 command_runner.go:130] > # which might increase security.
	I0916 11:12:27.347363   40135 command_runner.go:130] > # This option is currently deprecated,
	I0916 11:12:27.347370   40135 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0916 11:12:27.347375   40135 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0916 11:12:27.347383   40135 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0916 11:12:27.347391   40135 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0916 11:12:27.347399   40135 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0916 11:12:27.347407   40135 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0916 11:12:27.347414   40135 command_runner.go:130] > # This option supports live configuration reload.
	I0916 11:12:27.347419   40135 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0916 11:12:27.347426   40135 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0916 11:12:27.347430   40135 command_runner.go:130] > # the cgroup blockio controller.
	I0916 11:12:27.347435   40135 command_runner.go:130] > # blockio_config_file = ""
	I0916 11:12:27.347441   40135 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0916 11:12:27.347446   40135 command_runner.go:130] > # blockio parameters.
	I0916 11:12:27.347450   40135 command_runner.go:130] > # blockio_reload = false
	I0916 11:12:27.347458   40135 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0916 11:12:27.347466   40135 command_runner.go:130] > # irqbalance daemon.
	I0916 11:12:27.347470   40135 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0916 11:12:27.347478   40135 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0916 11:12:27.347488   40135 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0916 11:12:27.347497   40135 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0916 11:12:27.347503   40135 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0916 11:12:27.347511   40135 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0916 11:12:27.347517   40135 command_runner.go:130] > # This option supports live configuration reload.
	I0916 11:12:27.347523   40135 command_runner.go:130] > # rdt_config_file = ""
	I0916 11:12:27.347528   40135 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0916 11:12:27.347535   40135 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0916 11:12:27.347550   40135 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0916 11:12:27.347556   40135 command_runner.go:130] > # separate_pull_cgroup = ""
	I0916 11:12:27.347562   40135 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0916 11:12:27.347568   40135 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0916 11:12:27.347574   40135 command_runner.go:130] > # will be added.
	I0916 11:12:27.347578   40135 command_runner.go:130] > # default_capabilities = [
	I0916 11:12:27.347583   40135 command_runner.go:130] > # 	"CHOWN",
	I0916 11:12:27.347588   40135 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0916 11:12:27.347594   40135 command_runner.go:130] > # 	"FSETID",
	I0916 11:12:27.347597   40135 command_runner.go:130] > # 	"FOWNER",
	I0916 11:12:27.347603   40135 command_runner.go:130] > # 	"SETGID",
	I0916 11:12:27.347607   40135 command_runner.go:130] > # 	"SETUID",
	I0916 11:12:27.347613   40135 command_runner.go:130] > # 	"SETPCAP",
	I0916 11:12:27.347617   40135 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0916 11:12:27.347621   40135 command_runner.go:130] > # 	"KILL",
	I0916 11:12:27.347624   40135 command_runner.go:130] > # ]
	I0916 11:12:27.347632   40135 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0916 11:12:27.347640   40135 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0916 11:12:27.347645   40135 command_runner.go:130] > # add_inheritable_capabilities = false
	I0916 11:12:27.347653   40135 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0916 11:12:27.347659   40135 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0916 11:12:27.347665   40135 command_runner.go:130] > default_sysctls = [
	I0916 11:12:27.347669   40135 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0916 11:12:27.347673   40135 command_runner.go:130] > ]
	I0916 11:12:27.347677   40135 command_runner.go:130] > # List of devices on the host that a
	I0916 11:12:27.347684   40135 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0916 11:12:27.347688   40135 command_runner.go:130] > # allowed_devices = [
	I0916 11:12:27.347694   40135 command_runner.go:130] > # 	"/dev/fuse",
	I0916 11:12:27.347697   40135 command_runner.go:130] > # ]
	I0916 11:12:27.347705   40135 command_runner.go:130] > # List of additional devices. specified as
	I0916 11:12:27.347712   40135 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0916 11:12:27.347719   40135 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0916 11:12:27.347724   40135 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0916 11:12:27.347731   40135 command_runner.go:130] > # additional_devices = [
	I0916 11:12:27.347734   40135 command_runner.go:130] > # ]
	I0916 11:12:27.347741   40135 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0916 11:12:27.347747   40135 command_runner.go:130] > # cdi_spec_dirs = [
	I0916 11:12:27.347751   40135 command_runner.go:130] > # 	"/etc/cdi",
	I0916 11:12:27.347757   40135 command_runner.go:130] > # 	"/var/run/cdi",
	I0916 11:12:27.347761   40135 command_runner.go:130] > # ]
	I0916 11:12:27.347769   40135 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0916 11:12:27.347777   40135 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0916 11:12:27.347784   40135 command_runner.go:130] > # Defaults to false.
	I0916 11:12:27.347789   40135 command_runner.go:130] > # device_ownership_from_security_context = false
	I0916 11:12:27.347798   40135 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0916 11:12:27.347806   40135 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0916 11:12:27.347811   40135 command_runner.go:130] > # hooks_dir = [
	I0916 11:12:27.347816   40135 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0916 11:12:27.347821   40135 command_runner.go:130] > # ]
	I0916 11:12:27.347827   40135 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0916 11:12:27.347835   40135 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0916 11:12:27.347840   40135 command_runner.go:130] > # its default mounts from the following two files:
	I0916 11:12:27.347843   40135 command_runner.go:130] > #
	I0916 11:12:27.347851   40135 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0916 11:12:27.347858   40135 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0916 11:12:27.347865   40135 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0916 11:12:27.347868   40135 command_runner.go:130] > #
	I0916 11:12:27.347881   40135 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0916 11:12:27.347887   40135 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0916 11:12:27.347895   40135 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0916 11:12:27.347902   40135 command_runner.go:130] > #      only add mounts it finds in this file.
	I0916 11:12:27.347905   40135 command_runner.go:130] > #
	I0916 11:12:27.347912   40135 command_runner.go:130] > # default_mounts_file = ""
	I0916 11:12:27.347917   40135 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0916 11:12:27.347925   40135 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0916 11:12:27.347931   40135 command_runner.go:130] > pids_limit = 1024
	I0916 11:12:27.347937   40135 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0916 11:12:27.347945   40135 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0916 11:12:27.347954   40135 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0916 11:12:27.347962   40135 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0916 11:12:27.347968   40135 command_runner.go:130] > # log_size_max = -1
	I0916 11:12:27.347975   40135 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0916 11:12:27.347981   40135 command_runner.go:130] > # log_to_journald = false
	I0916 11:12:27.347987   40135 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0916 11:12:27.347994   40135 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0916 11:12:27.347999   40135 command_runner.go:130] > # Path to directory for container attach sockets.
	I0916 11:12:27.348006   40135 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0916 11:12:27.348012   40135 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0916 11:12:27.348018   40135 command_runner.go:130] > # bind_mount_prefix = ""
	I0916 11:12:27.348024   40135 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0916 11:12:27.348030   40135 command_runner.go:130] > # read_only = false
	I0916 11:12:27.348036   40135 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0916 11:12:27.348044   40135 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0916 11:12:27.348050   40135 command_runner.go:130] > # live configuration reload.
	I0916 11:12:27.348054   40135 command_runner.go:130] > # log_level = "info"
	I0916 11:12:27.348062   40135 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0916 11:12:27.348068   40135 command_runner.go:130] > # This option supports live configuration reload.
	I0916 11:12:27.348073   40135 command_runner.go:130] > # log_filter = ""
	I0916 11:12:27.348079   40135 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0916 11:12:27.348087   40135 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0916 11:12:27.348093   40135 command_runner.go:130] > # separated by comma.
	I0916 11:12:27.348100   40135 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0916 11:12:27.348106   40135 command_runner.go:130] > # uid_mappings = ""
	I0916 11:12:27.348112   40135 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0916 11:12:27.348118   40135 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0916 11:12:27.348124   40135 command_runner.go:130] > # separated by comma.
	I0916 11:12:27.348132   40135 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0916 11:12:27.348138   40135 command_runner.go:130] > # gid_mappings = ""
	I0916 11:12:27.348144   40135 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0916 11:12:27.348152   40135 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0916 11:12:27.348158   40135 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0916 11:12:27.348168   40135 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0916 11:12:27.348175   40135 command_runner.go:130] > # minimum_mappable_uid = -1
	I0916 11:12:27.348181   40135 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0916 11:12:27.348189   40135 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0916 11:12:27.348197   40135 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0916 11:12:27.348204   40135 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0916 11:12:27.348210   40135 command_runner.go:130] > # minimum_mappable_gid = -1
	I0916 11:12:27.348216   40135 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0916 11:12:27.348224   40135 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0916 11:12:27.348230   40135 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0916 11:12:27.348237   40135 command_runner.go:130] > # ctr_stop_timeout = 30
	I0916 11:12:27.348243   40135 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0916 11:12:27.348250   40135 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0916 11:12:27.348257   40135 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0916 11:12:27.348262   40135 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0916 11:12:27.348268   40135 command_runner.go:130] > drop_infra_ctr = false
	I0916 11:12:27.348274   40135 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0916 11:12:27.348281   40135 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0916 11:12:27.348288   40135 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0916 11:12:27.348294   40135 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0916 11:12:27.348301   40135 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0916 11:12:27.348308   40135 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0916 11:12:27.348314   40135 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0916 11:12:27.348321   40135 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0916 11:12:27.348324   40135 command_runner.go:130] > # shared_cpuset = ""
	I0916 11:12:27.348330   40135 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0916 11:12:27.348336   40135 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0916 11:12:27.348341   40135 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0916 11:12:27.348349   40135 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0916 11:12:27.348354   40135 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0916 11:12:27.348359   40135 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0916 11:12:27.348368   40135 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0916 11:12:27.348371   40135 command_runner.go:130] > # enable_criu_support = false
	I0916 11:12:27.348377   40135 command_runner.go:130] > # Enable/disable the generation of the container,
	I0916 11:12:27.348385   40135 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0916 11:12:27.348389   40135 command_runner.go:130] > # enable_pod_events = false
	I0916 11:12:27.348397   40135 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0916 11:12:27.348405   40135 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0916 11:12:27.348410   40135 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0916 11:12:27.348416   40135 command_runner.go:130] > # default_runtime = "runc"
	I0916 11:12:27.348421   40135 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0916 11:12:27.348430   40135 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0916 11:12:27.348443   40135 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0916 11:12:27.348450   40135 command_runner.go:130] > # creation as a file is not desired either.
	I0916 11:12:27.348458   40135 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0916 11:12:27.348463   40135 command_runner.go:130] > # the hostname is being managed dynamically.
	I0916 11:12:27.348470   40135 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0916 11:12:27.348473   40135 command_runner.go:130] > # ]
	I0916 11:12:27.348487   40135 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0916 11:12:27.348493   40135 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0916 11:12:27.348501   40135 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0916 11:12:27.348508   40135 command_runner.go:130] > # Each entry in the table should follow the format:
	I0916 11:12:27.348511   40135 command_runner.go:130] > #
	I0916 11:12:27.348516   40135 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0916 11:12:27.348522   40135 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0916 11:12:27.348540   40135 command_runner.go:130] > # runtime_type = "oci"
	I0916 11:12:27.348546   40135 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0916 11:12:27.348551   40135 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0916 11:12:27.348557   40135 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0916 11:12:27.348562   40135 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0916 11:12:27.348568   40135 command_runner.go:130] > # monitor_env = []
	I0916 11:12:27.348573   40135 command_runner.go:130] > # privileged_without_host_devices = false
	I0916 11:12:27.348579   40135 command_runner.go:130] > # allowed_annotations = []
	I0916 11:12:27.348584   40135 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0916 11:12:27.348590   40135 command_runner.go:130] > # Where:
	I0916 11:12:27.348595   40135 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0916 11:12:27.348603   40135 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0916 11:12:27.348612   40135 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0916 11:12:27.348618   40135 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0916 11:12:27.348623   40135 command_runner.go:130] > #   in $PATH.
	I0916 11:12:27.348629   40135 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0916 11:12:27.348636   40135 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0916 11:12:27.348642   40135 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0916 11:12:27.348647   40135 command_runner.go:130] > #   state.
	I0916 11:12:27.348654   40135 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0916 11:12:27.348662   40135 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0916 11:12:27.348670   40135 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0916 11:12:27.348676   40135 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0916 11:12:27.348682   40135 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0916 11:12:27.348690   40135 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0916 11:12:27.348696   40135 command_runner.go:130] > #   The currently recognized values are:
	I0916 11:12:27.348704   40135 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0916 11:12:27.348713   40135 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0916 11:12:27.348721   40135 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0916 11:12:27.348727   40135 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0916 11:12:27.348736   40135 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0916 11:12:27.348744   40135 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0916 11:12:27.348751   40135 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0916 11:12:27.348759   40135 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0916 11:12:27.348766   40135 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0916 11:12:27.348774   40135 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0916 11:12:27.348781   40135 command_runner.go:130] > #   deprecated option "conmon".
	I0916 11:12:27.348788   40135 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0916 11:12:27.348795   40135 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0916 11:12:27.348801   40135 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0916 11:12:27.348808   40135 command_runner.go:130] > #   should be moved to the container's cgroup
	I0916 11:12:27.348814   40135 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the montior.
	I0916 11:12:27.348820   40135 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0916 11:12:27.348827   40135 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0916 11:12:27.348834   40135 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0916 11:12:27.348837   40135 command_runner.go:130] > #
	I0916 11:12:27.348842   40135 command_runner.go:130] > # Using the seccomp notifier feature:
	I0916 11:12:27.348846   40135 command_runner.go:130] > #
	I0916 11:12:27.348852   40135 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0916 11:12:27.348859   40135 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0916 11:12:27.348865   40135 command_runner.go:130] > #
	I0916 11:12:27.348874   40135 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0916 11:12:27.348882   40135 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0916 11:12:27.348886   40135 command_runner.go:130] > #
	I0916 11:12:27.348894   40135 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0916 11:12:27.348898   40135 command_runner.go:130] > # feature.
	I0916 11:12:27.348902   40135 command_runner.go:130] > #
	I0916 11:12:27.348908   40135 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0916 11:12:27.348917   40135 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0916 11:12:27.348925   40135 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0916 11:12:27.348933   40135 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0916 11:12:27.348940   40135 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0916 11:12:27.348949   40135 command_runner.go:130] > #
	I0916 11:12:27.348956   40135 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0916 11:12:27.348964   40135 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0916 11:12:27.348967   40135 command_runner.go:130] > #
	I0916 11:12:27.348974   40135 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0916 11:12:27.348981   40135 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0916 11:12:27.348984   40135 command_runner.go:130] > #
	I0916 11:12:27.348992   40135 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0916 11:12:27.348998   40135 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0916 11:12:27.349003   40135 command_runner.go:130] > # limitation.
	I0916 11:12:27.349008   40135 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0916 11:12:27.349014   40135 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0916 11:12:27.349018   40135 command_runner.go:130] > runtime_type = "oci"
	I0916 11:12:27.349024   40135 command_runner.go:130] > runtime_root = "/run/runc"
	I0916 11:12:27.349028   40135 command_runner.go:130] > runtime_config_path = ""
	I0916 11:12:27.349034   40135 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0916 11:12:27.349038   40135 command_runner.go:130] > monitor_cgroup = "pod"
	I0916 11:12:27.349044   40135 command_runner.go:130] > monitor_exec_cgroup = ""
	I0916 11:12:27.349048   40135 command_runner.go:130] > monitor_env = [
	I0916 11:12:27.349056   40135 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0916 11:12:27.349059   40135 command_runner.go:130] > ]
	I0916 11:12:27.349064   40135 command_runner.go:130] > privileged_without_host_devices = false
	I0916 11:12:27.349084   40135 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0916 11:12:27.349094   40135 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0916 11:12:27.349101   40135 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0916 11:12:27.349110   40135 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0916 11:12:27.349120   40135 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0916 11:12:27.349140   40135 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0916 11:12:27.349157   40135 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0916 11:12:27.349169   40135 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0916 11:12:27.349177   40135 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0916 11:12:27.349187   40135 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0916 11:12:27.349192   40135 command_runner.go:130] > # Example:
	I0916 11:12:27.349198   40135 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0916 11:12:27.349204   40135 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0916 11:12:27.349209   40135 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0916 11:12:27.349216   40135 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0916 11:12:27.349220   40135 command_runner.go:130] > # cpuset = 0
	I0916 11:12:27.349224   40135 command_runner.go:130] > # cpushares = "0-1"
	I0916 11:12:27.349229   40135 command_runner.go:130] > # Where:
	I0916 11:12:27.349234   40135 command_runner.go:130] > # The workload name is workload-type.
	I0916 11:12:27.349242   40135 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0916 11:12:27.349250   40135 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0916 11:12:27.349255   40135 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0916 11:12:27.349265   40135 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0916 11:12:27.349272   40135 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0916 11:12:27.349279   40135 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0916 11:12:27.349286   40135 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0916 11:12:27.349292   40135 command_runner.go:130] > # Default value is set to true
	I0916 11:12:27.349296   40135 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0916 11:12:27.349303   40135 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0916 11:12:27.349308   40135 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0916 11:12:27.349314   40135 command_runner.go:130] > # Default value is set to 'false'
	I0916 11:12:27.349318   40135 command_runner.go:130] > # disable_hostport_mapping = false
	I0916 11:12:27.349324   40135 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0916 11:12:27.349330   40135 command_runner.go:130] > #
	I0916 11:12:27.349336   40135 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0916 11:12:27.349342   40135 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0916 11:12:27.349348   40135 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0916 11:12:27.349354   40135 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0916 11:12:27.349359   40135 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0916 11:12:27.349363   40135 command_runner.go:130] > [crio.image]
	I0916 11:12:27.349368   40135 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0916 11:12:27.349372   40135 command_runner.go:130] > # default_transport = "docker://"
	I0916 11:12:27.349378   40135 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0916 11:12:27.349384   40135 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0916 11:12:27.349387   40135 command_runner.go:130] > # global_auth_file = ""
	I0916 11:12:27.349392   40135 command_runner.go:130] > # The image used to instantiate infra containers.
	I0916 11:12:27.349396   40135 command_runner.go:130] > # This option supports live configuration reload.
	I0916 11:12:27.349400   40135 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0916 11:12:27.349406   40135 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0916 11:12:27.349411   40135 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0916 11:12:27.349415   40135 command_runner.go:130] > # This option supports live configuration reload.
	I0916 11:12:27.349419   40135 command_runner.go:130] > # pause_image_auth_file = ""
	I0916 11:12:27.349424   40135 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0916 11:12:27.349430   40135 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0916 11:12:27.349435   40135 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0916 11:12:27.349441   40135 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0916 11:12:27.349445   40135 command_runner.go:130] > # pause_command = "/pause"
	I0916 11:12:27.349450   40135 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0916 11:12:27.349456   40135 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0916 11:12:27.349461   40135 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0916 11:12:27.349468   40135 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0916 11:12:27.349476   40135 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0916 11:12:27.349482   40135 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0916 11:12:27.349488   40135 command_runner.go:130] > # pinned_images = [
	I0916 11:12:27.349491   40135 command_runner.go:130] > # ]
	I0916 11:12:27.349498   40135 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0916 11:12:27.349506   40135 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0916 11:12:27.349513   40135 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0916 11:12:27.349525   40135 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0916 11:12:27.349533   40135 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0916 11:12:27.349539   40135 command_runner.go:130] > # signature_policy = ""
	I0916 11:12:27.349544   40135 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0916 11:12:27.349553   40135 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0916 11:12:27.349561   40135 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0916 11:12:27.349567   40135 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0916 11:12:27.349575   40135 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0916 11:12:27.349579   40135 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0916 11:12:27.349587   40135 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0916 11:12:27.349595   40135 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0916 11:12:27.349599   40135 command_runner.go:130] > # changing them here.
	I0916 11:12:27.349610   40135 command_runner.go:130] > # insecure_registries = [
	I0916 11:12:27.349613   40135 command_runner.go:130] > # ]
	I0916 11:12:27.349620   40135 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0916 11:12:27.349626   40135 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0916 11:12:27.349630   40135 command_runner.go:130] > # image_volumes = "mkdir"
	I0916 11:12:27.349635   40135 command_runner.go:130] > # Temporary directory to use for storing big files
	I0916 11:12:27.349642   40135 command_runner.go:130] > # big_files_temporary_dir = ""
	I0916 11:12:27.349648   40135 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0916 11:12:27.349653   40135 command_runner.go:130] > # CNI plugins.
	I0916 11:12:27.349657   40135 command_runner.go:130] > [crio.network]
	I0916 11:12:27.349663   40135 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0916 11:12:27.349670   40135 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0916 11:12:27.349674   40135 command_runner.go:130] > # cni_default_network = ""
	I0916 11:12:27.349688   40135 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0916 11:12:27.349692   40135 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0916 11:12:27.349700   40135 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0916 11:12:27.349706   40135 command_runner.go:130] > # plugin_dirs = [
	I0916 11:12:27.349710   40135 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0916 11:12:27.349716   40135 command_runner.go:130] > # ]
	I0916 11:12:27.349721   40135 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0916 11:12:27.349727   40135 command_runner.go:130] > [crio.metrics]
	I0916 11:12:27.349732   40135 command_runner.go:130] > # Globally enable or disable metrics support.
	I0916 11:12:27.349739   40135 command_runner.go:130] > enable_metrics = true
	I0916 11:12:27.349743   40135 command_runner.go:130] > # Specify enabled metrics collectors.
	I0916 11:12:27.349751   40135 command_runner.go:130] > # Per default all metrics are enabled.
	I0916 11:12:27.349757   40135 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0916 11:12:27.349765   40135 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0916 11:12:27.349772   40135 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0916 11:12:27.349777   40135 command_runner.go:130] > # metrics_collectors = [
	I0916 11:12:27.349782   40135 command_runner.go:130] > # 	"operations",
	I0916 11:12:27.349787   40135 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0916 11:12:27.349793   40135 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0916 11:12:27.349798   40135 command_runner.go:130] > # 	"operations_errors",
	I0916 11:12:27.349804   40135 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0916 11:12:27.349808   40135 command_runner.go:130] > # 	"image_pulls_by_name",
	I0916 11:12:27.349814   40135 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0916 11:12:27.349818   40135 command_runner.go:130] > # 	"image_pulls_failures",
	I0916 11:12:27.349824   40135 command_runner.go:130] > # 	"image_pulls_successes",
	I0916 11:12:27.349828   40135 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0916 11:12:27.349835   40135 command_runner.go:130] > # 	"image_layer_reuse",
	I0916 11:12:27.349839   40135 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0916 11:12:27.349845   40135 command_runner.go:130] > # 	"containers_oom_total",
	I0916 11:12:27.349850   40135 command_runner.go:130] > # 	"containers_oom",
	I0916 11:12:27.349856   40135 command_runner.go:130] > # 	"processes_defunct",
	I0916 11:12:27.349860   40135 command_runner.go:130] > # 	"operations_total",
	I0916 11:12:27.349867   40135 command_runner.go:130] > # 	"operations_latency_seconds",
	I0916 11:12:27.349875   40135 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0916 11:12:27.349882   40135 command_runner.go:130] > # 	"operations_errors_total",
	I0916 11:12:27.349886   40135 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0916 11:12:27.349892   40135 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0916 11:12:27.349897   40135 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0916 11:12:27.349903   40135 command_runner.go:130] > # 	"image_pulls_success_total",
	I0916 11:12:27.349907   40135 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0916 11:12:27.349914   40135 command_runner.go:130] > # 	"containers_oom_count_total",
	I0916 11:12:27.349919   40135 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0916 11:12:27.349925   40135 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0916 11:12:27.349928   40135 command_runner.go:130] > # ]
	I0916 11:12:27.349934   40135 command_runner.go:130] > # The port on which the metrics server will listen.
	I0916 11:12:27.349939   40135 command_runner.go:130] > # metrics_port = 9090
	I0916 11:12:27.349944   40135 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0916 11:12:27.349950   40135 command_runner.go:130] > # metrics_socket = ""
	I0916 11:12:27.349954   40135 command_runner.go:130] > # The certificate for the secure metrics server.
	I0916 11:12:27.349962   40135 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0916 11:12:27.349971   40135 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0916 11:12:27.349977   40135 command_runner.go:130] > # certificate on any modification event.
	I0916 11:12:27.349981   40135 command_runner.go:130] > # metrics_cert = ""
	I0916 11:12:27.349988   40135 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0916 11:12:27.349994   40135 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0916 11:12:27.349999   40135 command_runner.go:130] > # metrics_key = ""
	I0916 11:12:27.350005   40135 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0916 11:12:27.350010   40135 command_runner.go:130] > [crio.tracing]
	I0916 11:12:27.350016   40135 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0916 11:12:27.350029   40135 command_runner.go:130] > # enable_tracing = false
	I0916 11:12:27.350034   40135 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0916 11:12:27.350041   40135 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0916 11:12:27.350048   40135 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0916 11:12:27.350054   40135 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0916 11:12:27.350058   40135 command_runner.go:130] > # CRI-O NRI configuration.
	I0916 11:12:27.350064   40135 command_runner.go:130] > [crio.nri]
	I0916 11:12:27.350068   40135 command_runner.go:130] > # Globally enable or disable NRI.
	I0916 11:12:27.350074   40135 command_runner.go:130] > # enable_nri = false
	I0916 11:12:27.350079   40135 command_runner.go:130] > # NRI socket to listen on.
	I0916 11:12:27.350085   40135 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0916 11:12:27.350090   40135 command_runner.go:130] > # NRI plugin directory to use.
	I0916 11:12:27.350096   40135 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0916 11:12:27.350101   40135 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0916 11:12:27.350108   40135 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0916 11:12:27.350114   40135 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0916 11:12:27.350120   40135 command_runner.go:130] > # nri_disable_connections = false
	I0916 11:12:27.350126   40135 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0916 11:12:27.350132   40135 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0916 11:12:27.350137   40135 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0916 11:12:27.350144   40135 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0916 11:12:27.350150   40135 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0916 11:12:27.350155   40135 command_runner.go:130] > [crio.stats]
	I0916 11:12:27.350161   40135 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0916 11:12:27.350168   40135 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0916 11:12:27.350172   40135 command_runner.go:130] > # stats_collection_period = 0
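The block above is CRI-O's rendered configuration as minikube applies it; the only values overridden from the defaults in this dump are pause_image and enable_metrics. A minimal sketch for confirming those overrides directly on the node, assuming the stock /etc/crio layout shown in the log and the multinode-736061 profile:

    minikube ssh -p multinode-736061 -- \
      sudo grep -rE '^(pause_image|enable_metrics)' /etc/crio/
    # expected: pause_image = "registry.k8s.io/pause:3.10" and enable_metrics = true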
	I0916 11:12:27.350235   40135 cni.go:84] Creating CNI manager for ""
	I0916 11:12:27.350246   40135 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0916 11:12:27.350255   40135 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 11:12:27.350273   40135 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.32 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-736061 NodeName:multinode-736061 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.32"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.32 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 11:12:27.350419   40135 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.32
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-736061"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.32
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.32"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
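The kubeadm manifest above is what gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines further down. When a multinode restart misbehaves, one low-risk check is to validate that rendered file against the kubeadm binary staged under /var/lib/minikube/binaries/v1.31.1 (listed just below) without applying anything; this is only a sketch and assumes the kubeadm release on the node ships the config validate subcommand (an init --dry-run against the same file is the older equivalent):

    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml.new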
	
	I0916 11:12:27.350474   40135 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 11:12:27.361566   40135 command_runner.go:130] > kubeadm
	I0916 11:12:27.361580   40135 command_runner.go:130] > kubectl
	I0916 11:12:27.361584   40135 command_runner.go:130] > kubelet
	I0916 11:12:27.361736   40135 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 11:12:27.361782   40135 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 11:12:27.372014   40135 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0916 11:12:27.391186   40135 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 11:12:27.408090   40135 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0916 11:12:27.425238   40135 ssh_runner.go:195] Run: grep 192.168.39.32	control-plane.minikube.internal$ /etc/hosts
	I0916 11:12:27.429573   40135 command_runner.go:130] > 192.168.39.32	control-plane.minikube.internal
	I0916 11:12:27.429655   40135 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:12:27.566945   40135 ssh_runner.go:195] Run: sudo systemctl start kubelet
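The daemon-reload followed by systemctl start kubelet above is the point where the node actually picks up the regenerated unit files. If the later cluster checks hang, a quick sanity check on the node is plain systemd tooling (nothing minikube-specific is assumed here):

    sudo systemctl is-active kubelet                                  # expect "active"
    sudo journalctl -u kubelet --since "2 min ago" --no-pager | tail -n 20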
	I0916 11:12:27.581910   40135 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061 for IP: 192.168.39.32
	I0916 11:12:27.581936   40135 certs.go:194] generating shared ca certs ...
	I0916 11:12:27.581957   40135 certs.go:226] acquiring lock for ca certs: {Name:mkc5eb18b90c7d501da61b378999fd65b785fd5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:12:27.582115   40135 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key
	I0916 11:12:27.582167   40135 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key
	I0916 11:12:27.582177   40135 certs.go:256] generating profile certs ...
	I0916 11:12:27.582249   40135 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/client.key
	I0916 11:12:27.582305   40135 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/apiserver.key.7afb17c7
	I0916 11:12:27.582343   40135 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/proxy-client.key
	I0916 11:12:27.582354   40135 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 11:12:27.582365   40135 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 11:12:27.582378   40135 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 11:12:27.582390   40135 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 11:12:27.582400   40135 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 11:12:27.582410   40135 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 11:12:27.582423   40135 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 11:12:27.582436   40135 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 11:12:27.582483   40135 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem (1338 bytes)
	W0916 11:12:27.582509   40135 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203_empty.pem, impossibly tiny 0 bytes
	I0916 11:12:27.582518   40135 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 11:12:27.582550   40135 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem (1082 bytes)
	I0916 11:12:27.582574   40135 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem (1123 bytes)
	I0916 11:12:27.582595   40135 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem (1679 bytes)
	I0916 11:12:27.582631   40135 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem (1708 bytes)
	I0916 11:12:27.582655   40135 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:12:27.582667   40135 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem -> /usr/share/ca-certificates/11203.pem
	I0916 11:12:27.582679   40135 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem -> /usr/share/ca-certificates/112032.pem
	I0916 11:12:27.583263   40135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 11:12:27.609531   40135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 11:12:27.634944   40135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 11:12:27.660493   40135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 11:12:27.685235   40135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0916 11:12:27.708765   40135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 11:12:27.733626   40135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 11:12:27.757830   40135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 11:12:27.782527   40135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 11:12:27.806733   40135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem --> /usr/share/ca-certificates/11203.pem (1338 bytes)
	I0916 11:12:27.831538   40135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem --> /usr/share/ca-certificates/112032.pem (1708 bytes)
	I0916 11:12:27.856224   40135 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
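Everything above stages the CA and profile certificates under /var/lib/minikube/certs before kubeadm runs. If an apiserver certificate/key mismatch is ever suspected, a small sketch for verifying the pair (this assumes the RSA keys minikube generates by default; use openssl pkey for other key types):

    sudo openssl x509 -noout -modulus -in /var/lib/minikube/certs/apiserver.crt | openssl md5
    sudo openssl rsa  -noout -modulus -in /var/lib/minikube/certs/apiserver.key | openssl md5
    # the two digests must match for a valid pair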
	I0916 11:12:27.873368   40135 ssh_runner.go:195] Run: openssl version
	I0916 11:12:27.879163   40135 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0916 11:12:27.879396   40135 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11203.pem && ln -fs /usr/share/ca-certificates/11203.pem /etc/ssl/certs/11203.pem"
	I0916 11:12:27.890038   40135 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11203.pem
	I0916 11:12:27.894595   40135 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11203.pem
	I0916 11:12:27.894654   40135 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11203.pem
	I0916 11:12:27.894716   40135 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11203.pem
	I0916 11:12:27.919619   40135 command_runner.go:130] > 51391683
	I0916 11:12:27.920420   40135 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11203.pem /etc/ssl/certs/51391683.0"
	I0916 11:12:27.932003   40135 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112032.pem && ln -fs /usr/share/ca-certificates/112032.pem /etc/ssl/certs/112032.pem"
	I0916 11:12:27.943754   40135 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112032.pem
	I0916 11:12:27.948079   40135 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112032.pem
	I0916 11:12:27.948103   40135 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112032.pem
	I0916 11:12:27.948147   40135 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112032.pem
	I0916 11:12:27.953662   40135 command_runner.go:130] > 3ec20f2e
	I0916 11:12:27.953740   40135 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112032.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 11:12:27.963952   40135 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 11:12:27.975088   40135 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:12:27.979448   40135 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 16 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:12:27.979467   40135 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:12:27.979508   40135 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:12:27.984970   40135 command_runner.go:130] > b5213941
	I0916 11:12:27.985201   40135 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
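The three hash-and-link passes above reproduce the c_rehash convention by hand: each CA is hashed with openssl x509 -hash and symlinked as <hash>.0 under /etc/ssl/certs so OpenSSL-based clients on the node trust it. The same check can be replayed for any of the certificates, for example:

    H=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 in this run
    ls -l /etc/ssl/certs/${H}.0    # should resolve to minikubeCA.pem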
	I0916 11:12:27.995006   40135 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 11:12:27.999529   40135 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 11:12:27.999557   40135 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0916 11:12:27.999566   40135 command_runner.go:130] > Device: 253,1	Inode: 2101800     Links: 1
	I0916 11:12:27.999605   40135 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 11:12:27.999620   40135 command_runner.go:130] > Access: 2024-09-16 11:05:45.018725064 +0000
	I0916 11:12:27.999631   40135 command_runner.go:130] > Modify: 2024-09-16 11:05:45.018725064 +0000
	I0916 11:12:27.999639   40135 command_runner.go:130] > Change: 2024-09-16 11:05:45.018725064 +0000
	I0916 11:12:27.999648   40135 command_runner.go:130] >  Birth: 2024-09-16 11:05:45.018725064 +0000
	I0916 11:12:27.999698   40135 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0916 11:12:28.005429   40135 command_runner.go:130] > Certificate will not expire
	I0916 11:12:28.005492   40135 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0916 11:12:28.010927   40135 command_runner.go:130] > Certificate will not expire
	I0916 11:12:28.011069   40135 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0916 11:12:28.016675   40135 command_runner.go:130] > Certificate will not expire
	I0916 11:12:28.016733   40135 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0916 11:12:28.022268   40135 command_runner.go:130] > Certificate will not expire
	I0916 11:12:28.022386   40135 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0916 11:12:28.027951   40135 command_runner.go:130] > Certificate will not expire
	I0916 11:12:28.028023   40135 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0916 11:12:28.033400   40135 command_runner.go:130] > Certificate will not expire
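Each openssl x509 -checkend 86400 call above exits zero only if the certificate is still valid 24 hours (86,400 seconds) from now, which is why the log simply reports "Certificate will not expire". To see the actual expiry date instead of a pass/fail answer:

    sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver-kubelet-client.crt
    # prints notAfter=<date>; repeat for the etcd and front-proxy certificates checked above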
	I0916 11:12:28.033473   40135 kubeadm.go:392] StartCluster: {Name:multinode-736061 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:multinode-736061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.32 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.215 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.60 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:12:28.033571   40135 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 11:12:28.033610   40135 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 11:12:28.072849   40135 command_runner.go:130] > 840a587a0926e82c37a05beb15b8a2038534eaa736ce1cfed1c0eecaa13a5cdd
	I0916 11:12:28.072892   40135 command_runner.go:130] > 02223ab1824986d0b8986be277df48e11a9e023715892ea637bf405f10af7198
	I0916 11:12:28.072902   40135 command_runner.go:130] > 7a89ff755837adc838b2681df4fdb3f0cd2733af338156c540403f460fc59bc0
	I0916 11:12:28.072914   40135 command_runner.go:130] > f8c55edbe2173ff25f684104100b9d6d5b8d80563dadb06b4c24bb06b2bf68ee
	I0916 11:12:28.072924   40135 command_runner.go:130] > b76d5d4ad419a79a8aa910e3cea3029c12e6a90c7750317c5fbe1c40b43aa762
	I0916 11:12:28.072933   40135 command_runner.go:130] > 769a75ad1934a20389b16c528015b4fd90be4bf92209ba2509e2cdf2e57e2a24
	I0916 11:12:28.072942   40135 command_runner.go:130] > d53f9aec7bc35845a9064135b7cf6154a190d2b15edab056cfd8296575fb25ba
	I0916 11:12:28.072951   40135 command_runner.go:130] > ed73e9089f633c6b3f26009f325ee6cfdd62ef60de41cbb639ec4aa8b02b84a7
	I0916 11:12:28.072976   40135 cri.go:89] found id: "840a587a0926e82c37a05beb15b8a2038534eaa736ce1cfed1c0eecaa13a5cdd"
	I0916 11:12:28.072988   40135 cri.go:89] found id: "02223ab1824986d0b8986be277df48e11a9e023715892ea637bf405f10af7198"
	I0916 11:12:28.072993   40135 cri.go:89] found id: "7a89ff755837adc838b2681df4fdb3f0cd2733af338156c540403f460fc59bc0"
	I0916 11:12:28.072998   40135 cri.go:89] found id: "f8c55edbe2173ff25f684104100b9d6d5b8d80563dadb06b4c24bb06b2bf68ee"
	I0916 11:12:28.073002   40135 cri.go:89] found id: "b76d5d4ad419a79a8aa910e3cea3029c12e6a90c7750317c5fbe1c40b43aa762"
	I0916 11:12:28.073007   40135 cri.go:89] found id: "769a75ad1934a20389b16c528015b4fd90be4bf92209ba2509e2cdf2e57e2a24"
	I0916 11:12:28.073010   40135 cri.go:89] found id: "d53f9aec7bc35845a9064135b7cf6154a190d2b15edab056cfd8296575fb25ba"
	I0916 11:12:28.073014   40135 cri.go:89] found id: "ed73e9089f633c6b3f26009f325ee6cfdd62ef60de41cbb639ec4aa8b02b84a7"
	I0916 11:12:28.073018   40135 cri.go:89] found id: ""
	I0916 11:12:28.073069   40135 ssh_runner.go:195] Run: sudo runc list -f json
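The container IDs above come from the crictl ps call earlier in this block, filtered to the kube-system namespace label. The same filter is handy interactively when mapping IDs back to pod names during a stuck restart; both commands below are stock crictl, and the ID is the coredns container reported in this run:

    sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system
    sudo crictl inspect 840a587a0926e82c37a05beb15b8a2038534eaa736ce1cfed1c0eecaa13a5cdd | head -n 20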
	
	
	==> CRI-O <==
	Sep 16 11:16:34 multinode-736061 crio[2989]: time="2024-09-16 11:16:34.502349529Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=698b6c88-48c5-413c-a1c9-57086cbbec84 name=/runtime.v1.RuntimeService/Version
	Sep 16 11:16:34 multinode-736061 crio[2989]: time="2024-09-16 11:16:34.503556984Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7d74bf84-ae37-4072-9bf9-6518bed3ebda name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 11:16:34 multinode-736061 crio[2989]: time="2024-09-16 11:16:34.504144243Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485394504117221,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7d74bf84-ae37-4072-9bf9-6518bed3ebda name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 11:16:34 multinode-736061 crio[2989]: time="2024-09-16 11:16:34.506718986Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c77466b2-ca12-41a5-aae9-fddb686927ab name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:16:34 multinode-736061 crio[2989]: time="2024-09-16 11:16:34.506797919Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c77466b2-ca12-41a5-aae9-fddb686927ab name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:16:34 multinode-736061 crio[2989]: time="2024-09-16 11:16:34.507156318Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:522d3b85a45483f18f7f58bd8b3e8d13be8e39fc300735f74f42f329fc76b41c,PodSandboxId:c27596adc976965a1472bc763ada489b272e6b61d2f34ac0643f3897280bfd19,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726485188158372438,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-g9fqk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0dd08783-fcfd-441f-8bda-c82c0c15173e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34160c655e5ab02a4a54bd049c7fbf32b5f1f714b64212932d9c2c16e2d4bf25,PodSandboxId:d6609b6804e2135dcae87f33c158bf648f918c21cfda5ee80259df9d474184ae,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726485154742693212,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qb4tq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 933f0749-7868-4e96-9b8e-67005545bbc5,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35a7839cd57d08487c669b6c86350b4a2f5f3b241a502106911bebe57fc8af36,PodSandboxId:78066c652dd8faaf3adaf02ffc316c8c72ab996c03a8aa8fe4284f1c3783c373,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726485154656393416,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nlhl2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ea84b9d-f364-4e26-8dc8-44c3b4d92417,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87a99d0015cbca7c7d38632c9cf9a1a9a00ce88cbe573c612f87fc9ca64ae309,PodSandboxId:b06a4343bbdd3a9bb94de977a2f0df301ae567da38dde74613b8372b9badaa2e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726485154505906722,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e515ac2-bcf5-4a0f-a3af-b4ee4e03a534,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d81e17eebccf4b84f735e1cafbb3e77e8e76abd8f4d62edf876aa5d55fd0b0d,PodSandboxId:fcfacdd69a46c5dab45cd0a3aa1fa9de4c957a839c412471e98012ee586cb38c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726485154436680778,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ftj9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa72720f-1c4a-46a2-a733-f411ccb6f628,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e7284c90c8c712246e8ddc247d88f7825f37f3e3c689acda276e3052c41850d,PodSandboxId:d9afb21537018b32c40d57ef0e3594c924f50d2bf3259377c7789677aadd1cc2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726485150640534512,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de66983060c1e167c6b9498eb8b0a025,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae1251600e6e87614f76ce47d26bb537d329e9c3726f16afd53b5917cabf6526,PodSandboxId:cd4168d0828d2e6cf60f35609bcafa8f36efd5728470fe9b5a5927f930be6967,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726485150608915249,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69d3e8c6e76d0bc1af3482326f7904d1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fa850b5495ffd4ec6303d58bd055944e4fd2d6bd860b6b70273e719497bdf2d,PodSandboxId:f4286a53710f269af6ca4f92b1992be516f9dedc66f945f30e7b0b95cdb2b85e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726485150554479561,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efede0e1597c8cbe70740f3169f7ec4a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:126fd7058d64d004c8ecb83905bc9e04f7dcd51ba92c467d2ec0175a0aedb8d4,PodSandboxId:113acd43d732ed38df48b77f813934192a2a1cfab0f438e8905c93feade283e7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726485150539003440,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94d3338940ee73a61a5075650d027904,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84517e6af45b4aaa0074acc2c9f529a29e3e476ef98b8a9b414655341b967c3b,PodSandboxId:779060032a611374116cc1e94df9c7e4a6c1443abec99f7abdef9464025a6169,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726484826321999428,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-g9fqk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0dd08783-fcfd-441f-8bda-c82c0c15173e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:840a587a0926e82c37a05beb15b8a2038534eaa736ce1cfed1c0eecaa13a5cdd,PodSandboxId:19286465f900afb5c6da745e2d4672e76d0bd87c33d561cb8eff8479eb72b5c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726484771766267901,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nlhl2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ea84b9d-f364-4e26-8dc8-44c3b4d92417,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02223ab1824986d0b8986be277df48e11a9e023715892ea637bf405f10af7198,PodSandboxId:01381d4d113d1f5aa2f6b3834d0194f5b4909599f84a8b29647fabd8f0fa5f7c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726484771695970386,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 5e515ac2-bcf5-4a0f-a3af-b4ee4e03a534,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a89ff755837adc838b2681df4fdb3f0cd2733af338156c540403f460fc59bc0,PodSandboxId:bd141ffff1a91e8b17d63ca2b8898889ab336bfde15ecdc79964aafaa1123465,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726484759715057078,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qb4tq,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 933f0749-7868-4e96-9b8e-67005545bbc5,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8c55edbe2173ff25f684104100b9d6d5b8d80563dadb06b4c24bb06b2bf68ee,PodSandboxId:cc5264d1c4b520dfff86dfc83596919ff23f9336b58339ea2a10f4492ba02b12,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726484759520373663,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ftj9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa72720f-1c4a-46a2-a733
-f411ccb6f628,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b76d5d4ad419a79a8aa910e3cea3029c12e6a90c7750317c5fbe1c40b43aa762,PodSandboxId:f771edf6fcef2eb32fd9b47472910c255ef2ee1f910f415da1ca1b8287fece1e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726484748620399557,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de66983060c1e167c6b9498eb8b0a025,}
,Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:769a75ad1934a20389b16c528015b4fd90be4bf92209ba2509e2cdf2e57e2a24,PodSandboxId:6237db42cfa9d23e63f16ff0ffa961fcbc1f9f25138e7c7fc11442d28aeecaff,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726484748618867302,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69d3e8c6e76d0bc1af3482326f7904d1,},Annotations:map[string]string{io.kubernetes.
container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d53f9aec7bc35845a9064135b7cf6154a190d2b15edab056cfd8296575fb25ba,PodSandboxId:c1754b1d745471f13f360eb1605866caa0a1b248d76224212a537736d73a9f20,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726484748609890980,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94d3338940ee73a61a5075650d027904,},Annotations:map[string]string{io
.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed73e9089f633c6b3f26009f325ee6cfdd62ef60de41cbb639ec4aa8b02b84a7,PodSandboxId:06f23871be821ecc20f321f0e711ab8e47f7a0308e547d26ab3365eaa22b0b27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726484748471628064,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efede0e1597c8cbe70740f3169f7ec4a,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c77466b2-ca12-41a5-aae9-fddb686927ab name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:16:34 multinode-736061 crio[2989]: time="2024-09-16 11:16:34.554529416Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=713a2c21-b1be-46a4-9573-bb0e63aaa1d3 name=/runtime.v1.RuntimeService/Version
	Sep 16 11:16:34 multinode-736061 crio[2989]: time="2024-09-16 11:16:34.554608578Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=713a2c21-b1be-46a4-9573-bb0e63aaa1d3 name=/runtime.v1.RuntimeService/Version
	Sep 16 11:16:34 multinode-736061 crio[2989]: time="2024-09-16 11:16:34.555925869Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=91f6370f-ce56-456e-b798-071414a1f9e7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 11:16:34 multinode-736061 crio[2989]: time="2024-09-16 11:16:34.556383338Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485394556356481,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=91f6370f-ce56-456e-b798-071414a1f9e7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 11:16:34 multinode-736061 crio[2989]: time="2024-09-16 11:16:34.556987044Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d41e6b85-22cb-4100-9487-9df490dd8efc name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:16:34 multinode-736061 crio[2989]: time="2024-09-16 11:16:34.557072187Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d41e6b85-22cb-4100-9487-9df490dd8efc name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:16:34 multinode-736061 crio[2989]: time="2024-09-16 11:16:34.557481300Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:522d3b85a45483f18f7f58bd8b3e8d13be8e39fc300735f74f42f329fc76b41c,PodSandboxId:c27596adc976965a1472bc763ada489b272e6b61d2f34ac0643f3897280bfd19,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726485188158372438,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-g9fqk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0dd08783-fcfd-441f-8bda-c82c0c15173e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34160c655e5ab02a4a54bd049c7fbf32b5f1f714b64212932d9c2c16e2d4bf25,PodSandboxId:d6609b6804e2135dcae87f33c158bf648f918c21cfda5ee80259df9d474184ae,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726485154742693212,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qb4tq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 933f0749-7868-4e96-9b8e-67005545bbc5,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35a7839cd57d08487c669b6c86350b4a2f5f3b241a502106911bebe57fc8af36,PodSandboxId:78066c652dd8faaf3adaf02ffc316c8c72ab996c03a8aa8fe4284f1c3783c373,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726485154656393416,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nlhl2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ea84b9d-f364-4e26-8dc8-44c3b4d92417,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87a99d0015cbca7c7d38632c9cf9a1a9a00ce88cbe573c612f87fc9ca64ae309,PodSandboxId:b06a4343bbdd3a9bb94de977a2f0df301ae567da38dde74613b8372b9badaa2e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726485154505906722,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e515ac2-bcf5-4a0f-a3af-b4ee4e03a534,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d81e17eebccf4b84f735e1cafbb3e77e8e76abd8f4d62edf876aa5d55fd0b0d,PodSandboxId:fcfacdd69a46c5dab45cd0a3aa1fa9de4c957a839c412471e98012ee586cb38c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726485154436680778,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ftj9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa72720f-1c4a-46a2-a733-f411ccb6f628,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e7284c90c8c712246e8ddc247d88f7825f37f3e3c689acda276e3052c41850d,PodSandboxId:d9afb21537018b32c40d57ef0e3594c924f50d2bf3259377c7789677aadd1cc2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726485150640534512,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de66983060c1e167c6b9498eb8b0a025,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae1251600e6e87614f76ce47d26bb537d329e9c3726f16afd53b5917cabf6526,PodSandboxId:cd4168d0828d2e6cf60f35609bcafa8f36efd5728470fe9b5a5927f930be6967,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726485150608915249,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69d3e8c6e76d0bc1af3482326f7904d1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fa850b5495ffd4ec6303d58bd055944e4fd2d6bd860b6b70273e719497bdf2d,PodSandboxId:f4286a53710f269af6ca4f92b1992be516f9dedc66f945f30e7b0b95cdb2b85e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726485150554479561,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efede0e1597c8cbe70740f3169f7ec4a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:126fd7058d64d004c8ecb83905bc9e04f7dcd51ba92c467d2ec0175a0aedb8d4,PodSandboxId:113acd43d732ed38df48b77f813934192a2a1cfab0f438e8905c93feade283e7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726485150539003440,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94d3338940ee73a61a5075650d027904,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84517e6af45b4aaa0074acc2c9f529a29e3e476ef98b8a9b414655341b967c3b,PodSandboxId:779060032a611374116cc1e94df9c7e4a6c1443abec99f7abdef9464025a6169,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726484826321999428,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-g9fqk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0dd08783-fcfd-441f-8bda-c82c0c15173e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:840a587a0926e82c37a05beb15b8a2038534eaa736ce1cfed1c0eecaa13a5cdd,PodSandboxId:19286465f900afb5c6da745e2d4672e76d0bd87c33d561cb8eff8479eb72b5c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726484771766267901,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nlhl2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ea84b9d-f364-4e26-8dc8-44c3b4d92417,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02223ab1824986d0b8986be277df48e11a9e023715892ea637bf405f10af7198,PodSandboxId:01381d4d113d1f5aa2f6b3834d0194f5b4909599f84a8b29647fabd8f0fa5f7c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726484771695970386,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 5e515ac2-bcf5-4a0f-a3af-b4ee4e03a534,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a89ff755837adc838b2681df4fdb3f0cd2733af338156c540403f460fc59bc0,PodSandboxId:bd141ffff1a91e8b17d63ca2b8898889ab336bfde15ecdc79964aafaa1123465,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726484759715057078,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qb4tq,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 933f0749-7868-4e96-9b8e-67005545bbc5,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8c55edbe2173ff25f684104100b9d6d5b8d80563dadb06b4c24bb06b2bf68ee,PodSandboxId:cc5264d1c4b520dfff86dfc83596919ff23f9336b58339ea2a10f4492ba02b12,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726484759520373663,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ftj9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa72720f-1c4a-46a2-a733
-f411ccb6f628,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b76d5d4ad419a79a8aa910e3cea3029c12e6a90c7750317c5fbe1c40b43aa762,PodSandboxId:f771edf6fcef2eb32fd9b47472910c255ef2ee1f910f415da1ca1b8287fece1e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726484748620399557,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de66983060c1e167c6b9498eb8b0a025,}
,Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:769a75ad1934a20389b16c528015b4fd90be4bf92209ba2509e2cdf2e57e2a24,PodSandboxId:6237db42cfa9d23e63f16ff0ffa961fcbc1f9f25138e7c7fc11442d28aeecaff,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726484748618867302,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69d3e8c6e76d0bc1af3482326f7904d1,},Annotations:map[string]string{io.kubernetes.
container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d53f9aec7bc35845a9064135b7cf6154a190d2b15edab056cfd8296575fb25ba,PodSandboxId:c1754b1d745471f13f360eb1605866caa0a1b248d76224212a537736d73a9f20,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726484748609890980,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94d3338940ee73a61a5075650d027904,},Annotations:map[string]string{io
.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed73e9089f633c6b3f26009f325ee6cfdd62ef60de41cbb639ec4aa8b02b84a7,PodSandboxId:06f23871be821ecc20f321f0e711ab8e47f7a0308e547d26ab3365eaa22b0b27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726484748471628064,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efede0e1597c8cbe70740f3169f7ec4a,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d41e6b85-22cb-4100-9487-9df490dd8efc name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:16:34 multinode-736061 crio[2989]: time="2024-09-16 11:16:34.599179428Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=84b23e0e-c86f-41f4-bd3c-1b14a315950e name=/runtime.v1.RuntimeService/Version
	Sep 16 11:16:34 multinode-736061 crio[2989]: time="2024-09-16 11:16:34.599250722Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=84b23e0e-c86f-41f4-bd3c-1b14a315950e name=/runtime.v1.RuntimeService/Version
	Sep 16 11:16:34 multinode-736061 crio[2989]: time="2024-09-16 11:16:34.601005322Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f00cfbbe-bc5c-4dcc-9eb9-28b39101e343 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 11:16:34 multinode-736061 crio[2989]: time="2024-09-16 11:16:34.601454081Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485394601430964,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f00cfbbe-bc5c-4dcc-9eb9-28b39101e343 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 11:16:34 multinode-736061 crio[2989]: time="2024-09-16 11:16:34.601941350Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e6cdb51f-e8a5-4e5e-87b6-a76475fe4013 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:16:34 multinode-736061 crio[2989]: time="2024-09-16 11:16:34.602028665Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e6cdb51f-e8a5-4e5e-87b6-a76475fe4013 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:16:34 multinode-736061 crio[2989]: time="2024-09-16 11:16:34.602525571Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:522d3b85a45483f18f7f58bd8b3e8d13be8e39fc300735f74f42f329fc76b41c,PodSandboxId:c27596adc976965a1472bc763ada489b272e6b61d2f34ac0643f3897280bfd19,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726485188158372438,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-g9fqk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0dd08783-fcfd-441f-8bda-c82c0c15173e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34160c655e5ab02a4a54bd049c7fbf32b5f1f714b64212932d9c2c16e2d4bf25,PodSandboxId:d6609b6804e2135dcae87f33c158bf648f918c21cfda5ee80259df9d474184ae,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726485154742693212,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qb4tq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 933f0749-7868-4e96-9b8e-67005545bbc5,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35a7839cd57d08487c669b6c86350b4a2f5f3b241a502106911bebe57fc8af36,PodSandboxId:78066c652dd8faaf3adaf02ffc316c8c72ab996c03a8aa8fe4284f1c3783c373,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726485154656393416,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nlhl2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ea84b9d-f364-4e26-8dc8-44c3b4d92417,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87a99d0015cbca7c7d38632c9cf9a1a9a00ce88cbe573c612f87fc9ca64ae309,PodSandboxId:b06a4343bbdd3a9bb94de977a2f0df301ae567da38dde74613b8372b9badaa2e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726485154505906722,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e515ac2-bcf5-4a0f-a3af-b4ee4e03a534,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d81e17eebccf4b84f735e1cafbb3e77e8e76abd8f4d62edf876aa5d55fd0b0d,PodSandboxId:fcfacdd69a46c5dab45cd0a3aa1fa9de4c957a839c412471e98012ee586cb38c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726485154436680778,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ftj9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa72720f-1c4a-46a2-a733-f411ccb6f628,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e7284c90c8c712246e8ddc247d88f7825f37f3e3c689acda276e3052c41850d,PodSandboxId:d9afb21537018b32c40d57ef0e3594c924f50d2bf3259377c7789677aadd1cc2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726485150640534512,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de66983060c1e167c6b9498eb8b0a025,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae1251600e6e87614f76ce47d26bb537d329e9c3726f16afd53b5917cabf6526,PodSandboxId:cd4168d0828d2e6cf60f35609bcafa8f36efd5728470fe9b5a5927f930be6967,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726485150608915249,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69d3e8c6e76d0bc1af3482326f7904d1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fa850b5495ffd4ec6303d58bd055944e4fd2d6bd860b6b70273e719497bdf2d,PodSandboxId:f4286a53710f269af6ca4f92b1992be516f9dedc66f945f30e7b0b95cdb2b85e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726485150554479561,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efede0e1597c8cbe70740f3169f7ec4a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:126fd7058d64d004c8ecb83905bc9e04f7dcd51ba92c467d2ec0175a0aedb8d4,PodSandboxId:113acd43d732ed38df48b77f813934192a2a1cfab0f438e8905c93feade283e7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726485150539003440,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94d3338940ee73a61a5075650d027904,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84517e6af45b4aaa0074acc2c9f529a29e3e476ef98b8a9b414655341b967c3b,PodSandboxId:779060032a611374116cc1e94df9c7e4a6c1443abec99f7abdef9464025a6169,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726484826321999428,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-g9fqk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0dd08783-fcfd-441f-8bda-c82c0c15173e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:840a587a0926e82c37a05beb15b8a2038534eaa736ce1cfed1c0eecaa13a5cdd,PodSandboxId:19286465f900afb5c6da745e2d4672e76d0bd87c33d561cb8eff8479eb72b5c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726484771766267901,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nlhl2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ea84b9d-f364-4e26-8dc8-44c3b4d92417,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02223ab1824986d0b8986be277df48e11a9e023715892ea637bf405f10af7198,PodSandboxId:01381d4d113d1f5aa2f6b3834d0194f5b4909599f84a8b29647fabd8f0fa5f7c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726484771695970386,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 5e515ac2-bcf5-4a0f-a3af-b4ee4e03a534,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a89ff755837adc838b2681df4fdb3f0cd2733af338156c540403f460fc59bc0,PodSandboxId:bd141ffff1a91e8b17d63ca2b8898889ab336bfde15ecdc79964aafaa1123465,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726484759715057078,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qb4tq,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 933f0749-7868-4e96-9b8e-67005545bbc5,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8c55edbe2173ff25f684104100b9d6d5b8d80563dadb06b4c24bb06b2bf68ee,PodSandboxId:cc5264d1c4b520dfff86dfc83596919ff23f9336b58339ea2a10f4492ba02b12,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726484759520373663,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ftj9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa72720f-1c4a-46a2-a733
-f411ccb6f628,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b76d5d4ad419a79a8aa910e3cea3029c12e6a90c7750317c5fbe1c40b43aa762,PodSandboxId:f771edf6fcef2eb32fd9b47472910c255ef2ee1f910f415da1ca1b8287fece1e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726484748620399557,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de66983060c1e167c6b9498eb8b0a025,}
,Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:769a75ad1934a20389b16c528015b4fd90be4bf92209ba2509e2cdf2e57e2a24,PodSandboxId:6237db42cfa9d23e63f16ff0ffa961fcbc1f9f25138e7c7fc11442d28aeecaff,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726484748618867302,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69d3e8c6e76d0bc1af3482326f7904d1,},Annotations:map[string]string{io.kubernetes.
container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d53f9aec7bc35845a9064135b7cf6154a190d2b15edab056cfd8296575fb25ba,PodSandboxId:c1754b1d745471f13f360eb1605866caa0a1b248d76224212a537736d73a9f20,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726484748609890980,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94d3338940ee73a61a5075650d027904,},Annotations:map[string]string{io
.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed73e9089f633c6b3f26009f325ee6cfdd62ef60de41cbb639ec4aa8b02b84a7,PodSandboxId:06f23871be821ecc20f321f0e711ab8e47f7a0308e547d26ab3365eaa22b0b27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726484748471628064,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efede0e1597c8cbe70740f3169f7ec4a,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e6cdb51f-e8a5-4e5e-87b6-a76475fe4013 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:16:34 multinode-736061 crio[2989]: time="2024-09-16 11:16:34.624181143Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=dc1430f8-0e5a-4f05-9842-ed32ecd8ccf0 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 16 11:16:34 multinode-736061 crio[2989]: time="2024-09-16 11:16:34.624626123Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:c27596adc976965a1472bc763ada489b272e6b61d2f34ac0643f3897280bfd19,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-g9fqk,Uid:0dd08783-fcfd-441f-8bda-c82c0c15173e,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726485188017768486,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-g9fqk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0dd08783-fcfd-441f-8bda-c82c0c15173e,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-16T11:12:33.869063406Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:78066c652dd8faaf3adaf02ffc316c8c72ab996c03a8aa8fe4284f1c3783c373,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-nlhl2,Uid:6ea84b9d-f364-4e26-8dc8-44c3b4d92417,Namespace:kube-system,Attempt:1,}
,State:SANDBOX_READY,CreatedAt:1726485154299572003,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-nlhl2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ea84b9d-f364-4e26-8dc8-44c3b4d92417,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-16T11:12:33.869052589Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b06a4343bbdd3a9bb94de977a2f0df301ae567da38dde74613b8372b9badaa2e,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:5e515ac2-bcf5-4a0f-a3af-b4ee4e03a534,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726485154247445140,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e515ac2-bcf5-4a0f-a3af-b4ee4e03a534,},Annotations:map[string]stri
ng{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-16T11:12:33.869062283Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d6609b6804e2135dcae87f33c158bf648f918c21cfda5ee80259df9d474184ae,Metadata:&PodSandboxMetadata{Name:kindnet-qb4tq,Uid:933f0749-7868-4e96-9b8e-67005545bbc5,Namespace:kube-system,Attempt
:1,},State:SANDBOX_READY,CreatedAt:1726485154231875524,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-qb4tq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 933f0749-7868-4e96-9b8e-67005545bbc5,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-16T11:12:33.869064619Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fcfacdd69a46c5dab45cd0a3aa1fa9de4c957a839c412471e98012ee586cb38c,Metadata:&PodSandboxMetadata{Name:kube-proxy-ftj9p,Uid:fa72720f-1c4a-46a2-a733-f411ccb6f628,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726485154228924961,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-ftj9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa72720f-1c4a-46a2-a733-f411ccb6f628,k8s-app: kube-proxy,pod-templ
ate-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-16T11:12:33.869060789Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:cd4168d0828d2e6cf60f35609bcafa8f36efd5728470fe9b5a5927f930be6967,Metadata:&PodSandboxMetadata{Name:etcd-multinode-736061,Uid:69d3e8c6e76d0bc1af3482326f7904d1,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726485150375026645,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69d3e8c6e76d0bc1af3482326f7904d1,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.32:2379,kubernetes.io/config.hash: 69d3e8c6e76d0bc1af3482326f7904d1,kubernetes.io/config.seen: 2024-09-16T11:12:29.869179636Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d9afb21537018b32c40d57ef0e3594c924f50d2bf3259377c7789677aadd1cc2,Metadat
a:&PodSandboxMetadata{Name:kube-scheduler-multinode-736061,Uid:de66983060c1e167c6b9498eb8b0a025,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726485150364201616,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de66983060c1e167c6b9498eb8b0a025,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: de66983060c1e167c6b9498eb8b0a025,kubernetes.io/config.seen: 2024-09-16T11:12:29.869185279Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:113acd43d732ed38df48b77f813934192a2a1cfab0f438e8905c93feade283e7,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-736061,Uid:94d3338940ee73a61a5075650d027904,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726485150350931200,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernete
s.pod.name: kube-controller-manager-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94d3338940ee73a61a5075650d027904,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 94d3338940ee73a61a5075650d027904,kubernetes.io/config.seen: 2024-09-16T11:12:29.869184368Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f4286a53710f269af6ca4f92b1992be516f9dedc66f945f30e7b0b95cdb2b85e,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-736061,Uid:efede0e1597c8cbe70740f3169f7ec4a,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726485150346479971,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efede0e1597c8cbe70740f3169f7ec4a,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.32:8443,kubernete
s.io/config.hash: efede0e1597c8cbe70740f3169f7ec4a,kubernetes.io/config.seen: 2024-09-16T11:12:29.869182998Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:779060032a611374116cc1e94df9c7e4a6c1443abec99f7abdef9464025a6169,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-g9fqk,Uid:0dd08783-fcfd-441f-8bda-c82c0c15173e,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726484825338061434,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-g9fqk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0dd08783-fcfd-441f-8bda-c82c0c15173e,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-16T11:07:04.426124339Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:01381d4d113d1f5aa2f6b3834d0194f5b4909599f84a8b29647fabd8f0fa5f7c,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:5e515ac2-bcf5-4a0f-a3af-b4ee4e03a534,Namespace:kube-system,Attempt:0,},S
tate:SANDBOX_NOTREADY,CreatedAt:1726484771573698857,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e515ac2-bcf5-4a0f-a3af-b4ee4e03a534,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\"
:\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-16T11:06:11.265431677Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:19286465f900afb5c6da745e2d4672e76d0bd87c33d561cb8eff8479eb72b5c2,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-nlhl2,Uid:6ea84b9d-f364-4e26-8dc8-44c3b4d92417,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726484771564797658,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-nlhl2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ea84b9d-f364-4e26-8dc8-44c3b4d92417,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-16T11:06:11.259014727Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:bd141ffff1a91e8b17d63ca2b8898889ab336bfde15ecdc79964aafaa1123465,Metadata:&PodSandboxMetadata{Name:kindnet-qb4tq,Uid:933f0749-7868-4e96-9b8e-67005545bbc5,Namespace:kube-syste
m,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726484759246903630,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-qb4tq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 933f0749-7868-4e96-9b8e-67005545bbc5,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-16T11:05:58.909685146Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:cc5264d1c4b520dfff86dfc83596919ff23f9336b58339ea2a10f4492ba02b12,Metadata:&PodSandboxMetadata{Name:kube-proxy-ftj9p,Uid:fa72720f-1c4a-46a2-a733-f411ccb6f628,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726484759243855837,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-ftj9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa72720f-1c4a-46a2-a733-f411ccb6f628,k8s-app: kube-
proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-16T11:05:58.903893377Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6237db42cfa9d23e63f16ff0ffa961fcbc1f9f25138e7c7fc11442d28aeecaff,Metadata:&PodSandboxMetadata{Name:etcd-multinode-736061,Uid:69d3e8c6e76d0bc1af3482326f7904d1,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726484748211950583,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69d3e8c6e76d0bc1af3482326f7904d1,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.32:2379,kubernetes.io/config.hash: 69d3e8c6e76d0bc1af3482326f7904d1,kubernetes.io/config.seen: 2024-09-16T11:05:47.723816592Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c1754b1d745471f13f360eb1605866caa0a1b248d76224212a5377
36d73a9f20,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-736061,Uid:94d3338940ee73a61a5075650d027904,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726484748209922555,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94d3338940ee73a61a5075650d027904,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 94d3338940ee73a61a5075650d027904,kubernetes.io/config.seen: 2024-09-16T11:05:47.723825749Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f771edf6fcef2eb32fd9b47472910c255ef2ee1f910f415da1ca1b8287fece1e,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-736061,Uid:de66983060c1e167c6b9498eb8b0a025,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726484748209264738,Labels:map[string]string{component: kube-scheduler,io.kubernetes
.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de66983060c1e167c6b9498eb8b0a025,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: de66983060c1e167c6b9498eb8b0a025,kubernetes.io/config.seen: 2024-09-16T11:05:47.723827022Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:06f23871be821ecc20f321f0e711ab8e47f7a0308e547d26ab3365eaa22b0b27,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-736061,Uid:efede0e1597c8cbe70740f3169f7ec4a,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726484748189792854,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efede0e1597c8cbe70740f3169f7ec4a,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 1
92.168.39.32:8443,kubernetes.io/config.hash: efede0e1597c8cbe70740f3169f7ec4a,kubernetes.io/config.seen: 2024-09-16T11:05:47.723824313Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=dc1430f8-0e5a-4f05-9842-ed32ecd8ccf0 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 16 11:16:34 multinode-736061 crio[2989]: time="2024-09-16 11:16:34.625436785Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b4ce886c-3bf1-4f76-bd88-382b9a2b219d name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:16:34 multinode-736061 crio[2989]: time="2024-09-16 11:16:34.625518113Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b4ce886c-3bf1-4f76-bd88-382b9a2b219d name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:16:34 multinode-736061 crio[2989]: time="2024-09-16 11:16:34.626087088Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:522d3b85a45483f18f7f58bd8b3e8d13be8e39fc300735f74f42f329fc76b41c,PodSandboxId:c27596adc976965a1472bc763ada489b272e6b61d2f34ac0643f3897280bfd19,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726485188158372438,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-g9fqk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0dd08783-fcfd-441f-8bda-c82c0c15173e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34160c655e5ab02a4a54bd049c7fbf32b5f1f714b64212932d9c2c16e2d4bf25,PodSandboxId:d6609b6804e2135dcae87f33c158bf648f918c21cfda5ee80259df9d474184ae,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726485154742693212,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qb4tq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 933f0749-7868-4e96-9b8e-67005545bbc5,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35a7839cd57d08487c669b6c86350b4a2f5f3b241a502106911bebe57fc8af36,PodSandboxId:78066c652dd8faaf3adaf02ffc316c8c72ab996c03a8aa8fe4284f1c3783c373,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726485154656393416,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nlhl2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ea84b9d-f364-4e26-8dc8-44c3b4d92417,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87a99d0015cbca7c7d38632c9cf9a1a9a00ce88cbe573c612f87fc9ca64ae309,PodSandboxId:b06a4343bbdd3a9bb94de977a2f0df301ae567da38dde74613b8372b9badaa2e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726485154505906722,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e515ac2-bcf5-4a0f-a3af-b4ee4e03a534,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d81e17eebccf4b84f735e1cafbb3e77e8e76abd8f4d62edf876aa5d55fd0b0d,PodSandboxId:fcfacdd69a46c5dab45cd0a3aa1fa9de4c957a839c412471e98012ee586cb38c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726485154436680778,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ftj9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa72720f-1c4a-46a2-a733-f411ccb6f628,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e7284c90c8c712246e8ddc247d88f7825f37f3e3c689acda276e3052c41850d,PodSandboxId:d9afb21537018b32c40d57ef0e3594c924f50d2bf3259377c7789677aadd1cc2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726485150640534512,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de66983060c1e167c6b9498eb8b0a025,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae1251600e6e87614f76ce47d26bb537d329e9c3726f16afd53b5917cabf6526,PodSandboxId:cd4168d0828d2e6cf60f35609bcafa8f36efd5728470fe9b5a5927f930be6967,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726485150608915249,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69d3e8c6e76d0bc1af3482326f7904d1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fa850b5495ffd4ec6303d58bd055944e4fd2d6bd860b6b70273e719497bdf2d,PodSandboxId:f4286a53710f269af6ca4f92b1992be516f9dedc66f945f30e7b0b95cdb2b85e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726485150554479561,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efede0e1597c8cbe70740f3169f7ec4a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:126fd7058d64d004c8ecb83905bc9e04f7dcd51ba92c467d2ec0175a0aedb8d4,PodSandboxId:113acd43d732ed38df48b77f813934192a2a1cfab0f438e8905c93feade283e7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726485150539003440,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94d3338940ee73a61a5075650d027904,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84517e6af45b4aaa0074acc2c9f529a29e3e476ef98b8a9b414655341b967c3b,PodSandboxId:779060032a611374116cc1e94df9c7e4a6c1443abec99f7abdef9464025a6169,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726484826321999428,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-g9fqk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0dd08783-fcfd-441f-8bda-c82c0c15173e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:840a587a0926e82c37a05beb15b8a2038534eaa736ce1cfed1c0eecaa13a5cdd,PodSandboxId:19286465f900afb5c6da745e2d4672e76d0bd87c33d561cb8eff8479eb72b5c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726484771766267901,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nlhl2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ea84b9d-f364-4e26-8dc8-44c3b4d92417,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02223ab1824986d0b8986be277df48e11a9e023715892ea637bf405f10af7198,PodSandboxId:01381d4d113d1f5aa2f6b3834d0194f5b4909599f84a8b29647fabd8f0fa5f7c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726484771695970386,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 5e515ac2-bcf5-4a0f-a3af-b4ee4e03a534,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a89ff755837adc838b2681df4fdb3f0cd2733af338156c540403f460fc59bc0,PodSandboxId:bd141ffff1a91e8b17d63ca2b8898889ab336bfde15ecdc79964aafaa1123465,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726484759715057078,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qb4tq,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 933f0749-7868-4e96-9b8e-67005545bbc5,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8c55edbe2173ff25f684104100b9d6d5b8d80563dadb06b4c24bb06b2bf68ee,PodSandboxId:cc5264d1c4b520dfff86dfc83596919ff23f9336b58339ea2a10f4492ba02b12,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726484759520373663,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ftj9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa72720f-1c4a-46a2-a733
-f411ccb6f628,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b76d5d4ad419a79a8aa910e3cea3029c12e6a90c7750317c5fbe1c40b43aa762,PodSandboxId:f771edf6fcef2eb32fd9b47472910c255ef2ee1f910f415da1ca1b8287fece1e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726484748620399557,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de66983060c1e167c6b9498eb8b0a025,}
,Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:769a75ad1934a20389b16c528015b4fd90be4bf92209ba2509e2cdf2e57e2a24,PodSandboxId:6237db42cfa9d23e63f16ff0ffa961fcbc1f9f25138e7c7fc11442d28aeecaff,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726484748618867302,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69d3e8c6e76d0bc1af3482326f7904d1,},Annotations:map[string]string{io.kubernetes.
container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d53f9aec7bc35845a9064135b7cf6154a190d2b15edab056cfd8296575fb25ba,PodSandboxId:c1754b1d745471f13f360eb1605866caa0a1b248d76224212a537736d73a9f20,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726484748609890980,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94d3338940ee73a61a5075650d027904,},Annotations:map[string]string{io
.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed73e9089f633c6b3f26009f325ee6cfdd62ef60de41cbb639ec4aa8b02b84a7,PodSandboxId:06f23871be821ecc20f321f0e711ab8e47f7a0308e547d26ab3365eaa22b0b27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726484748471628064,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efede0e1597c8cbe70740f3169f7ec4a,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b4ce886c-3bf1-4f76-bd88-382b9a2b219d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	522d3b85a4548       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   c27596adc9769       busybox-7dff88458-g9fqk
	34160c655e5ab       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      3 minutes ago       Running             kindnet-cni               1                   d6609b6804e21       kindnet-qb4tq
	35a7839cd57d0       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      4 minutes ago       Running             coredns                   1                   78066c652dd8f       coredns-7c65d6cfc9-nlhl2
	87a99d0015cbc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   b06a4343bbdd3       storage-provisioner
	2d81e17eebccf       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      4 minutes ago       Running             kube-proxy                1                   fcfacdd69a46c       kube-proxy-ftj9p
	2e7284c90c8c7       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      4 minutes ago       Running             kube-scheduler            1                   d9afb21537018       kube-scheduler-multinode-736061
	ae1251600e6e8       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      4 minutes ago       Running             etcd                      1                   cd4168d0828d2       etcd-multinode-736061
	8fa850b5495ff       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      4 minutes ago       Running             kube-apiserver            1                   f4286a53710f2       kube-apiserver-multinode-736061
	126fd7058d64d       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      4 minutes ago       Running             kube-controller-manager   1                   113acd43d732e       kube-controller-manager-multinode-736061
	84517e6af45b4       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   779060032a611       busybox-7dff88458-g9fqk
	840a587a0926e       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      10 minutes ago      Exited              coredns                   0                   19286465f900a       coredns-7c65d6cfc9-nlhl2
	02223ab182498       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   01381d4d113d1       storage-provisioner
	7a89ff755837a       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      10 minutes ago      Exited              kindnet-cni               0                   bd141ffff1a91       kindnet-qb4tq
	f8c55edbe2173       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      10 minutes ago      Exited              kube-proxy                0                   cc5264d1c4b52       kube-proxy-ftj9p
	b76d5d4ad419a       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      10 minutes ago      Exited              kube-scheduler            0                   f771edf6fcef2       kube-scheduler-multinode-736061
	769a75ad1934a       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      10 minutes ago      Exited              etcd                      0                   6237db42cfa9d       etcd-multinode-736061
	d53f9aec7bc35       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      10 minutes ago      Exited              kube-controller-manager   0                   c1754b1d74547       kube-controller-manager-multinode-736061
	ed73e9089f633       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      10 minutes ago      Exited              kube-apiserver            0                   06f23871be821       kube-apiserver-multinode-736061
	
	
	==> coredns [35a7839cd57d08487c669b6c86350b4a2f5f3b241a502106911bebe57fc8af36] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:40656 - 6477 "HINFO IN 2586289926805624417.1154026984614338138. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.028767921s
	
	
	==> coredns [840a587a0926e82c37a05beb15b8a2038534eaa736ce1cfed1c0eecaa13a5cdd] <==
	[INFO] 10.244.0.3:48472 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001859185s
	[INFO] 10.244.0.3:58999 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000160969s
	[INFO] 10.244.0.3:35408 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00007258s
	[INFO] 10.244.0.3:41914 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001221958s
	[INFO] 10.244.0.3:51441 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000075035s
	[INFO] 10.244.0.3:54367 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000064081s
	[INFO] 10.244.0.3:51073 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000061874s
	[INFO] 10.244.1.2:38827 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130826s
	[INFO] 10.244.1.2:49788 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142283s
	[INFO] 10.244.1.2:43407 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000083078s
	[INFO] 10.244.1.2:35506 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000123825s
	[INFO] 10.244.0.3:35311 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00008958s
	[INFO] 10.244.0.3:44801 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000055108s
	[INFO] 10.244.0.3:45405 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000039898s
	[INFO] 10.244.0.3:53790 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000037364s
	[INFO] 10.244.1.2:44863 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136337s
	[INFO] 10.244.1.2:38345 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000494388s
	[INFO] 10.244.1.2:36190 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000247796s
	[INFO] 10.244.1.2:38755 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000120111s
	[INFO] 10.244.0.3:58238 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129373s
	[INFO] 10.244.0.3:55519 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000102337s
	[INFO] 10.244.0.3:60945 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000061359s
	[INFO] 10.244.0.3:52747 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00010905s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-736061
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-736061
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=multinode-736061
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T11_05_54_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 11:05:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-736061
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 11:16:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 11:12:33 +0000   Mon, 16 Sep 2024 11:05:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 11:12:33 +0000   Mon, 16 Sep 2024 11:05:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 11:12:33 +0000   Mon, 16 Sep 2024 11:05:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 11:12:33 +0000   Mon, 16 Sep 2024 11:06:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.32
	  Hostname:    multinode-736061
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 60fe80618d4f42e281d4c50393e9d89e
	  System UUID:                60fe8061-8d4f-42e2-81d4-c50393e9d89e
	  Boot ID:                    d046d280-229f-4e9a-8a6c-1986374da911
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-g9fqk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m30s
	  kube-system                 coredns-7c65d6cfc9-nlhl2                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-736061                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-qb4tq                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-736061             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-736061    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-ftj9p                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-736061             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 10m                  kube-proxy       
	  Normal  Starting                 3m59s                kube-proxy       
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)    kubelet          Node multinode-736061 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)    kubelet          Node multinode-736061 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)    kubelet          Node multinode-736061 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                  kubelet          Node multinode-736061 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  10m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    10m                  kubelet          Node multinode-736061 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m                  kubelet          Node multinode-736061 status is now: NodeHasSufficientPID
	  Normal  Starting                 10m                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                  node-controller  Node multinode-736061 event: Registered Node multinode-736061 in Controller
	  Normal  NodeReady                10m                  kubelet          Node multinode-736061 status is now: NodeReady
	  Normal  Starting                 4m5s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m5s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m4s (x8 over 4m5s)  kubelet          Node multinode-736061 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m4s (x8 over 4m5s)  kubelet          Node multinode-736061 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m4s (x7 over 4m5s)  kubelet          Node multinode-736061 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m58s                node-controller  Node multinode-736061 event: Registered Node multinode-736061 in Controller
	
	
	Name:               multinode-736061-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-736061-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=multinode-736061
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T11_13_11_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 11:13:10 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-736061-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 11:14:11 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 16 Sep 2024 11:13:40 +0000   Mon, 16 Sep 2024 11:14:51 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 16 Sep 2024 11:13:40 +0000   Mon, 16 Sep 2024 11:14:51 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 16 Sep 2024 11:13:40 +0000   Mon, 16 Sep 2024 11:14:51 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 16 Sep 2024 11:13:40 +0000   Mon, 16 Sep 2024 11:14:51 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.215
	  Hostname:    multinode-736061-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d4fe337504134150bccd557919449b29
	  System UUID:                d4fe3375-0413-4150-bccd-557919449b29
	  Boot ID:                    d98e6a6c-e943-4dd6-9c7a-051fe2e4235b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-7dvrx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m28s
	  kube-system                 kindnet-xlrxb              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m52s
	  kube-system                 kube-proxy-8h6jp           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m19s                  kube-proxy       
	  Normal  Starting                 9m46s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m52s (x2 over 9m52s)  kubelet          Node multinode-736061-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m52s (x2 over 9m52s)  kubelet          Node multinode-736061-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m52s (x2 over 9m52s)  kubelet          Node multinode-736061-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m33s                  kubelet          Node multinode-736061-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m24s (x2 over 3m24s)  kubelet          Node multinode-736061-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m24s (x2 over 3m24s)  kubelet          Node multinode-736061-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m24s (x2 over 3m24s)  kubelet          Node multinode-736061-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m6s                   kubelet          Node multinode-736061-m02 status is now: NodeReady
	  Normal  NodeNotReady             103s                   node-controller  Node multinode-736061-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.065798] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064029] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.188943] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.125437] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.281577] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +3.899790] systemd-fstab-generator[747]: Ignoring "noauto" option for root device
	[  +3.897000] systemd-fstab-generator[878]: Ignoring "noauto" option for root device
	[  +0.059824] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.997335] systemd-fstab-generator[1219]: Ignoring "noauto" option for root device
	[  +0.078309] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.139976] systemd-fstab-generator[1321]: Ignoring "noauto" option for root device
	[  +0.076513] kauditd_printk_skb: 18 callbacks suppressed
	[Sep16 11:06] kauditd_printk_skb: 69 callbacks suppressed
	[Sep16 11:07] kauditd_printk_skb: 12 callbacks suppressed
	[Sep16 11:12] systemd-fstab-generator[2913]: Ignoring "noauto" option for root device
	[  +0.148062] systemd-fstab-generator[2925]: Ignoring "noauto" option for root device
	[  +0.171344] systemd-fstab-generator[2940]: Ignoring "noauto" option for root device
	[  +0.138643] systemd-fstab-generator[2952]: Ignoring "noauto" option for root device
	[  +0.279343] systemd-fstab-generator[2980]: Ignoring "noauto" option for root device
	[  +0.718595] systemd-fstab-generator[3070]: Ignoring "noauto" option for root device
	[  +2.178122] systemd-fstab-generator[3193]: Ignoring "noauto" option for root device
	[  +4.699068] kauditd_printk_skb: 184 callbacks suppressed
	[ +11.680556] systemd-fstab-generator[4044]: Ignoring "noauto" option for root device
	[  +0.106179] kauditd_printk_skb: 34 callbacks suppressed
	[Sep16 11:13] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [769a75ad1934a20389b16c528015b4fd90be4bf92209ba2509e2cdf2e57e2a24] <==
	{"level":"info","ts":"2024-09-16T11:05:49.392766Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:05:49.393463Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T11:06:03.777149Z","caller":"traceutil/trace.go:171","msg":"trace[927915415] transaction","detail":"{read_only:false; response_revision:407; number_of_response:1; }","duration":"125.996547ms","start":"2024-09-16T11:06:03.651108Z","end":"2024-09-16T11:06:03.777104Z","steps":["trace[927915415] 'process raft request'  (duration: 125.663993ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T11:06:42.434928Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"155.290318ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7316539574759162275 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-736061-m02.17f5b4c7bf86ac19\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-736061-m02.17f5b4c7bf86ac19\" value_size:642 lease:7316539574759161296 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-09-16T11:06:42.435173Z","caller":"traceutil/trace.go:171","msg":"trace[736335181] transaction","detail":"{read_only:false; response_revision:467; number_of_response:1; }","duration":"242.745028ms","start":"2024-09-16T11:06:42.192402Z","end":"2024-09-16T11:06:42.435147Z","steps":["trace[736335181] 'process raft request'  (duration: 86.752839ms)","trace[736335181] 'compare'  (duration: 155.030741ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-16T11:06:42.435488Z","caller":"traceutil/trace.go:171","msg":"trace[1491776336] transaction","detail":"{read_only:false; response_revision:468; number_of_response:1; }","duration":"164.53116ms","start":"2024-09-16T11:06:42.270945Z","end":"2024-09-16T11:06:42.435476Z","steps":["trace[1491776336] 'process raft request'  (duration: 164.128437ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T11:07:36.191017Z","caller":"traceutil/trace.go:171","msg":"trace[1370350330] linearizableReadLoop","detail":"{readStateIndex:632; appliedIndex:631; }","duration":"135.211812ms","start":"2024-09-16T11:07:36.055773Z","end":"2024-09-16T11:07:36.190985Z","steps":["trace[1370350330] 'read index received'  (duration: 127.332155ms)","trace[1370350330] 'applied index is now lower than readState.Index'  (duration: 7.878564ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-16T11:07:36.191190Z","caller":"traceutil/trace.go:171","msg":"trace[1606896706] transaction","detail":"{read_only:false; response_revision:598; number_of_response:1; }","duration":"230.440734ms","start":"2024-09-16T11:07:35.960732Z","end":"2024-09-16T11:07:36.191172Z","steps":["trace[1606896706] 'process raft request'  (duration: 222.394697ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T11:07:36.191504Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"135.712787ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-736061-m03\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T11:07:36.191575Z","caller":"traceutil/trace.go:171","msg":"trace[641878152] range","detail":"{range_begin:/registry/minions/multinode-736061-m03; range_end:; response_count:0; response_revision:598; }","duration":"135.807158ms","start":"2024-09-16T11:07:36.055751Z","end":"2024-09-16T11:07:36.191558Z","steps":["trace[641878152] 'agreement among raft nodes before linearized reading'  (duration: 135.656463ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T11:07:43.320131Z","caller":"traceutil/trace.go:171","msg":"trace[1026367264] linearizableReadLoop","detail":"{readStateIndex:678; appliedIndex:677; }","duration":"256.510329ms","start":"2024-09-16T11:07:43.063604Z","end":"2024-09-16T11:07:43.320115Z","steps":["trace[1026367264] 'read index received'  (duration: 208.747621ms)","trace[1026367264] 'applied index is now lower than readState.Index'  (duration: 47.76201ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-16T11:07:43.320580Z","caller":"traceutil/trace.go:171","msg":"trace[845413732] transaction","detail":"{read_only:false; response_revision:640; number_of_response:1; }","duration":"283.063625ms","start":"2024-09-16T11:07:43.037497Z","end":"2024-09-16T11:07:43.320560Z","steps":["trace[845413732] 'process raft request'  (duration: 234.904981ms)","trace[845413732] 'compare'  (duration: 47.473062ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-16T11:07:43.320947Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"257.339861ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-736061-m03\" ","response":"range_response_count:1 size:2893"}
	{"level":"info","ts":"2024-09-16T11:07:43.321022Z","caller":"traceutil/trace.go:171","msg":"trace[1372162398] range","detail":"{range_begin:/registry/minions/multinode-736061-m03; range_end:; response_count:1; response_revision:640; }","duration":"257.429414ms","start":"2024-09-16T11:07:43.063585Z","end":"2024-09-16T11:07:43.321014Z","steps":["trace[1372162398] 'agreement among raft nodes before linearized reading'  (duration: 257.097073ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T11:08:32.848686Z","caller":"traceutil/trace.go:171","msg":"trace[1433849770] transaction","detail":"{read_only:false; response_revision:728; number_of_response:1; }","duration":"176.13666ms","start":"2024-09-16T11:08:32.672526Z","end":"2024-09-16T11:08:32.848663Z","steps":["trace[1433849770] 'process raft request'  (duration: 175.720453ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T11:10:54.687328Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-16T11:10:54.687457Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-736061","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.32:2380"],"advertise-client-urls":["https://192.168.39.32:2379"]}
	{"level":"warn","ts":"2024-09-16T11:10:54.687629Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.32:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T11:10:54.687676Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.32:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T11:10:54.689450Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T11:10:54.689531Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-16T11:10:54.770633Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"d4c05646b7156589","current-leader-member-id":"d4c05646b7156589"}
	{"level":"info","ts":"2024-09-16T11:10:54.773137Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.32:2380"}
	{"level":"info","ts":"2024-09-16T11:10:54.773277Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.32:2380"}
	{"level":"info","ts":"2024-09-16T11:10:54.773343Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-736061","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.32:2380"],"advertise-client-urls":["https://192.168.39.32:2379"]}
	
	
	==> etcd [ae1251600e6e87614f76ce47d26bb537d329e9c3726f16afd53b5917cabf6526] <==
	{"level":"info","ts":"2024-09-16T11:12:31.076410Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68bdcbcbc4b793bb","local-member-id":"d4c05646b7156589","added-peer-id":"d4c05646b7156589","added-peer-peer-urls":["https://192.168.39.32:2380"]}
	{"level":"info","ts":"2024-09-16T11:12:31.076610Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68bdcbcbc4b793bb","local-member-id":"d4c05646b7156589","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:12:31.076674Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:12:31.083484Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:12:31.096736Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T11:12:31.097022Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"d4c05646b7156589","initial-advertise-peer-urls":["https://192.168.39.32:2380"],"listen-peer-urls":["https://192.168.39.32:2380"],"advertise-client-urls":["https://192.168.39.32:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.32:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T11:12:31.097067Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T11:12:31.097111Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.32:2380"}
	{"level":"info","ts":"2024-09-16T11:12:31.097134Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.32:2380"}
	{"level":"info","ts":"2024-09-16T11:12:32.130362Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c05646b7156589 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-16T11:12:32.130461Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c05646b7156589 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-16T11:12:32.130485Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c05646b7156589 received MsgPreVoteResp from d4c05646b7156589 at term 2"}
	{"level":"info","ts":"2024-09-16T11:12:32.130501Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c05646b7156589 became candidate at term 3"}
	{"level":"info","ts":"2024-09-16T11:12:32.130507Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c05646b7156589 received MsgVoteResp from d4c05646b7156589 at term 3"}
	{"level":"info","ts":"2024-09-16T11:12:32.130515Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c05646b7156589 became leader at term 3"}
	{"level":"info","ts":"2024-09-16T11:12:32.130532Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d4c05646b7156589 elected leader d4c05646b7156589 at term 3"}
	{"level":"info","ts":"2024-09-16T11:12:32.136512Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"d4c05646b7156589","local-member-attributes":"{Name:multinode-736061 ClientURLs:[https://192.168.39.32:2379]}","request-path":"/0/members/d4c05646b7156589/attributes","cluster-id":"68bdcbcbc4b793bb","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T11:12:32.136525Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T11:12:32.136756Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T11:12:32.137155Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T11:12:32.137197Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T11:12:32.137926Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:12:32.137926Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:12:32.138897Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.32:2379"}
	{"level":"info","ts":"2024-09-16T11:12:32.139181Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 11:16:35 up 11 min,  0 users,  load average: 0.06, 0.31, 0.21
	Linux multinode-736061 5.10.207 #1 SMP Sun Sep 15 20:39:46 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [34160c655e5ab02a4a54bd049c7fbf32b5f1f714b64212932d9c2c16e2d4bf25] <==
	I0916 11:15:25.689118       1 main.go:299] handling current node
	I0916 11:15:35.682410       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0916 11:15:35.682617       1 main.go:299] handling current node
	I0916 11:15:35.682680       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0916 11:15:35.682705       1 main.go:322] Node multinode-736061-m02 has CIDR [10.244.1.0/24] 
	I0916 11:15:45.681812       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0916 11:15:45.682040       1 main.go:322] Node multinode-736061-m02 has CIDR [10.244.1.0/24] 
	I0916 11:15:45.682467       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0916 11:15:45.682508       1 main.go:299] handling current node
	I0916 11:15:55.685120       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0916 11:15:55.685391       1 main.go:299] handling current node
	I0916 11:15:55.685470       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0916 11:15:55.685513       1 main.go:322] Node multinode-736061-m02 has CIDR [10.244.1.0/24] 
	I0916 11:16:05.685267       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0916 11:16:05.685392       1 main.go:322] Node multinode-736061-m02 has CIDR [10.244.1.0/24] 
	I0916 11:16:05.685550       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0916 11:16:05.685583       1 main.go:299] handling current node
	I0916 11:16:15.690122       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0916 11:16:15.690152       1 main.go:299] handling current node
	I0916 11:16:15.690165       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0916 11:16:15.690169       1 main.go:322] Node multinode-736061-m02 has CIDR [10.244.1.0/24] 
	I0916 11:16:25.689644       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0916 11:16:25.689765       1 main.go:299] handling current node
	I0916 11:16:25.689793       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0916 11:16:25.689811       1 main.go:322] Node multinode-736061-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [7a89ff755837adc838b2681df4fdb3f0cd2733af338156c540403f460fc59bc0] <==
	I0916 11:10:10.885622       1 main.go:322] Node multinode-736061-m03 has CIDR [10.244.4.0/24] 
	I0916 11:10:20.882088       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0916 11:10:20.882177       1 main.go:322] Node multinode-736061-m02 has CIDR [10.244.1.0/24] 
	I0916 11:10:20.882351       1 main.go:295] Handling node with IPs: map[192.168.39.60:{}]
	I0916 11:10:20.882379       1 main.go:322] Node multinode-736061-m03 has CIDR [10.244.4.0/24] 
	I0916 11:10:20.882438       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0916 11:10:20.882445       1 main.go:299] handling current node
	I0916 11:10:30.882343       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0916 11:10:30.882485       1 main.go:299] handling current node
	I0916 11:10:30.882519       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0916 11:10:30.882538       1 main.go:322] Node multinode-736061-m02 has CIDR [10.244.1.0/24] 
	I0916 11:10:30.882705       1 main.go:295] Handling node with IPs: map[192.168.39.60:{}]
	I0916 11:10:30.882730       1 main.go:322] Node multinode-736061-m03 has CIDR [10.244.4.0/24] 
	I0916 11:10:40.881843       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0916 11:10:40.881966       1 main.go:299] handling current node
	I0916 11:10:40.881993       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0916 11:10:40.882011       1 main.go:322] Node multinode-736061-m02 has CIDR [10.244.1.0/24] 
	I0916 11:10:40.882162       1 main.go:295] Handling node with IPs: map[192.168.39.60:{}]
	I0916 11:10:40.882241       1 main.go:322] Node multinode-736061-m03 has CIDR [10.244.4.0/24] 
	I0916 11:10:50.885456       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0916 11:10:50.885505       1 main.go:299] handling current node
	I0916 11:10:50.885524       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0916 11:10:50.885530       1 main.go:322] Node multinode-736061-m02 has CIDR [10.244.1.0/24] 
	I0916 11:10:50.885705       1 main.go:295] Handling node with IPs: map[192.168.39.60:{}]
	I0916 11:10:50.885712       1 main.go:322] Node multinode-736061-m03 has CIDR [10.244.4.0/24] 
	
	
	==> kube-apiserver [8fa850b5495ffd4ec6303d58bd055944e4fd2d6bd860b6b70273e719497bdf2d] <==
	I0916 11:12:33.498192       1 shared_informer.go:320] Caches are synced for configmaps
	I0916 11:12:33.501874       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0916 11:12:33.508959       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0916 11:12:33.509043       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0916 11:12:33.509776       1 aggregator.go:171] initial CRD sync complete...
	I0916 11:12:33.509828       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 11:12:33.509857       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 11:12:33.546526       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0916 11:12:33.568509       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0916 11:12:33.568599       1 policy_source.go:224] refreshing policies
	I0916 11:12:33.589155       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0916 11:12:33.590889       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0916 11:12:33.590927       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0916 11:12:33.591376       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0916 11:12:33.596733       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0916 11:12:33.620595       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 11:12:33.621748       1 cache.go:39] Caches are synced for autoregister controller
	I0916 11:12:34.423228       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0916 11:12:35.891543       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 11:12:36.022725       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 11:12:36.049167       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 11:12:36.129506       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 11:12:36.139653       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 11:12:37.024276       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 11:12:37.124173       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [ed73e9089f633c6b3f26009f325ee6cfdd62ef60de41cbb639ec4aa8b02b84a7] <==
	W0916 11:10:54.717805       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0916 11:10:54.721617       1 controller.go:131] Unable to remove endpoints from kubernetes service: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	I0916 11:10:54.721803       1 dynamic_cafile_content.go:174] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	W0916 11:10:54.722189       1 logging.go:55] [core] [Channel #8 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I0916 11:10:54.722608       1 storage_flowcontrol.go:186] APF bootstrap ensurer is exiting
	I0916 11:10:54.722692       1 cluster_authentication_trust_controller.go:466] Shutting down cluster_authentication_trust_controller controller
	I0916 11:10:54.722807       1 autoregister_controller.go:168] Shutting down autoregister controller
	I0916 11:10:54.722839       1 apiservice_controller.go:134] Shutting down APIServiceRegistrationController
	I0916 11:10:54.722854       1 crdregistration_controller.go:145] Shutting down crd-autoregister controller
	I0916 11:10:54.722888       1 apiapproval_controller.go:201] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
	I0916 11:10:54.722907       1 nonstructuralschema_controller.go:207] Shutting down NonStructuralSchemaConditionController
	I0916 11:10:54.722935       1 establishing_controller.go:92] Shutting down EstablishingController
	I0916 11:10:54.722948       1 naming_controller.go:305] Shutting down NamingConditionController
	I0916 11:10:54.722980       1 controller.go:120] Shutting down OpenAPI V3 controller
	I0916 11:10:54.722994       1 controller.go:170] Shutting down OpenAPI controller
	I0916 11:10:54.723024       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I0916 11:10:54.723033       1 remote_available_controller.go:427] Shutting down RemoteAvailability controller
	I0916 11:10:54.723049       1 customresource_discovery_controller.go:328] Shutting down DiscoveryController
	I0916 11:10:54.723078       1 apf_controller.go:389] Shutting down API Priority and Fairness config worker
	I0916 11:10:54.723096       1 local_available_controller.go:172] Shutting down LocalAvailability controller
	I0916 11:10:54.723124       1 controller.go:132] Ending legacy_token_tracking_controller
	I0916 11:10:54.723131       1 controller.go:133] Shutting down legacy_token_tracking_controller
	I0916 11:10:54.723263       1 crd_finalizer.go:281] Shutting down CRDFinalizer
	I0916 11:10:54.723385       1 system_namespaces_controller.go:76] Shutting down system namespaces controller
	I0916 11:10:54.723607       1 dynamic_cafile_content.go:174] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	
	
	==> kube-controller-manager [126fd7058d64d004c8ecb83905bc9e04f7dcd51ba92c467d2ec0175a0aedb8d4] <==
	E0916 11:13:47.943787       1 range_allocator.go:433] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"multinode-736061-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.3.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-736061-m03"
	E0916 11:13:47.943838       1 range_allocator.go:246] "Unhandled Error" err="error syncing 'multinode-736061-m03': failed to patch node CIDR: Node \"multinode-736061-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.3.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I0916 11:13:47.943877       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:13:47.949840       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:13:47.952982       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:13:48.292993       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:13:51.924112       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:13:58.208795       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:14:06.228519       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:14:06.228610       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-736061-m02"
	I0916 11:14:06.246940       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:14:06.870268       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:14:10.875842       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:14:10.892575       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:14:11.443344       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-736061-m02"
	I0916 11:14:11.443755       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:14:51.890757       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m02"
	I0916 11:14:51.912413       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m02"
	I0916 11:14:51.920581       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="17.988064ms"
	I0916 11:14:51.920660       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="37.285µs"
	I0916 11:14:57.052188       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m02"
	I0916 11:15:16.791204       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-bvqrg"
	I0916 11:15:16.816034       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-bvqrg"
	I0916 11:15:16.816158       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-5hctk"
	I0916 11:15:16.838568       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-5hctk"
	
	
	==> kube-controller-manager [d53f9aec7bc35845a9064135b7cf6154a190d2b15edab056cfd8296575fb25ba] <==
	I0916 11:08:27.068836       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:08:27.299944       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-736061-m02"
	I0916 11:08:27.299986       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:08:28.498604       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-736061-m03\" does not exist"
	I0916 11:08:28.499795       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-736061-m02"
	I0916 11:08:28.530214       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-736061-m03" podCIDRs=["10.244.4.0/24"]
	I0916 11:08:28.530257       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:08:28.530321       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:08:28.812678       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:08:29.131881       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:08:33.111007       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:08:38.696548       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:08:47.199430       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:08:47.199515       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-736061-m02"
	I0916 11:08:47.211278       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:08:48.081832       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:09:28.097328       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m02"
	I0916 11:09:28.097948       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-736061-m03"
	I0916 11:09:28.128518       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m02"
	I0916 11:09:28.176986       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="12.051461ms"
	I0916 11:09:28.177686       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="101.301µs"
	I0916 11:09:33.174860       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:09:33.196257       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m02"
	I0916 11:09:33.196479       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:09:43.270263       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	
	
	==> kube-proxy [2d81e17eebccf4b84f735e1cafbb3e77e8e76abd8f4d62edf876aa5d55fd0b0d] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0916 11:12:34.892799       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0916 11:12:34.920138       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.32"]
	E0916 11:12:34.920279       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 11:12:34.987651       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0916 11:12:34.987713       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0916 11:12:34.987739       1 server_linux.go:169] "Using iptables Proxier"
	I0916 11:12:34.996924       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 11:12:34.997221       1 server.go:483] "Version info" version="v1.31.1"
	I0916 11:12:34.997234       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 11:12:35.007220       1 config.go:199] "Starting service config controller"
	I0916 11:12:35.029098       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 11:12:35.025409       1 config.go:105] "Starting endpoint slice config controller"
	I0916 11:12:35.029156       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 11:12:35.029162       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 11:12:35.026457       1 config.go:328] "Starting node config controller"
	I0916 11:12:35.029234       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 11:12:35.130341       1 shared_informer.go:320] Caches are synced for node config
	I0916 11:12:35.130407       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [f8c55edbe2173ff25f684104100b9d6d5b8d80563dadb06b4c24bb06b2bf68ee] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0916 11:05:59.852422       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0916 11:05:59.886836       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.32"]
	E0916 11:05:59.886976       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 11:05:59.944125       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0916 11:05:59.944160       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0916 11:05:59.944181       1 server_linux.go:169] "Using iptables Proxier"
	I0916 11:05:59.947733       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 11:05:59.948149       1 server.go:483] "Version info" version="v1.31.1"
	I0916 11:05:59.948393       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 11:05:59.949794       1 config.go:199] "Starting service config controller"
	I0916 11:05:59.949862       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 11:05:59.950230       1 config.go:105] "Starting endpoint slice config controller"
	I0916 11:05:59.950374       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 11:05:59.950923       1 config.go:328] "Starting node config controller"
	I0916 11:05:59.952219       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 11:06:00.050768       1 shared_informer.go:320] Caches are synced for service config
	I0916 11:06:00.050862       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 11:06:00.052567       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [2e7284c90c8c712246e8ddc247d88f7825f37f3e3c689acda276e3052c41850d] <==
	I0916 11:12:31.748594       1 serving.go:386] Generated self-signed cert in-memory
	W0916 11:12:33.440575       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0916 11:12:33.440623       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0916 11:12:33.440633       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 11:12:33.440641       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 11:12:33.526991       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0916 11:12:33.527040       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 11:12:33.536502       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0916 11:12:33.536670       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 11:12:33.540976       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0916 11:12:33.544844       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0916 11:12:33.638485       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [b76d5d4ad419a79a8aa910e3cea3029c12e6a90c7750317c5fbe1c40b43aa762] <==
	E0916 11:05:52.226438       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:05:52.286013       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 11:05:52.286065       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:05:52.292630       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 11:05:52.292712       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:05:52.303069       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 11:05:52.303177       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:05:52.308000       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 11:05:52.308078       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0916 11:05:52.326647       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 11:05:52.326746       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:05:52.367616       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 11:05:52.367800       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 11:05:52.407350       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 11:05:52.407398       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:05:52.423030       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 11:05:52.423081       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 11:05:52.501395       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 11:05:52.501587       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:05:52.597443       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 11:05:52.597573       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:05:52.652519       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 11:05:52.652625       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0916 11:05:55.090829       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0916 11:10:54.693272       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 16 11:15:20 multinode-736061 kubelet[3200]: E0916 11:15:20.001936    3200 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485320001495552,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:15:29 multinode-736061 kubelet[3200]: E0916 11:15:29.922086    3200 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 16 11:15:29 multinode-736061 kubelet[3200]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 16 11:15:29 multinode-736061 kubelet[3200]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 16 11:15:29 multinode-736061 kubelet[3200]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 16 11:15:29 multinode-736061 kubelet[3200]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 16 11:15:30 multinode-736061 kubelet[3200]: E0916 11:15:30.004233    3200 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485330003639144,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:15:30 multinode-736061 kubelet[3200]: E0916 11:15:30.004272    3200 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485330003639144,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:15:40 multinode-736061 kubelet[3200]: E0916 11:15:40.006668    3200 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485340005819540,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:15:40 multinode-736061 kubelet[3200]: E0916 11:15:40.006743    3200 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485340005819540,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:15:50 multinode-736061 kubelet[3200]: E0916 11:15:50.009065    3200 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485350008127369,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:15:50 multinode-736061 kubelet[3200]: E0916 11:15:50.009092    3200 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485350008127369,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:16:00 multinode-736061 kubelet[3200]: E0916 11:16:00.010850    3200 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485360010342798,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:16:00 multinode-736061 kubelet[3200]: E0916 11:16:00.011466    3200 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485360010342798,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:16:10 multinode-736061 kubelet[3200]: E0916 11:16:10.013720    3200 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485370013220592,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:16:10 multinode-736061 kubelet[3200]: E0916 11:16:10.013849    3200 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485370013220592,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:16:20 multinode-736061 kubelet[3200]: E0916 11:16:20.016279    3200 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485380015545080,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:16:20 multinode-736061 kubelet[3200]: E0916 11:16:20.017187    3200 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485380015545080,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:16:29 multinode-736061 kubelet[3200]: E0916 11:16:29.923109    3200 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 16 11:16:29 multinode-736061 kubelet[3200]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 16 11:16:29 multinode-736061 kubelet[3200]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 16 11:16:29 multinode-736061 kubelet[3200]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 16 11:16:29 multinode-736061 kubelet[3200]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 16 11:16:30 multinode-736061 kubelet[3200]: E0916 11:16:30.020739    3200 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485390020049276,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:16:30 multinode-736061 kubelet[3200]: E0916 11:16:30.020802    3200 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485390020049276,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0916 11:16:34.179215   42090 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19651-3851/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
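The `bufio.Scanner: token too long` error in the stderr block above comes from Go's bufio.Scanner, whose default per-token limit is 64 KiB; lastStart.txt evidently contains a longer line. A minimal sketch of the usual workaround (a stand-in file path, hypothetical and not the harness's actual code) is to give the scanner a larger buffer before scanning:

    package main

    import (
    	"bufio"
    	"fmt"
    	"log"
    	"os"
    )

    func main() {
    	// Hypothetical stand-in for .minikube/logs/lastStart.txt from the error above.
    	f, err := os.Open("lastStart.txt")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer f.Close()

    	sc := bufio.NewScanner(f)
    	// Default max token size is bufio.MaxScanTokenSize (64 KiB);
    	// allow lines up to 10 MiB before "token too long" is reported.
    	sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)

    	for sc.Scan() {
    		fmt.Println(sc.Text())
    	}
    	if err := sc.Err(); err != nil {
    		log.Fatal(err)
    	}
    }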
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-736061 -n multinode-736061
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-736061 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context multinode-736061 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (528.296µs)
helpers_test.go:263: kubectl --context multinode-736061 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestMultiNode/serial/StopMultiNode (141.42s)
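The repeated `fork/exec /usr/local/bin/kubectl: exec format error` in this and the other failed tests is the error Linux returns when an executable's format does not match the host, most commonly a binary built for a different CPU architecture. A hedged diagnostic sketch, not part of the test suite, that compares a binary's ELF machine type with the architecture the program is running on:

    package main

    import (
    	"debug/elf"
    	"fmt"
    	"log"
    	"runtime"
    )

    func main() {
    	const path = "/usr/local/bin/kubectl" // path taken from the failure message

    	f, err := elf.Open(path)
    	if err != nil {
    		// A non-ELF or truncated file also fails exec with "exec format error".
    		log.Fatalf("not a readable ELF binary: %v", err)
    	}
    	defer f.Close()

    	// Expected ELF machine constant for the architecture this program runs on.
    	want := map[string]elf.Machine{
    		"amd64": elf.EM_X86_64,
    		"arm64": elf.EM_AARCH64,
    		"386":   elf.EM_386,
    	}[runtime.GOARCH]

    	fmt.Printf("binary machine=%v, host arch=%s\n", f.Machine, runtime.GOARCH)
    	if f.Machine != want {
    		fmt.Println("architecture mismatch: exec would fail with 'exec format error'")
    	}
    }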

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (188.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-736061 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-736061 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m5.82277074s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736061 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:396: (dbg) Non-zero exit: kubectl get nodes: fork/exec /usr/local/bin/kubectl: exec format error (461.157µs)
multinode_test.go:398: failed to run kubectl get nodes. args "kubectl get nodes" : fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-736061 -n multinode-736061
helpers_test.go:244: <<< TestMultiNode/serial/RestartMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736061 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-736061 logs -n 25: (1.480911955s)
helpers_test.go:252: TestMultiNode/serial/RestartMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| cp      | multinode-736061 cp multinode-736061-m02:/home/docker/cp-test.txt                       | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | multinode-736061:/home/docker/cp-test_multinode-736061-m02_multinode-736061.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-736061 ssh -n                                                                 | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | multinode-736061-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-736061 ssh -n multinode-736061 sudo cat                                       | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | /home/docker/cp-test_multinode-736061-m02_multinode-736061.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-736061 cp multinode-736061-m02:/home/docker/cp-test.txt                       | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | multinode-736061-m03:/home/docker/cp-test_multinode-736061-m02_multinode-736061-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-736061 ssh -n                                                                 | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | multinode-736061-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-736061 ssh -n multinode-736061-m03 sudo cat                                   | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | /home/docker/cp-test_multinode-736061-m02_multinode-736061-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-736061 cp testdata/cp-test.txt                                                | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | multinode-736061-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-736061 ssh -n                                                                 | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | multinode-736061-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-736061 cp multinode-736061-m03:/home/docker/cp-test.txt                       | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1886615299/001/cp-test_multinode-736061-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-736061 ssh -n                                                                 | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | multinode-736061-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-736061 cp multinode-736061-m03:/home/docker/cp-test.txt                       | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | multinode-736061:/home/docker/cp-test_multinode-736061-m03_multinode-736061.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-736061 ssh -n                                                                 | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | multinode-736061-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-736061 ssh -n multinode-736061 sudo cat                                       | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | /home/docker/cp-test_multinode-736061-m03_multinode-736061.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-736061 cp multinode-736061-m03:/home/docker/cp-test.txt                       | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | multinode-736061-m02:/home/docker/cp-test_multinode-736061-m03_multinode-736061-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-736061 ssh -n                                                                 | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | multinode-736061-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-736061 ssh -n multinode-736061-m02 sudo cat                                   | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | /home/docker/cp-test_multinode-736061-m03_multinode-736061-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-736061 node stop m03                                                          | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	| node    | multinode-736061 node start                                                             | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-736061                                                                | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC |                     |
	| stop    | -p multinode-736061                                                                     | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC |                     |
	| start   | -p multinode-736061                                                                     | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:10 UTC | 16 Sep 24 11:14 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-736061                                                                | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:14 UTC |                     |
	| node    | multinode-736061 node delete                                                            | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:14 UTC | 16 Sep 24 11:14 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-736061 stop                                                                   | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:14 UTC |                     |
	| start   | -p multinode-736061                                                                     | multinode-736061 | jenkins | v1.34.0 | 16 Sep 24 11:16 UTC | 16 Sep 24 11:19 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                  |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                  |         |         |                     |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 11:16:35
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 11:16:35.978805   42145 out.go:345] Setting OutFile to fd 1 ...
	I0916 11:16:35.978909   42145 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:16:35.978917   42145 out.go:358] Setting ErrFile to fd 2...
	I0916 11:16:35.978921   42145 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:16:35.979108   42145 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3851/.minikube/bin
	I0916 11:16:35.979621   42145 out.go:352] Setting JSON to false
	I0916 11:16:35.980575   42145 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3546,"bootTime":1726481850,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 11:16:35.980665   42145 start.go:139] virtualization: kvm guest
	I0916 11:16:35.982865   42145 out.go:177] * [multinode-736061] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 11:16:35.984546   42145 notify.go:220] Checking for updates...
	I0916 11:16:35.984569   42145 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 11:16:35.986392   42145 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 11:16:35.987757   42145 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3851/kubeconfig
	I0916 11:16:35.988938   42145 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 11:16:35.990084   42145 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 11:16:35.991194   42145 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 11:16:35.992758   42145 config.go:182] Loaded profile config "multinode-736061": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:16:35.993205   42145 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 11:16:35.993270   42145 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 11:16:36.008359   42145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33261
	I0916 11:16:36.008685   42145 main.go:141] libmachine: () Calling .GetVersion
	I0916 11:16:36.009198   42145 main.go:141] libmachine: Using API Version  1
	I0916 11:16:36.009220   42145 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 11:16:36.009492   42145 main.go:141] libmachine: () Calling .GetMachineName
	I0916 11:16:36.009699   42145 main.go:141] libmachine: (multinode-736061) Calling .DriverName
	I0916 11:16:36.009991   42145 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 11:16:36.010290   42145 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 11:16:36.010331   42145 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 11:16:36.024719   42145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40965
	I0916 11:16:36.025149   42145 main.go:141] libmachine: () Calling .GetVersion
	I0916 11:16:36.025699   42145 main.go:141] libmachine: Using API Version  1
	I0916 11:16:36.025728   42145 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 11:16:36.026107   42145 main.go:141] libmachine: () Calling .GetMachineName
	I0916 11:16:36.026295   42145 main.go:141] libmachine: (multinode-736061) Calling .DriverName
	I0916 11:16:36.062174   42145 out.go:177] * Using the kvm2 driver based on existing profile
	I0916 11:16:36.063499   42145 start.go:297] selected driver: kvm2
	I0916 11:16:36.063519   42145 start.go:901] validating driver "kvm2" against &{Name:multinode-736061 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-736061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.32 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.215 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:16:36.063655   42145 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 11:16:36.063986   42145 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:16:36.064061   42145 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19651-3851/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0916 11:16:36.079113   42145 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0916 11:16:36.079945   42145 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 11:16:36.080000   42145 cni.go:84] Creating CNI manager for ""
	I0916 11:16:36.080057   42145 cni.go:136] multinode detected (2 nodes found), recommending kindnet
	I0916 11:16:36.080134   42145 start.go:340] cluster config:
	{Name:multinode-736061 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-736061 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.32 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.215 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:fal
se nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:16:36.080356   42145 iso.go:125] acquiring lock: {Name:mk8165b793e44378487d96b0de120a258f46e187 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:16:36.082210   42145 out.go:177] * Starting "multinode-736061" primary control-plane node in "multinode-736061" cluster
	I0916 11:16:36.083406   42145 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 11:16:36.083433   42145 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0916 11:16:36.083439   42145 cache.go:56] Caching tarball of preloaded images
	I0916 11:16:36.083516   42145 preload.go:172] Found /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 11:16:36.083528   42145 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 11:16:36.083646   42145 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/config.json ...
	I0916 11:16:36.083903   42145 start.go:360] acquireMachinesLock for multinode-736061: {Name:mk413037138532b69f77e412c3960796c09316f1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 11:16:36.083957   42145 start.go:364] duration metric: took 33.655µs to acquireMachinesLock for "multinode-736061"
	I0916 11:16:36.083976   42145 start.go:96] Skipping create...Using existing machine configuration
	I0916 11:16:36.083985   42145 fix.go:54] fixHost starting: 
	I0916 11:16:36.084304   42145 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 11:16:36.084339   42145 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 11:16:36.098992   42145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45303
	I0916 11:16:36.099458   42145 main.go:141] libmachine: () Calling .GetVersion
	I0916 11:16:36.099954   42145 main.go:141] libmachine: Using API Version  1
	I0916 11:16:36.099977   42145 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 11:16:36.100326   42145 main.go:141] libmachine: () Calling .GetMachineName
	I0916 11:16:36.100493   42145 main.go:141] libmachine: (multinode-736061) Calling .DriverName
	I0916 11:16:36.100639   42145 main.go:141] libmachine: (multinode-736061) Calling .GetState
	I0916 11:16:36.102074   42145 fix.go:112] recreateIfNeeded on multinode-736061: state=Running err=<nil>
	W0916 11:16:36.102109   42145 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 11:16:36.104107   42145 out.go:177] * Updating the running kvm2 "multinode-736061" VM ...
	I0916 11:16:36.105789   42145 machine.go:93] provisionDockerMachine start ...
	I0916 11:16:36.105812   42145 main.go:141] libmachine: (multinode-736061) Calling .DriverName
	I0916 11:16:36.106007   42145 main.go:141] libmachine: (multinode-736061) Calling .GetSSHHostname
	I0916 11:16:36.108248   42145 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:16:36.108693   42145 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:16:36.108726   42145 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:16:36.108831   42145 main.go:141] libmachine: (multinode-736061) Calling .GetSSHPort
	I0916 11:16:36.109007   42145 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:16:36.109142   42145 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:16:36.109268   42145 main.go:141] libmachine: (multinode-736061) Calling .GetSSHUsername
	I0916 11:16:36.109393   42145 main.go:141] libmachine: Using SSH client type: native
	I0916 11:16:36.109575   42145 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0916 11:16:36.109587   42145 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 11:16:36.218836   42145 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-736061
	
	I0916 11:16:36.218865   42145 main.go:141] libmachine: (multinode-736061) Calling .GetMachineName
	I0916 11:16:36.219119   42145 buildroot.go:166] provisioning hostname "multinode-736061"
	I0916 11:16:36.219147   42145 main.go:141] libmachine: (multinode-736061) Calling .GetMachineName
	I0916 11:16:36.219338   42145 main.go:141] libmachine: (multinode-736061) Calling .GetSSHHostname
	I0916 11:16:36.221838   42145 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:16:36.222200   42145 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:16:36.222219   42145 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:16:36.222352   42145 main.go:141] libmachine: (multinode-736061) Calling .GetSSHPort
	I0916 11:16:36.222507   42145 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:16:36.222634   42145 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:16:36.222756   42145 main.go:141] libmachine: (multinode-736061) Calling .GetSSHUsername
	I0916 11:16:36.222923   42145 main.go:141] libmachine: Using SSH client type: native
	I0916 11:16:36.223101   42145 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0916 11:16:36.223113   42145 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-736061 && echo "multinode-736061" | sudo tee /etc/hostname
	I0916 11:16:36.353454   42145 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-736061
	
	I0916 11:16:36.353485   42145 main.go:141] libmachine: (multinode-736061) Calling .GetSSHHostname
	I0916 11:16:36.356414   42145 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:16:36.356815   42145 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:16:36.356857   42145 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:16:36.357147   42145 main.go:141] libmachine: (multinode-736061) Calling .GetSSHPort
	I0916 11:16:36.357341   42145 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:16:36.357496   42145 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:16:36.357644   42145 main.go:141] libmachine: (multinode-736061) Calling .GetSSHUsername
	I0916 11:16:36.357851   42145 main.go:141] libmachine: Using SSH client type: native
	I0916 11:16:36.358024   42145 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0916 11:16:36.358042   42145 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-736061' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-736061/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-736061' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 11:16:36.470083   42145 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 11:16:36.470110   42145 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3851/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3851/.minikube}
	I0916 11:16:36.470148   42145 buildroot.go:174] setting up certificates
	I0916 11:16:36.470158   42145 provision.go:84] configureAuth start
	I0916 11:16:36.470168   42145 main.go:141] libmachine: (multinode-736061) Calling .GetMachineName
	I0916 11:16:36.470422   42145 main.go:141] libmachine: (multinode-736061) Calling .GetIP
	I0916 11:16:36.472830   42145 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:16:36.473192   42145 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:16:36.473217   42145 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:16:36.473393   42145 main.go:141] libmachine: (multinode-736061) Calling .GetSSHHostname
	I0916 11:16:36.475494   42145 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:16:36.475849   42145 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:16:36.475883   42145 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:16:36.476062   42145 provision.go:143] copyHostCerts
	I0916 11:16:36.476096   42145 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem
	I0916 11:16:36.476148   42145 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem, removing ...
	I0916 11:16:36.476160   42145 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem
	I0916 11:16:36.476248   42145 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem (1679 bytes)
	I0916 11:16:36.476409   42145 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem
	I0916 11:16:36.476436   42145 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem, removing ...
	I0916 11:16:36.476444   42145 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem
	I0916 11:16:36.476490   42145 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem (1082 bytes)
	I0916 11:16:36.476571   42145 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem
	I0916 11:16:36.476594   42145 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem, removing ...
	I0916 11:16:36.476600   42145 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem
	I0916 11:16:36.476644   42145 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem (1123 bytes)
	I0916 11:16:36.476726   42145 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem org=jenkins.multinode-736061 san=[127.0.0.1 192.168.39.32 localhost minikube multinode-736061]
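(Aside: the server certificate above is generated in-process by libmachine's Go code. Purely as an illustrative sketch, an equivalent certificate could be produced with openssl along these lines, with the org, SAN list and CA file names taken from the log line above and the validity matching the profile's CertExpiration of 26280h, i.e. 1095 days; none of these commands appear in the log itself.)

    # illustrative only - minikube/libmachine does this in Go, not via openssl
    openssl req -new -newkey rsa:2048 -nodes \
      -subj "/O=jenkins.multinode-736061" \
      -keyout server-key.pem -out server.csr
    # requires bash for the <(...) process substitution
    openssl x509 -req -in server.csr \
      -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.39.32,DNS:localhost,DNS:minikube,DNS:multinode-736061') \
      -days 1095 -out server.pem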
	I0916 11:16:36.677908   42145 provision.go:177] copyRemoteCerts
	I0916 11:16:36.677967   42145 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 11:16:36.677989   42145 main.go:141] libmachine: (multinode-736061) Calling .GetSSHHostname
	I0916 11:16:36.680302   42145 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:16:36.680693   42145 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:16:36.680724   42145 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:16:36.680923   42145 main.go:141] libmachine: (multinode-736061) Calling .GetSSHPort
	I0916 11:16:36.681108   42145 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:16:36.681263   42145 main.go:141] libmachine: (multinode-736061) Calling .GetSSHUsername
	I0916 11:16:36.681380   42145 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061/id_rsa Username:docker}
	I0916 11:16:36.763749   42145 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 11:16:36.763810   42145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 11:16:36.790504   42145 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 11:16:36.790589   42145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0916 11:16:36.820467   42145 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 11:16:36.820534   42145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 11:16:36.851611   42145 provision.go:87] duration metric: took 381.441601ms to configureAuth
	I0916 11:16:36.851642   42145 buildroot.go:189] setting minikube options for container-runtime
	I0916 11:16:36.851867   42145 config.go:182] Loaded profile config "multinode-736061": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:16:36.851965   42145 main.go:141] libmachine: (multinode-736061) Calling .GetSSHHostname
	I0916 11:16:36.854597   42145 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:16:36.854964   42145 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:16:36.854990   42145 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:16:36.855220   42145 main.go:141] libmachine: (multinode-736061) Calling .GetSSHPort
	I0916 11:16:36.855380   42145 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:16:36.855577   42145 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:16:36.855723   42145 main.go:141] libmachine: (multinode-736061) Calling .GetSSHUsername
	I0916 11:16:36.855875   42145 main.go:141] libmachine: Using SSH client type: native
	I0916 11:16:36.856036   42145 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0916 11:16:36.856049   42145 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 11:18:11.437799   42145 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 11:18:11.437844   42145 machine.go:96] duration metric: took 1m35.332043379s to provisionDockerMachine
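(Aside: nearly all of that 1m35s is spent inside the "sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio" SSH command above; the log timestamps jump from 11:16:36.856 to 11:18:11.437 across that single command, i.e. roughly 94.6s of the 95.3s total for provisionDockerMachine.)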
	I0916 11:18:11.437865   42145 start.go:293] postStartSetup for "multinode-736061" (driver="kvm2")
	I0916 11:18:11.437894   42145 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 11:18:11.437943   42145 main.go:141] libmachine: (multinode-736061) Calling .DriverName
	I0916 11:18:11.438273   42145 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 11:18:11.438304   42145 main.go:141] libmachine: (multinode-736061) Calling .GetSSHHostname
	I0916 11:18:11.441470   42145 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:18:11.441958   42145 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:18:11.441988   42145 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:18:11.442130   42145 main.go:141] libmachine: (multinode-736061) Calling .GetSSHPort
	I0916 11:18:11.442307   42145 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:18:11.442474   42145 main.go:141] libmachine: (multinode-736061) Calling .GetSSHUsername
	I0916 11:18:11.442620   42145 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061/id_rsa Username:docker}
	I0916 11:18:11.529059   42145 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 11:18:11.533363   42145 command_runner.go:130] > NAME=Buildroot
	I0916 11:18:11.533393   42145 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0916 11:18:11.533400   42145 command_runner.go:130] > ID=buildroot
	I0916 11:18:11.533407   42145 command_runner.go:130] > VERSION_ID=2023.02.9
	I0916 11:18:11.533414   42145 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0916 11:18:11.533462   42145 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 11:18:11.533487   42145 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3851/.minikube/addons for local assets ...
	I0916 11:18:11.533561   42145 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3851/.minikube/files for local assets ...
	I0916 11:18:11.533631   42145 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem -> 112032.pem in /etc/ssl/certs
	I0916 11:18:11.533642   42145 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem -> /etc/ssl/certs/112032.pem
	I0916 11:18:11.533722   42145 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 11:18:11.543953   42145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem --> /etc/ssl/certs/112032.pem (1708 bytes)
	I0916 11:18:11.569715   42145 start.go:296] duration metric: took 131.825988ms for postStartSetup
	I0916 11:18:11.569778   42145 fix.go:56] duration metric: took 1m35.485792246s for fixHost
	I0916 11:18:11.569812   42145 main.go:141] libmachine: (multinode-736061) Calling .GetSSHHostname
	I0916 11:18:11.572747   42145 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:18:11.573143   42145 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:18:11.573173   42145 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:18:11.573355   42145 main.go:141] libmachine: (multinode-736061) Calling .GetSSHPort
	I0916 11:18:11.573551   42145 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:18:11.573725   42145 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:18:11.573880   42145 main.go:141] libmachine: (multinode-736061) Calling .GetSSHUsername
	I0916 11:18:11.574039   42145 main.go:141] libmachine: Using SSH client type: native
	I0916 11:18:11.574215   42145 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0916 11:18:11.574228   42145 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 11:18:11.682012   42145 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726485491.653409192
	
	I0916 11:18:11.682040   42145 fix.go:216] guest clock: 1726485491.653409192
	I0916 11:18:11.682054   42145 fix.go:229] Guest: 2024-09-16 11:18:11.653409192 +0000 UTC Remote: 2024-09-16 11:18:11.569792981 +0000 UTC m=+95.627416583 (delta=83.616211ms)
	I0916 11:18:11.682084   42145 fix.go:200] guest clock delta is within tolerance: 83.616211ms
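(Aside: the delta above is simply the guest clock reading minus the host-side timestamp:

    1726485491.653409192 (guest) - 1726485491.569792981 (host) = 0.083616211 s = 83.616211 ms

which the fix.go check accepts as within tolerance.)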
	I0916 11:18:11.682092   42145 start.go:83] releasing machines lock for "multinode-736061", held for 1m35.598123147s
	I0916 11:18:11.682114   42145 main.go:141] libmachine: (multinode-736061) Calling .DriverName
	I0916 11:18:11.682359   42145 main.go:141] libmachine: (multinode-736061) Calling .GetIP
	I0916 11:18:11.685077   42145 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:18:11.685438   42145 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:18:11.685469   42145 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:18:11.685577   42145 main.go:141] libmachine: (multinode-736061) Calling .DriverName
	I0916 11:18:11.686154   42145 main.go:141] libmachine: (multinode-736061) Calling .DriverName
	I0916 11:18:11.686332   42145 main.go:141] libmachine: (multinode-736061) Calling .DriverName
	I0916 11:18:11.686434   42145 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 11:18:11.686494   42145 main.go:141] libmachine: (multinode-736061) Calling .GetSSHHostname
	I0916 11:18:11.686530   42145 ssh_runner.go:195] Run: cat /version.json
	I0916 11:18:11.686553   42145 main.go:141] libmachine: (multinode-736061) Calling .GetSSHHostname
	I0916 11:18:11.689100   42145 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:18:11.689122   42145 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:18:11.689549   42145 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:18:11.689594   42145 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:18:11.689623   42145 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:18:11.689640   42145 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:18:11.689720   42145 main.go:141] libmachine: (multinode-736061) Calling .GetSSHPort
	I0916 11:18:11.689883   42145 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:18:11.689894   42145 main.go:141] libmachine: (multinode-736061) Calling .GetSSHPort
	I0916 11:18:11.690063   42145 main.go:141] libmachine: (multinode-736061) Calling .GetSSHUsername
	I0916 11:18:11.690065   42145 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:18:11.690208   42145 main.go:141] libmachine: (multinode-736061) Calling .GetSSHUsername
	I0916 11:18:11.690212   42145 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061/id_rsa Username:docker}
	I0916 11:18:11.690324   42145 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061/id_rsa Username:docker}
	I0916 11:18:11.806766   42145 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0916 11:18:11.807393   42145 command_runner.go:130] > {"iso_version": "v1.34.0-1726415472-19646", "kicbase_version": "v0.0.45-1726358845-19644", "minikube_version": "v1.34.0", "commit": "7dc55c0008a982396eb57879cd4eab23ab96531e"}
	I0916 11:18:11.807584   42145 ssh_runner.go:195] Run: systemctl --version
	I0916 11:18:11.819154   42145 command_runner.go:130] > systemd 252 (252)
	I0916 11:18:11.819216   42145 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0916 11:18:11.819347   42145 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 11:18:11.980173   42145 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 11:18:11.986158   42145 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0916 11:18:11.986220   42145 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 11:18:11.986265   42145 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 11:18:11.996110   42145 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0916 11:18:11.996128   42145 start.go:495] detecting cgroup driver to use...
	I0916 11:18:11.996186   42145 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 11:18:12.013666   42145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 11:18:12.028247   42145 docker.go:217] disabling cri-docker service (if available) ...
	I0916 11:18:12.028309   42145 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 11:18:12.042998   42145 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 11:18:12.057864   42145 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 11:18:12.198732   42145 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 11:18:12.336962   42145 docker.go:233] disabling docker service ...
	I0916 11:18:12.337021   42145 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 11:18:12.354271   42145 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 11:18:12.369024   42145 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 11:18:12.504366   42145 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 11:18:12.646261   42145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 11:18:12.660385   42145 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 11:18:12.679976   42145 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0916 11:18:12.680038   42145 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 11:18:12.680093   42145 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:18:12.691314   42145 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 11:18:12.691376   42145 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:18:12.702898   42145 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:18:12.714096   42145 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:18:12.725663   42145 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 11:18:12.736905   42145 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:18:12.747928   42145 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:18:12.758890   42145 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
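(Aside: taken together, the sed/grep edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings. This fragment is reconstructed from the commands shown; the resulting file itself is not printed in the log.)

    pause_image = "registry.k8s.io/pause:3.10"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]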
	I0916 11:18:12.770050   42145 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 11:18:12.779457   42145 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0916 11:18:12.779537   42145 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 11:18:12.789222   42145 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:18:12.928692   42145 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 11:18:24.056206   42145 ssh_runner.go:235] Completed: sudo systemctl restart crio: (11.127472721s)
	I0916 11:18:24.056239   42145 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 11:18:24.056317   42145 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 11:18:24.062284   42145 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0916 11:18:24.062313   42145 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0916 11:18:24.062321   42145 command_runner.go:130] > Device: 0,22	Inode: 1888        Links: 1
	I0916 11:18:24.062332   42145 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 11:18:24.062342   42145 command_runner.go:130] > Access: 2024-09-16 11:18:21.816441138 +0000
	I0916 11:18:24.062351   42145 command_runner.go:130] > Modify: 2024-09-16 11:18:20.096370352 +0000
	I0916 11:18:24.062359   42145 command_runner.go:130] > Change: 2024-09-16 11:18:20.096370352 +0000
	I0916 11:18:24.062368   42145 command_runner.go:130] >  Birth: -
	I0916 11:18:24.062563   42145 start.go:563] Will wait 60s for crictl version
	I0916 11:18:24.062621   42145 ssh_runner.go:195] Run: which crictl
	I0916 11:18:24.066545   42145 command_runner.go:130] > /usr/bin/crictl
	I0916 11:18:24.066699   42145 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 11:18:24.105301   42145 command_runner.go:130] > Version:  0.1.0
	I0916 11:18:24.105329   42145 command_runner.go:130] > RuntimeName:  cri-o
	I0916 11:18:24.105335   42145 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0916 11:18:24.105341   42145 command_runner.go:130] > RuntimeApiVersion:  v1
	I0916 11:18:24.105363   42145 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0916 11:18:24.105413   42145 ssh_runner.go:195] Run: crio --version
	I0916 11:18:24.133560   42145 command_runner.go:130] > crio version 1.29.1
	I0916 11:18:24.133586   42145 command_runner.go:130] > Version:        1.29.1
	I0916 11:18:24.133592   42145 command_runner.go:130] > GitCommit:      unknown
	I0916 11:18:24.133596   42145 command_runner.go:130] > GitCommitDate:  unknown
	I0916 11:18:24.133607   42145 command_runner.go:130] > GitTreeState:   clean
	I0916 11:18:24.133614   42145 command_runner.go:130] > BuildDate:      2024-09-15T21:21:56Z
	I0916 11:18:24.133619   42145 command_runner.go:130] > GoVersion:      go1.21.6
	I0916 11:18:24.133622   42145 command_runner.go:130] > Compiler:       gc
	I0916 11:18:24.133627   42145 command_runner.go:130] > Platform:       linux/amd64
	I0916 11:18:24.133631   42145 command_runner.go:130] > Linkmode:       dynamic
	I0916 11:18:24.133636   42145 command_runner.go:130] > BuildTags:      
	I0916 11:18:24.133640   42145 command_runner.go:130] >   containers_image_ostree_stub
	I0916 11:18:24.133644   42145 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0916 11:18:24.133648   42145 command_runner.go:130] >   btrfs_noversion
	I0916 11:18:24.133653   42145 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0916 11:18:24.133657   42145 command_runner.go:130] >   libdm_no_deferred_remove
	I0916 11:18:24.133661   42145 command_runner.go:130] >   seccomp
	I0916 11:18:24.133668   42145 command_runner.go:130] > LDFlags:          unknown
	I0916 11:18:24.133672   42145 command_runner.go:130] > SeccompEnabled:   true
	I0916 11:18:24.133679   42145 command_runner.go:130] > AppArmorEnabled:  false
	I0916 11:18:24.134808   42145 ssh_runner.go:195] Run: crio --version
	I0916 11:18:24.162438   42145 command_runner.go:130] > crio version 1.29.1
	I0916 11:18:24.162462   42145 command_runner.go:130] > Version:        1.29.1
	I0916 11:18:24.162468   42145 command_runner.go:130] > GitCommit:      unknown
	I0916 11:18:24.162472   42145 command_runner.go:130] > GitCommitDate:  unknown
	I0916 11:18:24.162475   42145 command_runner.go:130] > GitTreeState:   clean
	I0916 11:18:24.162481   42145 command_runner.go:130] > BuildDate:      2024-09-15T21:21:56Z
	I0916 11:18:24.162484   42145 command_runner.go:130] > GoVersion:      go1.21.6
	I0916 11:18:24.162488   42145 command_runner.go:130] > Compiler:       gc
	I0916 11:18:24.162493   42145 command_runner.go:130] > Platform:       linux/amd64
	I0916 11:18:24.162497   42145 command_runner.go:130] > Linkmode:       dynamic
	I0916 11:18:24.162501   42145 command_runner.go:130] > BuildTags:      
	I0916 11:18:24.162506   42145 command_runner.go:130] >   containers_image_ostree_stub
	I0916 11:18:24.162510   42145 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0916 11:18:24.162526   42145 command_runner.go:130] >   btrfs_noversion
	I0916 11:18:24.162531   42145 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0916 11:18:24.162535   42145 command_runner.go:130] >   libdm_no_deferred_remove
	I0916 11:18:24.162543   42145 command_runner.go:130] >   seccomp
	I0916 11:18:24.162548   42145 command_runner.go:130] > LDFlags:          unknown
	I0916 11:18:24.162553   42145 command_runner.go:130] > SeccompEnabled:   true
	I0916 11:18:24.162557   42145 command_runner.go:130] > AppArmorEnabled:  false
	I0916 11:18:24.165900   42145 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0916 11:18:24.167356   42145 main.go:141] libmachine: (multinode-736061) Calling .GetIP
	I0916 11:18:24.169754   42145 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:18:24.170093   42145 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:18:24.170122   42145 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:18:24.170291   42145 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0916 11:18:24.174608   42145 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0916 11:18:24.174777   42145 kubeadm.go:883] updating cluster {Name:multinode-736061 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
31.1 ClusterName:multinode-736061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.32 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.215 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:fa
lse metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP
: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 11:18:24.174975   42145 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 11:18:24.175021   42145 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:18:24.217648   42145 command_runner.go:130] > {
	I0916 11:18:24.217673   42145 command_runner.go:130] >   "images": [
	I0916 11:18:24.217678   42145 command_runner.go:130] >     {
	I0916 11:18:24.217689   42145 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0916 11:18:24.217693   42145 command_runner.go:130] >       "repoTags": [
	I0916 11:18:24.217699   42145 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0916 11:18:24.217703   42145 command_runner.go:130] >       ],
	I0916 11:18:24.217707   42145 command_runner.go:130] >       "repoDigests": [
	I0916 11:18:24.217717   42145 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0916 11:18:24.217725   42145 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0916 11:18:24.217731   42145 command_runner.go:130] >       ],
	I0916 11:18:24.217750   42145 command_runner.go:130] >       "size": "87190579",
	I0916 11:18:24.217757   42145 command_runner.go:130] >       "uid": null,
	I0916 11:18:24.217761   42145 command_runner.go:130] >       "username": "",
	I0916 11:18:24.217767   42145 command_runner.go:130] >       "spec": null,
	I0916 11:18:24.217773   42145 command_runner.go:130] >       "pinned": false
	I0916 11:18:24.217776   42145 command_runner.go:130] >     },
	I0916 11:18:24.217780   42145 command_runner.go:130] >     {
	I0916 11:18:24.217790   42145 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0916 11:18:24.217796   42145 command_runner.go:130] >       "repoTags": [
	I0916 11:18:24.217805   42145 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0916 11:18:24.217811   42145 command_runner.go:130] >       ],
	I0916 11:18:24.217818   42145 command_runner.go:130] >       "repoDigests": [
	I0916 11:18:24.217829   42145 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0916 11:18:24.217843   42145 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0916 11:18:24.217858   42145 command_runner.go:130] >       ],
	I0916 11:18:24.217868   42145 command_runner.go:130] >       "size": "1363676",
	I0916 11:18:24.217874   42145 command_runner.go:130] >       "uid": null,
	I0916 11:18:24.217884   42145 command_runner.go:130] >       "username": "",
	I0916 11:18:24.217893   42145 command_runner.go:130] >       "spec": null,
	I0916 11:18:24.217899   42145 command_runner.go:130] >       "pinned": false
	I0916 11:18:24.217905   42145 command_runner.go:130] >     },
	I0916 11:18:24.217913   42145 command_runner.go:130] >     {
	I0916 11:18:24.217921   42145 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0916 11:18:24.217927   42145 command_runner.go:130] >       "repoTags": [
	I0916 11:18:24.217932   42145 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0916 11:18:24.217938   42145 command_runner.go:130] >       ],
	I0916 11:18:24.217941   42145 command_runner.go:130] >       "repoDigests": [
	I0916 11:18:24.217949   42145 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0916 11:18:24.217959   42145 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0916 11:18:24.217965   42145 command_runner.go:130] >       ],
	I0916 11:18:24.217969   42145 command_runner.go:130] >       "size": "31470524",
	I0916 11:18:24.217973   42145 command_runner.go:130] >       "uid": null,
	I0916 11:18:24.217977   42145 command_runner.go:130] >       "username": "",
	I0916 11:18:24.217989   42145 command_runner.go:130] >       "spec": null,
	I0916 11:18:24.217996   42145 command_runner.go:130] >       "pinned": false
	I0916 11:18:24.217999   42145 command_runner.go:130] >     },
	I0916 11:18:24.218003   42145 command_runner.go:130] >     {
	I0916 11:18:24.218009   42145 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0916 11:18:24.218014   42145 command_runner.go:130] >       "repoTags": [
	I0916 11:18:24.218019   42145 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0916 11:18:24.218025   42145 command_runner.go:130] >       ],
	I0916 11:18:24.218029   42145 command_runner.go:130] >       "repoDigests": [
	I0916 11:18:24.218038   42145 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0916 11:18:24.218052   42145 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0916 11:18:24.218064   42145 command_runner.go:130] >       ],
	I0916 11:18:24.218068   42145 command_runner.go:130] >       "size": "63273227",
	I0916 11:18:24.218072   42145 command_runner.go:130] >       "uid": null,
	I0916 11:18:24.218076   42145 command_runner.go:130] >       "username": "nonroot",
	I0916 11:18:24.218080   42145 command_runner.go:130] >       "spec": null,
	I0916 11:18:24.218083   42145 command_runner.go:130] >       "pinned": false
	I0916 11:18:24.218087   42145 command_runner.go:130] >     },
	I0916 11:18:24.218090   42145 command_runner.go:130] >     {
	I0916 11:18:24.218096   42145 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0916 11:18:24.218101   42145 command_runner.go:130] >       "repoTags": [
	I0916 11:18:24.218106   42145 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0916 11:18:24.218110   42145 command_runner.go:130] >       ],
	I0916 11:18:24.218115   42145 command_runner.go:130] >       "repoDigests": [
	I0916 11:18:24.218123   42145 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0916 11:18:24.218130   42145 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0916 11:18:24.218135   42145 command_runner.go:130] >       ],
	I0916 11:18:24.218140   42145 command_runner.go:130] >       "size": "149009664",
	I0916 11:18:24.218143   42145 command_runner.go:130] >       "uid": {
	I0916 11:18:24.218148   42145 command_runner.go:130] >         "value": "0"
	I0916 11:18:24.218156   42145 command_runner.go:130] >       },
	I0916 11:18:24.218163   42145 command_runner.go:130] >       "username": "",
	I0916 11:18:24.218170   42145 command_runner.go:130] >       "spec": null,
	I0916 11:18:24.218185   42145 command_runner.go:130] >       "pinned": false
	I0916 11:18:24.218195   42145 command_runner.go:130] >     },
	I0916 11:18:24.218198   42145 command_runner.go:130] >     {
	I0916 11:18:24.218205   42145 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0916 11:18:24.218210   42145 command_runner.go:130] >       "repoTags": [
	I0916 11:18:24.218215   42145 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0916 11:18:24.218221   42145 command_runner.go:130] >       ],
	I0916 11:18:24.218225   42145 command_runner.go:130] >       "repoDigests": [
	I0916 11:18:24.218234   42145 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0916 11:18:24.218242   42145 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0916 11:18:24.218246   42145 command_runner.go:130] >       ],
	I0916 11:18:24.218252   42145 command_runner.go:130] >       "size": "95237600",
	I0916 11:18:24.218256   42145 command_runner.go:130] >       "uid": {
	I0916 11:18:24.218262   42145 command_runner.go:130] >         "value": "0"
	I0916 11:18:24.218265   42145 command_runner.go:130] >       },
	I0916 11:18:24.218268   42145 command_runner.go:130] >       "username": "",
	I0916 11:18:24.218275   42145 command_runner.go:130] >       "spec": null,
	I0916 11:18:24.218278   42145 command_runner.go:130] >       "pinned": false
	I0916 11:18:24.218282   42145 command_runner.go:130] >     },
	I0916 11:18:24.218285   42145 command_runner.go:130] >     {
	I0916 11:18:24.218291   42145 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0916 11:18:24.218297   42145 command_runner.go:130] >       "repoTags": [
	I0916 11:18:24.218302   42145 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0916 11:18:24.218306   42145 command_runner.go:130] >       ],
	I0916 11:18:24.218311   42145 command_runner.go:130] >       "repoDigests": [
	I0916 11:18:24.218320   42145 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0916 11:18:24.218327   42145 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0916 11:18:24.218333   42145 command_runner.go:130] >       ],
	I0916 11:18:24.218337   42145 command_runner.go:130] >       "size": "89437508",
	I0916 11:18:24.218340   42145 command_runner.go:130] >       "uid": {
	I0916 11:18:24.218344   42145 command_runner.go:130] >         "value": "0"
	I0916 11:18:24.218348   42145 command_runner.go:130] >       },
	I0916 11:18:24.218351   42145 command_runner.go:130] >       "username": "",
	I0916 11:18:24.218360   42145 command_runner.go:130] >       "spec": null,
	I0916 11:18:24.218366   42145 command_runner.go:130] >       "pinned": false
	I0916 11:18:24.218369   42145 command_runner.go:130] >     },
	I0916 11:18:24.218373   42145 command_runner.go:130] >     {
	I0916 11:18:24.218379   42145 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0916 11:18:24.218385   42145 command_runner.go:130] >       "repoTags": [
	I0916 11:18:24.218389   42145 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0916 11:18:24.218394   42145 command_runner.go:130] >       ],
	I0916 11:18:24.218399   42145 command_runner.go:130] >       "repoDigests": [
	I0916 11:18:24.218415   42145 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0916 11:18:24.218425   42145 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0916 11:18:24.218429   42145 command_runner.go:130] >       ],
	I0916 11:18:24.218433   42145 command_runner.go:130] >       "size": "92733849",
	I0916 11:18:24.218436   42145 command_runner.go:130] >       "uid": null,
	I0916 11:18:24.218440   42145 command_runner.go:130] >       "username": "",
	I0916 11:18:24.218443   42145 command_runner.go:130] >       "spec": null,
	I0916 11:18:24.218447   42145 command_runner.go:130] >       "pinned": false
	I0916 11:18:24.218450   42145 command_runner.go:130] >     },
	I0916 11:18:24.218454   42145 command_runner.go:130] >     {
	I0916 11:18:24.218463   42145 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0916 11:18:24.218468   42145 command_runner.go:130] >       "repoTags": [
	I0916 11:18:24.218477   42145 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0916 11:18:24.218481   42145 command_runner.go:130] >       ],
	I0916 11:18:24.218488   42145 command_runner.go:130] >       "repoDigests": [
	I0916 11:18:24.218498   42145 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0916 11:18:24.218515   42145 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0916 11:18:24.218520   42145 command_runner.go:130] >       ],
	I0916 11:18:24.218526   42145 command_runner.go:130] >       "size": "68420934",
	I0916 11:18:24.218531   42145 command_runner.go:130] >       "uid": {
	I0916 11:18:24.218537   42145 command_runner.go:130] >         "value": "0"
	I0916 11:18:24.218541   42145 command_runner.go:130] >       },
	I0916 11:18:24.218547   42145 command_runner.go:130] >       "username": "",
	I0916 11:18:24.218556   42145 command_runner.go:130] >       "spec": null,
	I0916 11:18:24.218573   42145 command_runner.go:130] >       "pinned": false
	I0916 11:18:24.218582   42145 command_runner.go:130] >     },
	I0916 11:18:24.218588   42145 command_runner.go:130] >     {
	I0916 11:18:24.218599   42145 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0916 11:18:24.218608   42145 command_runner.go:130] >       "repoTags": [
	I0916 11:18:24.218615   42145 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0916 11:18:24.218620   42145 command_runner.go:130] >       ],
	I0916 11:18:24.218628   42145 command_runner.go:130] >       "repoDigests": [
	I0916 11:18:24.218639   42145 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0916 11:18:24.218652   42145 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0916 11:18:24.218660   42145 command_runner.go:130] >       ],
	I0916 11:18:24.218666   42145 command_runner.go:130] >       "size": "742080",
	I0916 11:18:24.218675   42145 command_runner.go:130] >       "uid": {
	I0916 11:18:24.218682   42145 command_runner.go:130] >         "value": "65535"
	I0916 11:18:24.218690   42145 command_runner.go:130] >       },
	I0916 11:18:24.218697   42145 command_runner.go:130] >       "username": "",
	I0916 11:18:24.218708   42145 command_runner.go:130] >       "spec": null,
	I0916 11:18:24.218712   42145 command_runner.go:130] >       "pinned": true
	I0916 11:18:24.218715   42145 command_runner.go:130] >     }
	I0916 11:18:24.218719   42145 command_runner.go:130] >   ]
	I0916 11:18:24.218724   42145 command_runner.go:130] > }
	I0916 11:18:24.219112   42145 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 11:18:24.219130   42145 crio.go:433] Images already preloaded, skipping extraction
	I0916 11:18:24.219181   42145 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:18:24.252251   42145 command_runner.go:130] > {
	I0916 11:18:24.252281   42145 command_runner.go:130] >   "images": [
	I0916 11:18:24.252288   42145 command_runner.go:130] >     {
	I0916 11:18:24.252304   42145 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0916 11:18:24.252311   42145 command_runner.go:130] >       "repoTags": [
	I0916 11:18:24.252321   42145 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0916 11:18:24.252327   42145 command_runner.go:130] >       ],
	I0916 11:18:24.252342   42145 command_runner.go:130] >       "repoDigests": [
	I0916 11:18:24.252356   42145 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0916 11:18:24.252374   42145 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0916 11:18:24.252377   42145 command_runner.go:130] >       ],
	I0916 11:18:24.252382   42145 command_runner.go:130] >       "size": "87190579",
	I0916 11:18:24.252391   42145 command_runner.go:130] >       "uid": null,
	I0916 11:18:24.252398   42145 command_runner.go:130] >       "username": "",
	I0916 11:18:24.252405   42145 command_runner.go:130] >       "spec": null,
	I0916 11:18:24.252431   42145 command_runner.go:130] >       "pinned": false
	I0916 11:18:24.252441   42145 command_runner.go:130] >     },
	I0916 11:18:24.252447   42145 command_runner.go:130] >     {
	I0916 11:18:24.252456   42145 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0916 11:18:24.252464   42145 command_runner.go:130] >       "repoTags": [
	I0916 11:18:24.252475   42145 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0916 11:18:24.252481   42145 command_runner.go:130] >       ],
	I0916 11:18:24.252490   42145 command_runner.go:130] >       "repoDigests": [
	I0916 11:18:24.252502   42145 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0916 11:18:24.252513   42145 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0916 11:18:24.252518   42145 command_runner.go:130] >       ],
	I0916 11:18:24.252522   42145 command_runner.go:130] >       "size": "1363676",
	I0916 11:18:24.252529   42145 command_runner.go:130] >       "uid": null,
	I0916 11:18:24.252538   42145 command_runner.go:130] >       "username": "",
	I0916 11:18:24.252547   42145 command_runner.go:130] >       "spec": null,
	I0916 11:18:24.252555   42145 command_runner.go:130] >       "pinned": false
	I0916 11:18:24.252564   42145 command_runner.go:130] >     },
	I0916 11:18:24.252569   42145 command_runner.go:130] >     {
	I0916 11:18:24.252582   42145 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0916 11:18:24.252591   42145 command_runner.go:130] >       "repoTags": [
	I0916 11:18:24.252600   42145 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0916 11:18:24.252608   42145 command_runner.go:130] >       ],
	I0916 11:18:24.252615   42145 command_runner.go:130] >       "repoDigests": [
	I0916 11:18:24.252625   42145 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0916 11:18:24.252637   42145 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0916 11:18:24.252645   42145 command_runner.go:130] >       ],
	I0916 11:18:24.252652   42145 command_runner.go:130] >       "size": "31470524",
	I0916 11:18:24.252660   42145 command_runner.go:130] >       "uid": null,
	I0916 11:18:24.252666   42145 command_runner.go:130] >       "username": "",
	I0916 11:18:24.252675   42145 command_runner.go:130] >       "spec": null,
	I0916 11:18:24.252686   42145 command_runner.go:130] >       "pinned": false
	I0916 11:18:24.252695   42145 command_runner.go:130] >     },
	I0916 11:18:24.252701   42145 command_runner.go:130] >     {
	I0916 11:18:24.252714   42145 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0916 11:18:24.252723   42145 command_runner.go:130] >       "repoTags": [
	I0916 11:18:24.252731   42145 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0916 11:18:24.252739   42145 command_runner.go:130] >       ],
	I0916 11:18:24.252746   42145 command_runner.go:130] >       "repoDigests": [
	I0916 11:18:24.252761   42145 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0916 11:18:24.252780   42145 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0916 11:18:24.252789   42145 command_runner.go:130] >       ],
	I0916 11:18:24.252795   42145 command_runner.go:130] >       "size": "63273227",
	I0916 11:18:24.252803   42145 command_runner.go:130] >       "uid": null,
	I0916 11:18:24.252811   42145 command_runner.go:130] >       "username": "nonroot",
	I0916 11:18:24.252819   42145 command_runner.go:130] >       "spec": null,
	I0916 11:18:24.252835   42145 command_runner.go:130] >       "pinned": false
	I0916 11:18:24.252843   42145 command_runner.go:130] >     },
	I0916 11:18:24.252848   42145 command_runner.go:130] >     {
	I0916 11:18:24.252858   42145 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0916 11:18:24.252868   42145 command_runner.go:130] >       "repoTags": [
	I0916 11:18:24.252876   42145 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0916 11:18:24.252885   42145 command_runner.go:130] >       ],
	I0916 11:18:24.252901   42145 command_runner.go:130] >       "repoDigests": [
	I0916 11:18:24.252914   42145 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0916 11:18:24.252926   42145 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0916 11:18:24.252933   42145 command_runner.go:130] >       ],
	I0916 11:18:24.252940   42145 command_runner.go:130] >       "size": "149009664",
	I0916 11:18:24.252946   42145 command_runner.go:130] >       "uid": {
	I0916 11:18:24.252950   42145 command_runner.go:130] >         "value": "0"
	I0916 11:18:24.252953   42145 command_runner.go:130] >       },
	I0916 11:18:24.252958   42145 command_runner.go:130] >       "username": "",
	I0916 11:18:24.252965   42145 command_runner.go:130] >       "spec": null,
	I0916 11:18:24.252969   42145 command_runner.go:130] >       "pinned": false
	I0916 11:18:24.252975   42145 command_runner.go:130] >     },
	I0916 11:18:24.252980   42145 command_runner.go:130] >     {
	I0916 11:18:24.252987   42145 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0916 11:18:24.252992   42145 command_runner.go:130] >       "repoTags": [
	I0916 11:18:24.252997   42145 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0916 11:18:24.253002   42145 command_runner.go:130] >       ],
	I0916 11:18:24.253006   42145 command_runner.go:130] >       "repoDigests": [
	I0916 11:18:24.253016   42145 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0916 11:18:24.253023   42145 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0916 11:18:24.253029   42145 command_runner.go:130] >       ],
	I0916 11:18:24.253035   42145 command_runner.go:130] >       "size": "95237600",
	I0916 11:18:24.253043   42145 command_runner.go:130] >       "uid": {
	I0916 11:18:24.253049   42145 command_runner.go:130] >         "value": "0"
	I0916 11:18:24.253057   42145 command_runner.go:130] >       },
	I0916 11:18:24.253065   42145 command_runner.go:130] >       "username": "",
	I0916 11:18:24.253074   42145 command_runner.go:130] >       "spec": null,
	I0916 11:18:24.253080   42145 command_runner.go:130] >       "pinned": false
	I0916 11:18:24.253089   42145 command_runner.go:130] >     },
	I0916 11:18:24.253094   42145 command_runner.go:130] >     {
	I0916 11:18:24.253105   42145 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0916 11:18:24.253114   42145 command_runner.go:130] >       "repoTags": [
	I0916 11:18:24.253122   42145 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0916 11:18:24.253142   42145 command_runner.go:130] >       ],
	I0916 11:18:24.253149   42145 command_runner.go:130] >       "repoDigests": [
	I0916 11:18:24.253164   42145 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0916 11:18:24.253177   42145 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0916 11:18:24.253185   42145 command_runner.go:130] >       ],
	I0916 11:18:24.253200   42145 command_runner.go:130] >       "size": "89437508",
	I0916 11:18:24.253210   42145 command_runner.go:130] >       "uid": {
	I0916 11:18:24.253216   42145 command_runner.go:130] >         "value": "0"
	I0916 11:18:24.253224   42145 command_runner.go:130] >       },
	I0916 11:18:24.253230   42145 command_runner.go:130] >       "username": "",
	I0916 11:18:24.253239   42145 command_runner.go:130] >       "spec": null,
	I0916 11:18:24.253249   42145 command_runner.go:130] >       "pinned": false
	I0916 11:18:24.253258   42145 command_runner.go:130] >     },
	I0916 11:18:24.253264   42145 command_runner.go:130] >     {
	I0916 11:18:24.253279   42145 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0916 11:18:24.253287   42145 command_runner.go:130] >       "repoTags": [
	I0916 11:18:24.253295   42145 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0916 11:18:24.253301   42145 command_runner.go:130] >       ],
	I0916 11:18:24.253305   42145 command_runner.go:130] >       "repoDigests": [
	I0916 11:18:24.253321   42145 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0916 11:18:24.253330   42145 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0916 11:18:24.253334   42145 command_runner.go:130] >       ],
	I0916 11:18:24.253338   42145 command_runner.go:130] >       "size": "92733849",
	I0916 11:18:24.253342   42145 command_runner.go:130] >       "uid": null,
	I0916 11:18:24.253346   42145 command_runner.go:130] >       "username": "",
	I0916 11:18:24.253350   42145 command_runner.go:130] >       "spec": null,
	I0916 11:18:24.253353   42145 command_runner.go:130] >       "pinned": false
	I0916 11:18:24.253357   42145 command_runner.go:130] >     },
	I0916 11:18:24.253361   42145 command_runner.go:130] >     {
	I0916 11:18:24.253367   42145 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0916 11:18:24.253373   42145 command_runner.go:130] >       "repoTags": [
	I0916 11:18:24.253378   42145 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0916 11:18:24.253384   42145 command_runner.go:130] >       ],
	I0916 11:18:24.253388   42145 command_runner.go:130] >       "repoDigests": [
	I0916 11:18:24.253394   42145 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0916 11:18:24.253403   42145 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0916 11:18:24.253407   42145 command_runner.go:130] >       ],
	I0916 11:18:24.253415   42145 command_runner.go:130] >       "size": "68420934",
	I0916 11:18:24.253418   42145 command_runner.go:130] >       "uid": {
	I0916 11:18:24.253424   42145 command_runner.go:130] >         "value": "0"
	I0916 11:18:24.253428   42145 command_runner.go:130] >       },
	I0916 11:18:24.253434   42145 command_runner.go:130] >       "username": "",
	I0916 11:18:24.253438   42145 command_runner.go:130] >       "spec": null,
	I0916 11:18:24.253445   42145 command_runner.go:130] >       "pinned": false
	I0916 11:18:24.253452   42145 command_runner.go:130] >     },
	I0916 11:18:24.253460   42145 command_runner.go:130] >     {
	I0916 11:18:24.253470   42145 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0916 11:18:24.253479   42145 command_runner.go:130] >       "repoTags": [
	I0916 11:18:24.253485   42145 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0916 11:18:24.253493   42145 command_runner.go:130] >       ],
	I0916 11:18:24.253499   42145 command_runner.go:130] >       "repoDigests": [
	I0916 11:18:24.253513   42145 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0916 11:18:24.253531   42145 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0916 11:18:24.253541   42145 command_runner.go:130] >       ],
	I0916 11:18:24.253548   42145 command_runner.go:130] >       "size": "742080",
	I0916 11:18:24.253556   42145 command_runner.go:130] >       "uid": {
	I0916 11:18:24.253566   42145 command_runner.go:130] >         "value": "65535"
	I0916 11:18:24.253572   42145 command_runner.go:130] >       },
	I0916 11:18:24.253577   42145 command_runner.go:130] >       "username": "",
	I0916 11:18:24.253583   42145 command_runner.go:130] >       "spec": null,
	I0916 11:18:24.253587   42145 command_runner.go:130] >       "pinned": true
	I0916 11:18:24.253590   42145 command_runner.go:130] >     }
	I0916 11:18:24.253595   42145 command_runner.go:130] >   ]
	I0916 11:18:24.253599   42145 command_runner.go:130] > }
	I0916 11:18:24.253716   42145 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 11:18:24.253727   42145 cache_images.go:84] Images are preloaded, skipping loading
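[Editor's illustrative sketch, not minikube source code: the preload check above is driven by the `sudo crictl images --output json` payload logged twice in this section. The snippet below shows, under that assumption, how such a payload could be decoded and compared against a wanted tag list in Go. The struct `imageList` and helper `hasPreloadedImages` are hypothetical names; the JSON field names (id, repoTags, repoDigests, size, pinned) mirror the output shown in the log.]

	// hasPreloadedImages decodes a crictl-style image listing and reports
	// whether every wanted repo tag is already present on the node.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	// imageList matches the shape of the `crictl images --output json` payload above.
	type imageList struct {
		Images []struct {
			ID          string   `json:"id"`
			RepoTags    []string `json:"repoTags"`
			RepoDigests []string `json:"repoDigests"`
			Size        string   `json:"size"`
			Pinned      bool     `json:"pinned"`
		} `json:"images"`
	}

	func hasPreloadedImages(raw []byte, wanted []string) (bool, error) {
		var list imageList
		if err := json.Unmarshal(raw, &list); err != nil {
			return false, err
		}
		have := map[string]bool{}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				have[tag] = true
			}
		}
		for _, tag := range wanted {
			if !have[tag] {
				return false, nil
			}
		}
		return true, nil
	}

	func main() {
		// Minimal example payload in the same shape as the log output above.
		raw := []byte(`{"images":[{"id":"873ed751","repoTags":["registry.k8s.io/pause:3.10"],"repoDigests":[],"size":"742080","pinned":true}]}`)
		ok, err := hasPreloadedImages(raw, []string{"registry.k8s.io/pause:3.10"})
		fmt.Println(ok, err)
	}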
	I0916 11:18:24.253734   42145 kubeadm.go:934] updating node { 192.168.39.32 8443 v1.31.1 crio true true} ...
	I0916 11:18:24.253865   42145 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-736061 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.32
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-736061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
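[Editor's illustrative sketch, not minikube source code: the kubelet systemd drop-in logged above is parameterized by the node's Kubernetes version, hostname override, and node IP, all of which appear in the log entry. Under that assumption, the sketch below renders an equivalent unit file with text/template; the template text and the `nodeSettings` struct are illustrative names only.]

	// Render a kubelet systemd drop-in like the one logged above.
	package main

	import (
		"os"
		"text/template"
	)

	type nodeSettings struct {
		KubernetesVersion string
		HostnameOverride  string
		NodeIP            string
	}

	const kubeletUnit = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.HostnameOverride}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		t := template.Must(template.New("kubelet").Parse(kubeletUnit))
		// Values taken from the log entry above.
		_ = t.Execute(os.Stdout, nodeSettings{
			KubernetesVersion: "v1.31.1",
			HostnameOverride:  "multinode-736061",
			NodeIP:            "192.168.39.32",
		})
	}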
	I0916 11:18:24.253949   42145 ssh_runner.go:195] Run: crio config
	I0916 11:18:24.296141   42145 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0916 11:18:24.296174   42145 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0916 11:18:24.296185   42145 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0916 11:18:24.296190   42145 command_runner.go:130] > #
	I0916 11:18:24.296202   42145 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0916 11:18:24.296212   42145 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0916 11:18:24.296223   42145 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0916 11:18:24.296240   42145 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0916 11:18:24.296250   42145 command_runner.go:130] > # reload'.
	I0916 11:18:24.296262   42145 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0916 11:18:24.296274   42145 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0916 11:18:24.296285   42145 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0916 11:18:24.296297   42145 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0916 11:18:24.296305   42145 command_runner.go:130] > [crio]
	I0916 11:18:24.296314   42145 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0916 11:18:24.296326   42145 command_runner.go:130] > # containers images, in this directory.
	I0916 11:18:24.296335   42145 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0916 11:18:24.296352   42145 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0916 11:18:24.296378   42145 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0916 11:18:24.296394   42145 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0916 11:18:24.296402   42145 command_runner.go:130] > # imagestore = ""
	I0916 11:18:24.296415   42145 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0916 11:18:24.296428   42145 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0916 11:18:24.296437   42145 command_runner.go:130] > storage_driver = "overlay"
	I0916 11:18:24.296447   42145 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0916 11:18:24.296460   42145 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0916 11:18:24.296471   42145 command_runner.go:130] > storage_option = [
	I0916 11:18:24.296479   42145 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0916 11:18:24.296487   42145 command_runner.go:130] > ]
	I0916 11:18:24.296496   42145 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0916 11:18:24.296509   42145 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0916 11:18:24.296520   42145 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0916 11:18:24.296530   42145 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0916 11:18:24.296543   42145 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0916 11:18:24.296553   42145 command_runner.go:130] > # always happen on a node reboot
	I0916 11:18:24.296561   42145 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0916 11:18:24.296579   42145 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0916 11:18:24.296592   42145 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0916 11:18:24.296600   42145 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0916 11:18:24.296611   42145 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0916 11:18:24.296627   42145 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0916 11:18:24.296640   42145 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0916 11:18:24.296646   42145 command_runner.go:130] > # internal_wipe = true
	I0916 11:18:24.296658   42145 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0916 11:18:24.296670   42145 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0916 11:18:24.296677   42145 command_runner.go:130] > # internal_repair = false
	I0916 11:18:24.296687   42145 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0916 11:18:24.296700   42145 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0916 11:18:24.296713   42145 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0916 11:18:24.296725   42145 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0916 11:18:24.296738   42145 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0916 11:18:24.296747   42145 command_runner.go:130] > [crio.api]
	I0916 11:18:24.296758   42145 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0916 11:18:24.296767   42145 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0916 11:18:24.296779   42145 command_runner.go:130] > # IP address on which the stream server will listen.
	I0916 11:18:24.296794   42145 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0916 11:18:24.296808   42145 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0916 11:18:24.296818   42145 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0916 11:18:24.296824   42145 command_runner.go:130] > # stream_port = "0"
	I0916 11:18:24.296837   42145 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0916 11:18:24.296851   42145 command_runner.go:130] > # stream_enable_tls = false
	I0916 11:18:24.296863   42145 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0916 11:18:24.296874   42145 command_runner.go:130] > # stream_idle_timeout = ""
	I0916 11:18:24.296887   42145 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0916 11:18:24.296906   42145 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0916 11:18:24.296915   42145 command_runner.go:130] > # minutes.
	I0916 11:18:24.296923   42145 command_runner.go:130] > # stream_tls_cert = ""
	I0916 11:18:24.296934   42145 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0916 11:18:24.296943   42145 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0916 11:18:24.296954   42145 command_runner.go:130] > # stream_tls_key = ""
	I0916 11:18:24.296963   42145 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0916 11:18:24.296972   42145 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0916 11:18:24.296991   42145 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0916 11:18:24.297002   42145 command_runner.go:130] > # stream_tls_ca = ""
	I0916 11:18:24.297015   42145 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0916 11:18:24.297025   42145 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0916 11:18:24.297037   42145 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0916 11:18:24.297049   42145 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0916 11:18:24.297062   42145 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0916 11:18:24.297074   42145 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0916 11:18:24.297084   42145 command_runner.go:130] > [crio.runtime]
	I0916 11:18:24.297094   42145 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0916 11:18:24.297107   42145 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0916 11:18:24.297117   42145 command_runner.go:130] > # "nofile=1024:2048"
	I0916 11:18:24.297140   42145 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0916 11:18:24.297151   42145 command_runner.go:130] > # default_ulimits = [
	I0916 11:18:24.297157   42145 command_runner.go:130] > # ]
	I0916 11:18:24.297165   42145 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0916 11:18:24.297174   42145 command_runner.go:130] > # no_pivot = false
	I0916 11:18:24.297184   42145 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0916 11:18:24.297198   42145 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0916 11:18:24.297209   42145 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0916 11:18:24.297225   42145 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0916 11:18:24.297243   42145 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0916 11:18:24.297257   42145 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0916 11:18:24.297267   42145 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0916 11:18:24.297278   42145 command_runner.go:130] > # Cgroup setting for conmon
	I0916 11:18:24.297291   42145 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0916 11:18:24.297300   42145 command_runner.go:130] > conmon_cgroup = "pod"
	I0916 11:18:24.297312   42145 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0916 11:18:24.297323   42145 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0916 11:18:24.297337   42145 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0916 11:18:24.297346   42145 command_runner.go:130] > conmon_env = [
	I0916 11:18:24.297356   42145 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0916 11:18:24.297365   42145 command_runner.go:130] > ]
	I0916 11:18:24.297374   42145 command_runner.go:130] > # Additional environment variables to set for all the
	I0916 11:18:24.297384   42145 command_runner.go:130] > # containers. These are overridden if set in the
	I0916 11:18:24.297393   42145 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0916 11:18:24.297402   42145 command_runner.go:130] > # default_env = [
	I0916 11:18:24.297408   42145 command_runner.go:130] > # ]
	I0916 11:18:24.297419   42145 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0916 11:18:24.297435   42145 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0916 11:18:24.297445   42145 command_runner.go:130] > # selinux = false
	I0916 11:18:24.297456   42145 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0916 11:18:24.297470   42145 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0916 11:18:24.297483   42145 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0916 11:18:24.297493   42145 command_runner.go:130] > # seccomp_profile = ""
	I0916 11:18:24.297504   42145 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0916 11:18:24.297516   42145 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0916 11:18:24.297528   42145 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0916 11:18:24.297537   42145 command_runner.go:130] > # which might increase security.
	I0916 11:18:24.297545   42145 command_runner.go:130] > # This option is currently deprecated,
	I0916 11:18:24.297558   42145 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0916 11:18:24.297570   42145 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0916 11:18:24.297583   42145 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0916 11:18:24.297597   42145 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0916 11:18:24.297609   42145 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0916 11:18:24.297622   42145 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0916 11:18:24.297634   42145 command_runner.go:130] > # This option supports live configuration reload.
	I0916 11:18:24.297641   42145 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0916 11:18:24.297659   42145 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0916 11:18:24.297670   42145 command_runner.go:130] > # the cgroup blockio controller.
	I0916 11:18:24.297680   42145 command_runner.go:130] > # blockio_config_file = ""
	I0916 11:18:24.297689   42145 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0916 11:18:24.297699   42145 command_runner.go:130] > # blockio parameters.
	I0916 11:18:24.297707   42145 command_runner.go:130] > # blockio_reload = false
	I0916 11:18:24.297723   42145 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0916 11:18:24.297731   42145 command_runner.go:130] > # irqbalance daemon.
	I0916 11:18:24.297739   42145 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0916 11:18:24.297752   42145 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0916 11:18:24.297766   42145 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0916 11:18:24.297780   42145 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0916 11:18:24.297792   42145 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0916 11:18:24.297805   42145 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0916 11:18:24.297815   42145 command_runner.go:130] > # This option supports live configuration reload.
	I0916 11:18:24.297822   42145 command_runner.go:130] > # rdt_config_file = ""
	I0916 11:18:24.297831   42145 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0916 11:18:24.297842   42145 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0916 11:18:24.297869   42145 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0916 11:18:24.297880   42145 command_runner.go:130] > # separate_pull_cgroup = ""
	I0916 11:18:24.297897   42145 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0916 11:18:24.297910   42145 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0916 11:18:24.297919   42145 command_runner.go:130] > # will be added.
	I0916 11:18:24.297925   42145 command_runner.go:130] > # default_capabilities = [
	I0916 11:18:24.297934   42145 command_runner.go:130] > # 	"CHOWN",
	I0916 11:18:24.297941   42145 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0916 11:18:24.297950   42145 command_runner.go:130] > # 	"FSETID",
	I0916 11:18:24.297957   42145 command_runner.go:130] > # 	"FOWNER",
	I0916 11:18:24.297968   42145 command_runner.go:130] > # 	"SETGID",
	I0916 11:18:24.297978   42145 command_runner.go:130] > # 	"SETUID",
	I0916 11:18:24.297985   42145 command_runner.go:130] > # 	"SETPCAP",
	I0916 11:18:24.297995   42145 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0916 11:18:24.298000   42145 command_runner.go:130] > # 	"KILL",
	I0916 11:18:24.298007   42145 command_runner.go:130] > # ]
	I0916 11:18:24.298019   42145 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0916 11:18:24.298033   42145 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0916 11:18:24.298044   42145 command_runner.go:130] > # add_inheritable_capabilities = false
	I0916 11:18:24.298058   42145 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0916 11:18:24.298071   42145 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0916 11:18:24.298081   42145 command_runner.go:130] > default_sysctls = [
	I0916 11:18:24.298092   42145 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0916 11:18:24.298105   42145 command_runner.go:130] > ]
	I0916 11:18:24.298113   42145 command_runner.go:130] > # List of devices on the host that a
	I0916 11:18:24.298125   42145 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0916 11:18:24.298134   42145 command_runner.go:130] > # allowed_devices = [
	I0916 11:18:24.298140   42145 command_runner.go:130] > # 	"/dev/fuse",
	I0916 11:18:24.298148   42145 command_runner.go:130] > # ]
	I0916 11:18:24.298155   42145 command_runner.go:130] > # List of additional devices. specified as
	I0916 11:18:24.298171   42145 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0916 11:18:24.298183   42145 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0916 11:18:24.298195   42145 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0916 11:18:24.298205   42145 command_runner.go:130] > # additional_devices = [
	I0916 11:18:24.298211   42145 command_runner.go:130] > # ]
	I0916 11:18:24.298220   42145 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0916 11:18:24.298229   42145 command_runner.go:130] > # cdi_spec_dirs = [
	I0916 11:18:24.298236   42145 command_runner.go:130] > # 	"/etc/cdi",
	I0916 11:18:24.298244   42145 command_runner.go:130] > # 	"/var/run/cdi",
	I0916 11:18:24.298250   42145 command_runner.go:130] > # ]
	I0916 11:18:24.298262   42145 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0916 11:18:24.298275   42145 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0916 11:18:24.298284   42145 command_runner.go:130] > # Defaults to false.
	I0916 11:18:24.298297   42145 command_runner.go:130] > # device_ownership_from_security_context = false
	I0916 11:18:24.298311   42145 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0916 11:18:24.298321   42145 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0916 11:18:24.298330   42145 command_runner.go:130] > # hooks_dir = [
	I0916 11:18:24.298337   42145 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0916 11:18:24.298345   42145 command_runner.go:130] > # ]
	I0916 11:18:24.298355   42145 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0916 11:18:24.298369   42145 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0916 11:18:24.298381   42145 command_runner.go:130] > # its default mounts from the following two files:
	I0916 11:18:24.298390   42145 command_runner.go:130] > #
	I0916 11:18:24.298403   42145 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0916 11:18:24.298412   42145 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0916 11:18:24.298422   42145 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0916 11:18:24.298429   42145 command_runner.go:130] > #
	I0916 11:18:24.298438   42145 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0916 11:18:24.298453   42145 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0916 11:18:24.298466   42145 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0916 11:18:24.298476   42145 command_runner.go:130] > #      only add mounts it finds in this file.
	I0916 11:18:24.298481   42145 command_runner.go:130] > #
	I0916 11:18:24.298490   42145 command_runner.go:130] > # default_mounts_file = ""
	I0916 11:18:24.298498   42145 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0916 11:18:24.298511   42145 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0916 11:18:24.298527   42145 command_runner.go:130] > pids_limit = 1024
	I0916 11:18:24.298541   42145 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0916 11:18:24.298553   42145 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0916 11:18:24.298565   42145 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0916 11:18:24.298581   42145 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0916 11:18:24.298590   42145 command_runner.go:130] > # log_size_max = -1
	I0916 11:18:24.298601   42145 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0916 11:18:24.298611   42145 command_runner.go:130] > # log_to_journald = false
	I0916 11:18:24.298621   42145 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0916 11:18:24.298632   42145 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0916 11:18:24.298645   42145 command_runner.go:130] > # Path to directory for container attach sockets.
	I0916 11:18:24.298658   42145 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0916 11:18:24.298669   42145 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0916 11:18:24.298678   42145 command_runner.go:130] > # bind_mount_prefix = ""
	I0916 11:18:24.298690   42145 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0916 11:18:24.298699   42145 command_runner.go:130] > # read_only = false
	I0916 11:18:24.298710   42145 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0916 11:18:24.298722   42145 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0916 11:18:24.298733   42145 command_runner.go:130] > # live configuration reload.
	I0916 11:18:24.298742   42145 command_runner.go:130] > # log_level = "info"
	I0916 11:18:24.298753   42145 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0916 11:18:24.298764   42145 command_runner.go:130] > # This option supports live configuration reload.
	I0916 11:18:24.298773   42145 command_runner.go:130] > # log_filter = ""
	I0916 11:18:24.298785   42145 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0916 11:18:24.298797   42145 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0916 11:18:24.298805   42145 command_runner.go:130] > # separated by comma.
	I0916 11:18:24.298817   42145 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0916 11:18:24.298826   42145 command_runner.go:130] > # uid_mappings = ""
	I0916 11:18:24.298837   42145 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0916 11:18:24.298849   42145 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0916 11:18:24.298858   42145 command_runner.go:130] > # separated by comma.
	I0916 11:18:24.298873   42145 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0916 11:18:24.298883   42145 command_runner.go:130] > # gid_mappings = ""
	I0916 11:18:24.298901   42145 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0916 11:18:24.298914   42145 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0916 11:18:24.298926   42145 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0916 11:18:24.298946   42145 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0916 11:18:24.298955   42145 command_runner.go:130] > # minimum_mappable_uid = -1
	I0916 11:18:24.298966   42145 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0916 11:18:24.298978   42145 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0916 11:18:24.298990   42145 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0916 11:18:24.299006   42145 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0916 11:18:24.299015   42145 command_runner.go:130] > # minimum_mappable_gid = -1
	I0916 11:18:24.299027   42145 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0916 11:18:24.299041   42145 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0916 11:18:24.299053   42145 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0916 11:18:24.299064   42145 command_runner.go:130] > # ctr_stop_timeout = 30
	I0916 11:18:24.299076   42145 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0916 11:18:24.299089   42145 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0916 11:18:24.299099   42145 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0916 11:18:24.299109   42145 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0916 11:18:24.299117   42145 command_runner.go:130] > drop_infra_ctr = false
	I0916 11:18:24.299126   42145 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0916 11:18:24.299136   42145 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0916 11:18:24.299149   42145 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0916 11:18:24.299157   42145 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0916 11:18:24.299170   42145 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0916 11:18:24.299182   42145 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0916 11:18:24.299193   42145 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0916 11:18:24.299205   42145 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0916 11:18:24.299212   42145 command_runner.go:130] > # shared_cpuset = ""
	I0916 11:18:24.299224   42145 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0916 11:18:24.299234   42145 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0916 11:18:24.299245   42145 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0916 11:18:24.299258   42145 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0916 11:18:24.299269   42145 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0916 11:18:24.299280   42145 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0916 11:18:24.299293   42145 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0916 11:18:24.299303   42145 command_runner.go:130] > # enable_criu_support = false
	I0916 11:18:24.299314   42145 command_runner.go:130] > # Enable/disable the generation of the container,
	I0916 11:18:24.299325   42145 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0916 11:18:24.299338   42145 command_runner.go:130] > # enable_pod_events = false
	I0916 11:18:24.299353   42145 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0916 11:18:24.299364   42145 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0916 11:18:24.299374   42145 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0916 11:18:24.299381   42145 command_runner.go:130] > # default_runtime = "runc"
	I0916 11:18:24.299391   42145 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0916 11:18:24.299405   42145 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0916 11:18:24.299423   42145 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0916 11:18:24.299432   42145 command_runner.go:130] > # creation as a file is not desired either.
	I0916 11:18:24.299446   42145 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0916 11:18:24.299456   42145 command_runner.go:130] > # the hostname is being managed dynamically.
	I0916 11:18:24.299462   42145 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0916 11:18:24.299469   42145 command_runner.go:130] > # ]
	I0916 11:18:24.299478   42145 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0916 11:18:24.299490   42145 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0916 11:18:24.299501   42145 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0916 11:18:24.299511   42145 command_runner.go:130] > # Each entry in the table should follow the format:
	I0916 11:18:24.299518   42145 command_runner.go:130] > #
	I0916 11:18:24.299525   42145 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0916 11:18:24.299535   42145 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0916 11:18:24.299595   42145 command_runner.go:130] > # runtime_type = "oci"
	I0916 11:18:24.299610   42145 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0916 11:18:24.299620   42145 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0916 11:18:24.299629   42145 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0916 11:18:24.299636   42145 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0916 11:18:24.299644   42145 command_runner.go:130] > # monitor_env = []
	I0916 11:18:24.299654   42145 command_runner.go:130] > # privileged_without_host_devices = false
	I0916 11:18:24.299664   42145 command_runner.go:130] > # allowed_annotations = []
	I0916 11:18:24.299675   42145 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0916 11:18:24.299683   42145 command_runner.go:130] > # Where:
	I0916 11:18:24.299694   42145 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0916 11:18:24.299706   42145 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0916 11:18:24.299717   42145 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0916 11:18:24.299726   42145 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0916 11:18:24.299738   42145 command_runner.go:130] > #   in $PATH.
	I0916 11:18:24.299746   42145 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0916 11:18:24.299756   42145 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0916 11:18:24.299768   42145 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0916 11:18:24.299781   42145 command_runner.go:130] > #   state.
	I0916 11:18:24.299795   42145 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0916 11:18:24.299806   42145 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0916 11:18:24.299818   42145 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0916 11:18:24.299831   42145 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0916 11:18:24.299843   42145 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0916 11:18:24.299856   42145 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0916 11:18:24.299867   42145 command_runner.go:130] > #   The currently recognized values are:
	I0916 11:18:24.299880   42145 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0916 11:18:24.299898   42145 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0916 11:18:24.299909   42145 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0916 11:18:24.299919   42145 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0916 11:18:24.299932   42145 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0916 11:18:24.299943   42145 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0916 11:18:24.299957   42145 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0916 11:18:24.299970   42145 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0916 11:18:24.299983   42145 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0916 11:18:24.299994   42145 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0916 11:18:24.300002   42145 command_runner.go:130] > #   deprecated option "conmon".
	I0916 11:18:24.300014   42145 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0916 11:18:24.300024   42145 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0916 11:18:24.300036   42145 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0916 11:18:24.300047   42145 command_runner.go:130] > #   should be moved to the container's cgroup
	I0916 11:18:24.300060   42145 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0916 11:18:24.300071   42145 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0916 11:18:24.300084   42145 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0916 11:18:24.300093   42145 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0916 11:18:24.300101   42145 command_runner.go:130] > #
	I0916 11:18:24.300107   42145 command_runner.go:130] > # Using the seccomp notifier feature:
	I0916 11:18:24.300114   42145 command_runner.go:130] > #
	I0916 11:18:24.300123   42145 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0916 11:18:24.300134   42145 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0916 11:18:24.300142   42145 command_runner.go:130] > #
	I0916 11:18:24.300152   42145 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0916 11:18:24.300169   42145 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0916 11:18:24.300176   42145 command_runner.go:130] > #
	I0916 11:18:24.300186   42145 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0916 11:18:24.300193   42145 command_runner.go:130] > # feature.
	I0916 11:18:24.300198   42145 command_runner.go:130] > #
	I0916 11:18:24.300209   42145 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0916 11:18:24.300220   42145 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0916 11:18:24.300235   42145 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0916 11:18:24.300246   42145 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0916 11:18:24.300256   42145 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0916 11:18:24.300264   42145 command_runner.go:130] > #
	I0916 11:18:24.300272   42145 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0916 11:18:24.300283   42145 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0916 11:18:24.300290   42145 command_runner.go:130] > #
	I0916 11:18:24.300299   42145 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0916 11:18:24.300311   42145 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0916 11:18:24.300319   42145 command_runner.go:130] > #
	I0916 11:18:24.300332   42145 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0916 11:18:24.300344   42145 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0916 11:18:24.300353   42145 command_runner.go:130] > # limitation.
	I0916 11:18:24.300363   42145 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0916 11:18:24.300372   42145 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0916 11:18:24.300377   42145 command_runner.go:130] > runtime_type = "oci"
	I0916 11:18:24.300381   42145 command_runner.go:130] > runtime_root = "/run/runc"
	I0916 11:18:24.300385   42145 command_runner.go:130] > runtime_config_path = ""
	I0916 11:18:24.300390   42145 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0916 11:18:24.300396   42145 command_runner.go:130] > monitor_cgroup = "pod"
	I0916 11:18:24.300400   42145 command_runner.go:130] > monitor_exec_cgroup = ""
	I0916 11:18:24.300405   42145 command_runner.go:130] > monitor_env = [
	I0916 11:18:24.300411   42145 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0916 11:18:24.300416   42145 command_runner.go:130] > ]
	I0916 11:18:24.300420   42145 command_runner.go:130] > privileged_without_host_devices = false
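	For illustration only: a minimal pod-side sketch of the seccomp notifier behaviour described in the comments above. The annotation key, the "stop" action, and the restartPolicy requirement come from those comments; the pod name, container name, and image are hypothetical, and the runtime handler in use would still need "io.kubernetes.cri-o.seccompNotifierAction" in its allowed_annotations.
	
	apiVersion: v1
	kind: Pod
	metadata:
	  name: seccomp-notifier-demo                     # hypothetical pod name
	  annotations:
	    # ask CRI-O to terminate the workload ~5s after a blocked syscall is observed
	    io.kubernetes.cri-o.seccompNotifierAction: "stop"
	spec:
	  restartPolicy: Never                            # required, otherwise the kubelet restarts the container
	  containers:
	  - name: app                                     # hypothetical container name
	    image: busybox:1.36                           # hypothetical image
	    command: ["sleep", "3600"]
	    securityContext:
	      seccompProfile:
	        type: RuntimeDefault                      # a seccomp profile must be selected for CRI-O to modify it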
	I0916 11:18:24.300428   42145 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0916 11:18:24.300436   42145 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0916 11:18:24.300442   42145 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0916 11:18:24.300455   42145 command_runner.go:130] > # Each workload has a name, an activation_annotation, an annotation_prefix, and a set of resources it supports mutating.
	I0916 11:18:24.300469   42145 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0916 11:18:24.300480   42145 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0916 11:18:24.300496   42145 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0916 11:18:24.300517   42145 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0916 11:18:24.300529   42145 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0916 11:18:24.300542   42145 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0916 11:18:24.300550   42145 command_runner.go:130] > # Example:
	I0916 11:18:24.300558   42145 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0916 11:18:24.300568   42145 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0916 11:18:24.300579   42145 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0916 11:18:24.300590   42145 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0916 11:18:24.300596   42145 command_runner.go:130] > # cpuset = 0
	I0916 11:18:24.300605   42145 command_runner.go:130] > # cpushares = "0-1"
	I0916 11:18:24.300611   42145 command_runner.go:130] > # Where:
	I0916 11:18:24.300621   42145 command_runner.go:130] > # The workload name is workload-type.
	I0916 11:18:24.300635   42145 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0916 11:18:24.300646   42145 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0916 11:18:24.300655   42145 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0916 11:18:24.300665   42145 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0916 11:18:24.300673   42145 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
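	For illustration only: a hedged sketch of a pod opting into the example workload above. The activation annotation (io.crio/workload) and annotation_prefix (io.crio.workload-type) are taken from the example in the comments; the pod name, container name, image, and cpushares value are made up. The comments show two slightly different per-container forms; this sketch follows the $annotation_prefix.$resource/$ctrName form and should be checked against the CRI-O version in use.
	
	apiVersion: v1
	kind: Pod
	metadata:
	  name: workload-demo                        # hypothetical pod name
	  annotations:
	    io.crio/workload: ""                     # activation annotation; key only, value is ignored
	    # per-container override for the "cpushares" resource of container "app" (illustrative value)
	    io.crio.workload-type.cpushares/app: "512"
	spec:
	  containers:
	  - name: app                                # hypothetical container name
	    image: busybox:1.36                      # hypothetical image
	    command: ["sleep", "3600"]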
	I0916 11:18:24.300678   42145 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0916 11:18:24.300686   42145 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0916 11:18:24.300693   42145 command_runner.go:130] > # Default value is set to true
	I0916 11:18:24.300698   42145 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0916 11:18:24.300705   42145 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0916 11:18:24.300709   42145 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0916 11:18:24.300716   42145 command_runner.go:130] > # Default value is set to 'false'
	I0916 11:18:24.300720   42145 command_runner.go:130] > # disable_hostport_mapping = false
	I0916 11:18:24.300726   42145 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0916 11:18:24.300729   42145 command_runner.go:130] > #
	I0916 11:18:24.300736   42145 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0916 11:18:24.300742   42145 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0916 11:18:24.300747   42145 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0916 11:18:24.300753   42145 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0916 11:18:24.300758   42145 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0916 11:18:24.300761   42145 command_runner.go:130] > [crio.image]
	I0916 11:18:24.300766   42145 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0916 11:18:24.300770   42145 command_runner.go:130] > # default_transport = "docker://"
	I0916 11:18:24.300776   42145 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0916 11:18:24.300785   42145 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0916 11:18:24.300790   42145 command_runner.go:130] > # global_auth_file = ""
	I0916 11:18:24.300797   42145 command_runner.go:130] > # The image used to instantiate infra containers.
	I0916 11:18:24.300803   42145 command_runner.go:130] > # This option supports live configuration reload.
	I0916 11:18:24.300810   42145 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0916 11:18:24.300820   42145 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0916 11:18:24.300828   42145 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0916 11:18:24.300839   42145 command_runner.go:130] > # This option supports live configuration reload.
	I0916 11:18:24.300845   42145 command_runner.go:130] > # pause_image_auth_file = ""
	I0916 11:18:24.300853   42145 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0916 11:18:24.300862   42145 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0916 11:18:24.300870   42145 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0916 11:18:24.300879   42145 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0916 11:18:24.300886   42145 command_runner.go:130] > # pause_command = "/pause"
	I0916 11:18:24.300901   42145 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0916 11:18:24.300914   42145 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0916 11:18:24.300925   42145 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0916 11:18:24.300932   42145 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0916 11:18:24.300940   42145 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0916 11:18:24.300949   42145 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0916 11:18:24.300955   42145 command_runner.go:130] > # pinned_images = [
	I0916 11:18:24.300958   42145 command_runner.go:130] > # ]
	I0916 11:18:24.300966   42145 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0916 11:18:24.300973   42145 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0916 11:18:24.300982   42145 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0916 11:18:24.300990   42145 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0916 11:18:24.300997   42145 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0916 11:18:24.301003   42145 command_runner.go:130] > # signature_policy = ""
	I0916 11:18:24.301008   42145 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0916 11:18:24.301017   42145 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0916 11:18:24.301025   42145 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0916 11:18:24.301031   42145 command_runner.go:130] > # or the concatenated path is nonexistent, then the signature_policy or system
	I0916 11:18:24.301042   42145 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0916 11:18:24.301047   42145 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0916 11:18:24.301055   42145 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0916 11:18:24.301065   42145 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0916 11:18:24.301070   42145 command_runner.go:130] > # changing them here.
	I0916 11:18:24.301075   42145 command_runner.go:130] > # insecure_registries = [
	I0916 11:18:24.301080   42145 command_runner.go:130] > # ]
	I0916 11:18:24.301086   42145 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0916 11:18:24.301093   42145 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0916 11:18:24.301097   42145 command_runner.go:130] > # image_volumes = "mkdir"
	I0916 11:18:24.301104   42145 command_runner.go:130] > # Temporary directory to use for storing big files
	I0916 11:18:24.301109   42145 command_runner.go:130] > # big_files_temporary_dir = ""
	I0916 11:18:24.301117   42145 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0916 11:18:24.301123   42145 command_runner.go:130] > # CNI plugins.
	I0916 11:18:24.301144   42145 command_runner.go:130] > [crio.network]
	I0916 11:18:24.301156   42145 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0916 11:18:24.301162   42145 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0916 11:18:24.301168   42145 command_runner.go:130] > # cni_default_network = ""
	I0916 11:18:24.301174   42145 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0916 11:18:24.301180   42145 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0916 11:18:24.301186   42145 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0916 11:18:24.301192   42145 command_runner.go:130] > # plugin_dirs = [
	I0916 11:18:24.301196   42145 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0916 11:18:24.301201   42145 command_runner.go:130] > # ]
	I0916 11:18:24.301207   42145 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0916 11:18:24.301214   42145 command_runner.go:130] > [crio.metrics]
	I0916 11:18:24.301219   42145 command_runner.go:130] > # Globally enable or disable metrics support.
	I0916 11:18:24.301225   42145 command_runner.go:130] > enable_metrics = true
	I0916 11:18:24.301229   42145 command_runner.go:130] > # Specify enabled metrics collectors.
	I0916 11:18:24.301234   42145 command_runner.go:130] > # Per default all metrics are enabled.
	I0916 11:18:24.301242   42145 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0916 11:18:24.301250   42145 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0916 11:18:24.301257   42145 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0916 11:18:24.301261   42145 command_runner.go:130] > # metrics_collectors = [
	I0916 11:18:24.301267   42145 command_runner.go:130] > # 	"operations",
	I0916 11:18:24.301272   42145 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0916 11:18:24.301280   42145 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0916 11:18:24.301286   42145 command_runner.go:130] > # 	"operations_errors",
	I0916 11:18:24.301290   42145 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0916 11:18:24.301296   42145 command_runner.go:130] > # 	"image_pulls_by_name",
	I0916 11:18:24.301300   42145 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0916 11:18:24.301306   42145 command_runner.go:130] > # 	"image_pulls_failures",
	I0916 11:18:24.301311   42145 command_runner.go:130] > # 	"image_pulls_successes",
	I0916 11:18:24.301317   42145 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0916 11:18:24.301321   42145 command_runner.go:130] > # 	"image_layer_reuse",
	I0916 11:18:24.301327   42145 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0916 11:18:24.301331   42145 command_runner.go:130] > # 	"containers_oom_total",
	I0916 11:18:24.301340   42145 command_runner.go:130] > # 	"containers_oom",
	I0916 11:18:24.301344   42145 command_runner.go:130] > # 	"processes_defunct",
	I0916 11:18:24.301348   42145 command_runner.go:130] > # 	"operations_total",
	I0916 11:18:24.301354   42145 command_runner.go:130] > # 	"operations_latency_seconds",
	I0916 11:18:24.301359   42145 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0916 11:18:24.301365   42145 command_runner.go:130] > # 	"operations_errors_total",
	I0916 11:18:24.301369   42145 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0916 11:18:24.301379   42145 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0916 11:18:24.301384   42145 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0916 11:18:24.301390   42145 command_runner.go:130] > # 	"image_pulls_success_total",
	I0916 11:18:24.301394   42145 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0916 11:18:24.301400   42145 command_runner.go:130] > # 	"containers_oom_count_total",
	I0916 11:18:24.301407   42145 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0916 11:18:24.301412   42145 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0916 11:18:24.301417   42145 command_runner.go:130] > # ]
	I0916 11:18:24.301422   42145 command_runner.go:130] > # The port on which the metrics server will listen.
	I0916 11:18:24.301428   42145 command_runner.go:130] > # metrics_port = 9090
	I0916 11:18:24.301434   42145 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0916 11:18:24.301439   42145 command_runner.go:130] > # metrics_socket = ""
	I0916 11:18:24.301445   42145 command_runner.go:130] > # The certificate for the secure metrics server.
	I0916 11:18:24.301454   42145 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0916 11:18:24.301461   42145 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0916 11:18:24.301468   42145 command_runner.go:130] > # certificate on any modification event.
	I0916 11:18:24.301472   42145 command_runner.go:130] > # metrics_cert = ""
	I0916 11:18:24.301479   42145 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0916 11:18:24.301484   42145 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0916 11:18:24.301490   42145 command_runner.go:130] > # metrics_key = ""
	I0916 11:18:24.301495   42145 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0916 11:18:24.301501   42145 command_runner.go:130] > [crio.tracing]
	I0916 11:18:24.301507   42145 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0916 11:18:24.301513   42145 command_runner.go:130] > # enable_tracing = false
	I0916 11:18:24.301519   42145 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0916 11:18:24.301525   42145 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0916 11:18:24.301531   42145 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0916 11:18:24.301538   42145 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0916 11:18:24.301542   42145 command_runner.go:130] > # CRI-O NRI configuration.
	I0916 11:18:24.301547   42145 command_runner.go:130] > [crio.nri]
	I0916 11:18:24.301551   42145 command_runner.go:130] > # Globally enable or disable NRI.
	I0916 11:18:24.301557   42145 command_runner.go:130] > # enable_nri = false
	I0916 11:18:24.301561   42145 command_runner.go:130] > # NRI socket to listen on.
	I0916 11:18:24.301567   42145 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0916 11:18:24.301572   42145 command_runner.go:130] > # NRI plugin directory to use.
	I0916 11:18:24.301577   42145 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0916 11:18:24.301584   42145 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0916 11:18:24.301591   42145 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0916 11:18:24.301599   42145 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0916 11:18:24.301603   42145 command_runner.go:130] > # nri_disable_connections = false
	I0916 11:18:24.301608   42145 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0916 11:18:24.301614   42145 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0916 11:18:24.301620   42145 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0916 11:18:24.301626   42145 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0916 11:18:24.301632   42145 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0916 11:18:24.301637   42145 command_runner.go:130] > [crio.stats]
	I0916 11:18:24.301645   42145 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0916 11:18:24.301652   42145 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0916 11:18:24.301659   42145 command_runner.go:130] > # stats_collection_period = 0
	I0916 11:18:24.301679   42145 command_runner.go:130] ! time="2024-09-16 11:18:24.258275883Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0916 11:18:24.301693   42145 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0916 11:18:24.301762   42145 cni.go:84] Creating CNI manager for ""
	I0916 11:18:24.301773   42145 cni.go:136] multinode detected (2 nodes found), recommending kindnet
	I0916 11:18:24.301788   42145 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 11:18:24.301816   42145 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.32 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-736061 NodeName:multinode-736061 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.32"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.32 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 11:18:24.301977   42145 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.32
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-736061"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.32
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.32"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 11:18:24.302044   42145 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 11:18:24.312660   42145 command_runner.go:130] > kubeadm
	I0916 11:18:24.312686   42145 command_runner.go:130] > kubectl
	I0916 11:18:24.312692   42145 command_runner.go:130] > kubelet
	I0916 11:18:24.312711   42145 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 11:18:24.312767   42145 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 11:18:24.322285   42145 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0916 11:18:24.339367   42145 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 11:18:24.356325   42145 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0916 11:18:24.373415   42145 ssh_runner.go:195] Run: grep 192.168.39.32	control-plane.minikube.internal$ /etc/hosts
	I0916 11:18:24.377536   42145 command_runner.go:130] > 192.168.39.32	control-plane.minikube.internal
	I0916 11:18:24.377711   42145 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:18:24.513348   42145 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:18:24.528490   42145 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061 for IP: 192.168.39.32
	I0916 11:18:24.528520   42145 certs.go:194] generating shared ca certs ...
	I0916 11:18:24.528541   42145 certs.go:226] acquiring lock for ca certs: {Name:mkc5eb18b90c7d501da61b378999fd65b785fd5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:18:24.528723   42145 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key
	I0916 11:18:24.528794   42145 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key
	I0916 11:18:24.528812   42145 certs.go:256] generating profile certs ...
	I0916 11:18:24.528965   42145 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/client.key
	I0916 11:18:24.529015   42145 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/apiserver.key.7afb17c7
	I0916 11:18:24.529050   42145 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/proxy-client.key
	I0916 11:18:24.529060   42145 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 11:18:24.529072   42145 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 11:18:24.529088   42145 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 11:18:24.529101   42145 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 11:18:24.529110   42145 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 11:18:24.529141   42145 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 11:18:24.529163   42145 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 11:18:24.529178   42145 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 11:18:24.529235   42145 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem (1338 bytes)
	W0916 11:18:24.529262   42145 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203_empty.pem, impossibly tiny 0 bytes
	I0916 11:18:24.529271   42145 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 11:18:24.529293   42145 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem (1082 bytes)
	I0916 11:18:24.529316   42145 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem (1123 bytes)
	I0916 11:18:24.529336   42145 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem (1679 bytes)
	I0916 11:18:24.529372   42145 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem (1708 bytes)
	I0916 11:18:24.529399   42145 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem -> /usr/share/ca-certificates/112032.pem
	I0916 11:18:24.529412   42145 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:18:24.529428   42145 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem -> /usr/share/ca-certificates/11203.pem
	I0916 11:18:24.530119   42145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 11:18:24.583336   42145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 11:18:24.607935   42145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 11:18:24.632896   42145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 11:18:24.657218   42145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0916 11:18:24.682162   42145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 11:18:24.706983   42145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 11:18:24.731584   42145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/multinode-736061/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 11:18:24.756189   42145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem --> /usr/share/ca-certificates/112032.pem (1708 bytes)
	I0916 11:18:24.780460   42145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 11:18:24.804275   42145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem --> /usr/share/ca-certificates/11203.pem (1338 bytes)
	I0916 11:18:24.829220   42145 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 11:18:24.846726   42145 ssh_runner.go:195] Run: openssl version
	I0916 11:18:24.852741   42145 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0916 11:18:24.852815   42145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112032.pem && ln -fs /usr/share/ca-certificates/112032.pem /etc/ssl/certs/112032.pem"
	I0916 11:18:24.863966   42145 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112032.pem
	I0916 11:18:24.868527   42145 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112032.pem
	I0916 11:18:24.868559   42145 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112032.pem
	I0916 11:18:24.868608   42145 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112032.pem
	I0916 11:18:24.874472   42145 command_runner.go:130] > 3ec20f2e
	I0916 11:18:24.874550   42145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112032.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 11:18:24.884026   42145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 11:18:24.895248   42145 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:18:24.900151   42145 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 16 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:18:24.900182   42145 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:18:24.900230   42145 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:18:24.906028   42145 command_runner.go:130] > b5213941
	I0916 11:18:24.906086   42145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 11:18:24.915937   42145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11203.pem && ln -fs /usr/share/ca-certificates/11203.pem /etc/ssl/certs/11203.pem"
	I0916 11:18:24.926598   42145 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11203.pem
	I0916 11:18:24.930989   42145 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11203.pem
	I0916 11:18:24.931018   42145 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11203.pem
	I0916 11:18:24.931065   42145 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11203.pem
	I0916 11:18:24.936889   42145 command_runner.go:130] > 51391683
	I0916 11:18:24.937105   42145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11203.pem /etc/ssl/certs/51391683.0"
	I0916 11:18:24.946763   42145 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 11:18:24.951350   42145 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 11:18:24.951385   42145 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0916 11:18:24.951394   42145 command_runner.go:130] > Device: 253,1	Inode: 2101800     Links: 1
	I0916 11:18:24.951404   42145 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 11:18:24.951416   42145 command_runner.go:130] > Access: 2024-09-16 11:12:27.988737865 +0000
	I0916 11:18:24.951431   42145 command_runner.go:130] > Modify: 2024-09-16 11:05:45.018725064 +0000
	I0916 11:18:24.951439   42145 command_runner.go:130] > Change: 2024-09-16 11:05:45.018725064 +0000
	I0916 11:18:24.951444   42145 command_runner.go:130] >  Birth: 2024-09-16 11:05:45.018725064 +0000
	I0916 11:18:24.951500   42145 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0916 11:18:24.957201   42145 command_runner.go:130] > Certificate will not expire
	I0916 11:18:24.957442   42145 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0916 11:18:24.963294   42145 command_runner.go:130] > Certificate will not expire
	I0916 11:18:24.963368   42145 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0916 11:18:24.968977   42145 command_runner.go:130] > Certificate will not expire
	I0916 11:18:24.969197   42145 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0916 11:18:24.974847   42145 command_runner.go:130] > Certificate will not expire
	I0916 11:18:24.974908   42145 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0916 11:18:24.980442   42145 command_runner.go:130] > Certificate will not expire
	I0916 11:18:24.980509   42145 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0916 11:18:24.986162   42145 command_runner.go:130] > Certificate will not expire
	I0916 11:18:24.986232   42145 kubeadm.go:392] StartCluster: {Name:multinode-736061 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:multinode-736061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.32 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.215 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false
metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: S
SHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:18:24.986363   42145 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 11:18:24.986401   42145 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 11:18:25.023401   42145 command_runner.go:130] > 34160c655e5ab02a4a54bd049c7fbf32b5f1f714b64212932d9c2c16e2d4bf25
	I0916 11:18:25.023428   42145 command_runner.go:130] > 35a7839cd57d08487c669b6c86350b4a2f5f3b241a502106911bebe57fc8af36
	I0916 11:18:25.023434   42145 command_runner.go:130] > 87a99d0015cbca7c7d38632c9cf9a1a9a00ce88cbe573c612f87fc9ca64ae309
	I0916 11:18:25.023442   42145 command_runner.go:130] > 2d81e17eebccf4b84f735e1cafbb3e77e8e76abd8f4d62edf876aa5d55fd0b0d
	I0916 11:18:25.023450   42145 command_runner.go:130] > 2e7284c90c8c712246e8ddc247d88f7825f37f3e3c689acda276e3052c41850d
	I0916 11:18:25.023460   42145 command_runner.go:130] > ae1251600e6e87614f76ce47d26bb537d329e9c3726f16afd53b5917cabf6526
	I0916 11:18:25.023469   42145 command_runner.go:130] > 8fa850b5495ffd4ec6303d58bd055944e4fd2d6bd860b6b70273e719497bdf2d
	I0916 11:18:25.023483   42145 command_runner.go:130] > 126fd7058d64d004c8ecb83905bc9e04f7dcd51ba92c467d2ec0175a0aedb8d4
	I0916 11:18:25.023492   42145 command_runner.go:130] > 840a587a0926e82c37a05beb15b8a2038534eaa736ce1cfed1c0eecaa13a5cdd
	I0916 11:18:25.023498   42145 command_runner.go:130] > 02223ab1824986d0b8986be277df48e11a9e023715892ea637bf405f10af7198
	I0916 11:18:25.023503   42145 command_runner.go:130] > 7a89ff755837adc838b2681df4fdb3f0cd2733af338156c540403f460fc59bc0
	I0916 11:18:25.023511   42145 command_runner.go:130] > f8c55edbe2173ff25f684104100b9d6d5b8d80563dadb06b4c24bb06b2bf68ee
	I0916 11:18:25.023517   42145 command_runner.go:130] > b76d5d4ad419a79a8aa910e3cea3029c12e6a90c7750317c5fbe1c40b43aa762
	I0916 11:18:25.023524   42145 command_runner.go:130] > 769a75ad1934a20389b16c528015b4fd90be4bf92209ba2509e2cdf2e57e2a24
	I0916 11:18:25.023532   42145 command_runner.go:130] > d53f9aec7bc35845a9064135b7cf6154a190d2b15edab056cfd8296575fb25ba
	I0916 11:18:25.023547   42145 command_runner.go:130] > ed73e9089f633c6b3f26009f325ee6cfdd62ef60de41cbb639ec4aa8b02b84a7
	I0916 11:18:25.024773   42145 cri.go:89] found id: "34160c655e5ab02a4a54bd049c7fbf32b5f1f714b64212932d9c2c16e2d4bf25"
	I0916 11:18:25.024792   42145 cri.go:89] found id: "35a7839cd57d08487c669b6c86350b4a2f5f3b241a502106911bebe57fc8af36"
	I0916 11:18:25.024798   42145 cri.go:89] found id: "87a99d0015cbca7c7d38632c9cf9a1a9a00ce88cbe573c612f87fc9ca64ae309"
	I0916 11:18:25.024803   42145 cri.go:89] found id: "2d81e17eebccf4b84f735e1cafbb3e77e8e76abd8f4d62edf876aa5d55fd0b0d"
	I0916 11:18:25.024807   42145 cri.go:89] found id: "2e7284c90c8c712246e8ddc247d88f7825f37f3e3c689acda276e3052c41850d"
	I0916 11:18:25.024811   42145 cri.go:89] found id: "ae1251600e6e87614f76ce47d26bb537d329e9c3726f16afd53b5917cabf6526"
	I0916 11:18:25.024815   42145 cri.go:89] found id: "8fa850b5495ffd4ec6303d58bd055944e4fd2d6bd860b6b70273e719497bdf2d"
	I0916 11:18:25.024833   42145 cri.go:89] found id: "126fd7058d64d004c8ecb83905bc9e04f7dcd51ba92c467d2ec0175a0aedb8d4"
	I0916 11:18:25.024841   42145 cri.go:89] found id: "840a587a0926e82c37a05beb15b8a2038534eaa736ce1cfed1c0eecaa13a5cdd"
	I0916 11:18:25.024848   42145 cri.go:89] found id: "02223ab1824986d0b8986be277df48e11a9e023715892ea637bf405f10af7198"
	I0916 11:18:25.024852   42145 cri.go:89] found id: "7a89ff755837adc838b2681df4fdb3f0cd2733af338156c540403f460fc59bc0"
	I0916 11:18:25.024858   42145 cri.go:89] found id: "f8c55edbe2173ff25f684104100b9d6d5b8d80563dadb06b4c24bb06b2bf68ee"
	I0916 11:18:25.024865   42145 cri.go:89] found id: "b76d5d4ad419a79a8aa910e3cea3029c12e6a90c7750317c5fbe1c40b43aa762"
	I0916 11:18:25.024870   42145 cri.go:89] found id: "769a75ad1934a20389b16c528015b4fd90be4bf92209ba2509e2cdf2e57e2a24"
	I0916 11:18:25.024879   42145 cri.go:89] found id: "d53f9aec7bc35845a9064135b7cf6154a190d2b15edab056cfd8296575fb25ba"
	I0916 11:18:25.024886   42145 cri.go:89] found id: "ed73e9089f633c6b3f26009f325ee6cfdd62ef60de41cbb639ec4aa8b02b84a7"
	I0916 11:18:25.024890   42145 cri.go:89] found id: ""
	I0916 11:18:25.024942   42145 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Sep 16 11:19:42 multinode-736061 crio[5298]: time="2024-09-16 11:19:42.806807153Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:5be94e60f7930bba0fd6d9e886685711dc9004921cf6308d416d07bb17f5360f,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-g9fqk,Uid:0dd08783-fcfd-441f-8bda-c82c0c15173e,Namespace:default,Attempt:2,},State:SANDBOX_READY,CreatedAt:1726485549859503753,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-g9fqk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0dd08783-fcfd-441f-8bda-c82c0c15173e,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-16T11:18:35.741996065Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2f3ceb09ac644901b4d083848ced222f6dfa4d27b77a81125d64b8ed4c116516,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-nlhl2,Uid:6ea84b9d-f364-4e26-8dc8-44c3b4d92417,Namespace:kube-system,Attempt:2,}
,State:SANDBOX_READY,CreatedAt:1726485516156480605,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-nlhl2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ea84b9d-f364-4e26-8dc8-44c3b4d92417,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-16T11:18:35.741997261Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ff590bb2a3b37271940cc4c8793c46b44350f3a44777bc2ab2c405101f288e20,Metadata:&PodSandboxMetadata{Name:kindnet-qb4tq,Uid:933f0749-7868-4e96-9b8e-67005545bbc5,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1726485516149186023,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-qb4tq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 933f0749-7868-4e96-9b8e-67005545bbc5,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map
[string]string{kubernetes.io/config.seen: 2024-09-16T11:18:35.741988170Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8d416e868d009b10a1f35b7b9cb66072d56bb14958d646614040df59e5aa543d,Metadata:&PodSandboxMetadata{Name:kube-proxy-ftj9p,Uid:fa72720f-1c4a-46a2-a733-f411ccb6f628,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1726485516103803583,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-ftj9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa72720f-1c4a-46a2-a733-f411ccb6f628,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-16T11:18:35.741991637Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:35d11f6c17072ff7fe22bed40def8d205ff79030fb9bb3145e238f13d2e6df59,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:5e515ac2-bcf5-4a0f-a3af-b4ee4e03a534,Namespace:kube-system,Attempt:2,},State
:SANDBOX_READY,CreatedAt:1726485516072088494,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e515ac2-bcf5-4a0f-a3af-b4ee4e03a534,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp
\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-16T11:18:35.741994591Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b2a74e7303bd244240fed1279559a418c10a6e3fde681f551cf25f492765e171,Metadata:&PodSandboxMetadata{Name:etcd-multinode-736061,Uid:69d3e8c6e76d0bc1af3482326f7904d1,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1726485510671164688,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69d3e8c6e76d0bc1af3482326f7904d1,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.32:2379,kubernetes.io/config.hash: 69d3e8c6e76d0bc1af3482326f7904d1,kubernetes.io/config.seen: 2024-09-16T11:18:26.729247029Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b691d918fe31853bb64f6695e23d6b467b57df459043f24bb0eedb8862dd8e85,Metadat
a:&PodSandboxMetadata{Name:kube-apiserver-multinode-736061,Uid:efede0e1597c8cbe70740f3169f7ec4a,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1726485510664794828,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efede0e1597c8cbe70740f3169f7ec4a,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.32:8443,kubernetes.io/config.hash: efede0e1597c8cbe70740f3169f7ec4a,kubernetes.io/config.seen: 2024-09-16T11:18:26.729248316Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:707473be2530c77bf5ebbe0e2bf46f1e5dfee8a0db6273c7570992fff99da933,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-736061,Uid:94d3338940ee73a61a5075650d027904,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1726485507241952636,Labels:map[string]stri
ng{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94d3338940ee73a61a5075650d027904,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 94d3338940ee73a61a5075650d027904,kubernetes.io/config.seen: 2024-09-16T11:18:26.729249247Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:17bb7fb201be02d4c41ea98b21d6d7b68118aa4644cf5bec6edce1796990a6b7,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-736061,Uid:de66983060c1e167c6b9498eb8b0a025,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1726485507234981023,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de66983060c1e167c6b9498eb8b0a025,tier: control-plane,},Annotations:map[string]string{kubernete
s.io/config.hash: de66983060c1e167c6b9498eb8b0a025,kubernetes.io/config.seen: 2024-09-16T11:18:26.729242864Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=f6990a04-01b3-463c-93ba-806a6dc3ba84 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 16 11:19:42 multinode-736061 crio[5298]: time="2024-09-16 11:19:42.807663281Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6ea45b36-337c-4c42-99da-08b3b3395887 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:19:42 multinode-736061 crio[5298]: time="2024-09-16 11:19:42.807718560Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6ea45b36-337c-4c42-99da-08b3b3395887 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:19:42 multinode-736061 crio[5298]: time="2024-09-16 11:19:42.807917679Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2cdf12e67321cce0451b53b340d9c71d2b5c8b8f62f5e285f5aa34f465a44c99,PodSandboxId:5be94e60f7930bba0fd6d9e886685711dc9004921cf6308d416d07bb17f5360f,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726485550024236162,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-g9fqk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0dd08783-fcfd-441f-8bda-c82c0c15173e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2931c88ee2def7e56d74369a17f2838ae42c00c4618b88050c1774e1a506ffa8,PodSandboxId:8d416e868d009b10a1f35b7b9cb66072d56bb14958d646614040df59e5aa543d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726485529833421944,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ftj9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa72720f-1c4a-46a2-a733-f411ccb6f628,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f0565afcfc6439b82b4dbc53977f8b65ede76a55460306df544cecd2af14280,PodSandboxId:35d11f6c17072ff7fe22bed40def8d205ff79030fb9bb3145e238f13d2e6df59,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726485529828450802,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e515ac2-bcf5-4a0f-a3af-b4ee4e03a534,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc18fbbe7c937039880846a8bd4027af0b410236af758bf11b2c74b973cfdbdc,PodSandboxId:ff590bb2a3b37271940cc4c8793c46b44350f3a44777bc2ab2c405101f288e20,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726485528822803307,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qb4tq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 933f0749-7868-4e96-9b8e-67005545bbc5,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fi
le,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45a53712ea1e790c73d4130c05b081e7e0de7f1cb16132b9059bc50b54dd3a12,PodSandboxId:2f3ceb09ac644901b4d083848ced222f6dfa4d27b77a81125d64b8ed4c116516,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726485519743218822,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nlhl2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ea84b9d-f364-4e26-8dc8-44c3b4d92417,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerP
ort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4dde072277c681e084753b1ab9f94d7cff5bd259160ee994d4cf80f471c0bcc,PodSandboxId:b691d918fe31853bb64f6695e23d6b467b57df459043f24bb0eedb8862dd8e85,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726485510806649428,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efede0e1597c8cbe70740f3169f7ec4a,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aafbbad78159493e0125071f315b0c74e4ccc767357fd1ef2d53a04310e7d806,PodSandboxId:b2a74e7303bd244240fed1279559a418c10a6e3fde681f551cf25f492765e171,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726485510774160434,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69d3e8c6e76d0bc1af3482326f7904d1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.
restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48a25b186adfebd82641fe66adcb92d0d96405961027d13834e75c2b076d4485,PodSandboxId:17bb7fb201be02d4c41ea98b21d6d7b68118aa4644cf5bec6edce1796990a6b7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726485507388561801,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de66983060c1e167c6b9498eb8b0a025,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount:
2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:998aff3229d3d76fd844ddff79d4a467ad878de2d447e097ca762b88900ca1e9,PodSandboxId:707473be2530c77bf5ebbe0e2bf46f1e5dfee8a0db6273c7570992fff99da933,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726485507339804984,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94d3338940ee73a61a5075650d027904,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.r
estartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6ea45b36-337c-4c42-99da-08b3b3395887 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:19:42 multinode-736061 crio[5298]: time="2024-09-16 11:19:42.846225484Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=02ba2235-73d9-4dd8-b188-b7195ea83af5 name=/runtime.v1.RuntimeService/Version
	Sep 16 11:19:42 multinode-736061 crio[5298]: time="2024-09-16 11:19:42.846363065Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=02ba2235-73d9-4dd8-b188-b7195ea83af5 name=/runtime.v1.RuntimeService/Version
	Sep 16 11:19:42 multinode-736061 crio[5298]: time="2024-09-16 11:19:42.847809414Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5c5eb5a9-334b-4d50-a7a0-4017391ce031 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 11:19:42 multinode-736061 crio[5298]: time="2024-09-16 11:19:42.848183207Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485582848159621,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5c5eb5a9-334b-4d50-a7a0-4017391ce031 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 11:19:42 multinode-736061 crio[5298]: time="2024-09-16 11:19:42.848738469Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=31be59f6-71f3-4bcd-a53a-1912142494a6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:19:42 multinode-736061 crio[5298]: time="2024-09-16 11:19:42.848791080Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=31be59f6-71f3-4bcd-a53a-1912142494a6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:19:42 multinode-736061 crio[5298]: time="2024-09-16 11:19:42.849152758Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2cdf12e67321cce0451b53b340d9c71d2b5c8b8f62f5e285f5aa34f465a44c99,PodSandboxId:5be94e60f7930bba0fd6d9e886685711dc9004921cf6308d416d07bb17f5360f,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726485550024236162,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-g9fqk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0dd08783-fcfd-441f-8bda-c82c0c15173e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2931c88ee2def7e56d74369a17f2838ae42c00c4618b88050c1774e1a506ffa8,PodSandboxId:8d416e868d009b10a1f35b7b9cb66072d56bb14958d646614040df59e5aa543d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726485529833421944,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ftj9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa72720f-1c4a-46a2-a733-f411ccb6f628,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f0565afcfc6439b82b4dbc53977f8b65ede76a55460306df544cecd2af14280,PodSandboxId:35d11f6c17072ff7fe22bed40def8d205ff79030fb9bb3145e238f13d2e6df59,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726485529828450802,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e515ac2-bcf5-4a0f-a3af-b4ee4e03a534,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc18fbbe7c937039880846a8bd4027af0b410236af758bf11b2c74b973cfdbdc,PodSandboxId:ff590bb2a3b37271940cc4c8793c46b44350f3a44777bc2ab2c405101f288e20,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726485528822803307,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qb4tq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 933f0749-7868-4e96-9b8e-67005545bbc5,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fi
le,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45a53712ea1e790c73d4130c05b081e7e0de7f1cb16132b9059bc50b54dd3a12,PodSandboxId:2f3ceb09ac644901b4d083848ced222f6dfa4d27b77a81125d64b8ed4c116516,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726485519743218822,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nlhl2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ea84b9d-f364-4e26-8dc8-44c3b4d92417,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerP
ort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4dde072277c681e084753b1ab9f94d7cff5bd259160ee994d4cf80f471c0bcc,PodSandboxId:b691d918fe31853bb64f6695e23d6b467b57df459043f24bb0eedb8862dd8e85,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726485510806649428,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efede0e1597c8cbe70740f3169f7ec4a,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aafbbad78159493e0125071f315b0c74e4ccc767357fd1ef2d53a04310e7d806,PodSandboxId:b2a74e7303bd244240fed1279559a418c10a6e3fde681f551cf25f492765e171,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726485510774160434,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69d3e8c6e76d0bc1af3482326f7904d1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.
restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48a25b186adfebd82641fe66adcb92d0d96405961027d13834e75c2b076d4485,PodSandboxId:17bb7fb201be02d4c41ea98b21d6d7b68118aa4644cf5bec6edce1796990a6b7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726485507388561801,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de66983060c1e167c6b9498eb8b0a025,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount:
2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:998aff3229d3d76fd844ddff79d4a467ad878de2d447e097ca762b88900ca1e9,PodSandboxId:707473be2530c77bf5ebbe0e2bf46f1e5dfee8a0db6273c7570992fff99da933,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726485507339804984,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94d3338940ee73a61a5075650d027904,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.r
estartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:522d3b85a45483f18f7f58bd8b3e8d13be8e39fc300735f74f42f329fc76b41c,PodSandboxId:c27596adc976965a1472bc763ada489b272e6b61d2f34ac0643f3897280bfd19,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726485188158464126,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-g9fqk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0dd08783-fcfd-441f-8bda-c82c0c15173e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34160c655e5ab02a4a54bd049c7fbf32b5f1f714b64212932d9c2c16e2d4bf25,PodSandboxId:d6609b6804e2135dcae87f33c158bf648f918c21cfda5ee80259df9d474184ae,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726485154742813492,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qb4tq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 933f0749-7868-4e96-9b8e-67005545bbc5,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/term
ination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35a7839cd57d08487c669b6c86350b4a2f5f3b241a502106911bebe57fc8af36,PodSandboxId:78066c652dd8faaf3adaf02ffc316c8c72ab996c03a8aa8fe4284f1c3783c373,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726485154656632038,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nlhl2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ea84b9d-f364-4e26-8dc8-44c3b4d92417,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPor
t\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87a99d0015cbca7c7d38632c9cf9a1a9a00ce88cbe573c612f87fc9ca64ae309,PodSandboxId:b06a4343bbdd3a9bb94de977a2f0df301ae567da38dde74613b8372b9badaa2e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726485154505982676,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e515ac2-bcf5-4a0f-a3af-
b4ee4e03a534,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d81e17eebccf4b84f735e1cafbb3e77e8e76abd8f4d62edf876aa5d55fd0b0d,PodSandboxId:fcfacdd69a46c5dab45cd0a3aa1fa9de4c957a839c412471e98012ee586cb38c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726485154436691263,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ftj9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa72720f-1c4a-46a2-a733-f411ccb6f628,},Annotations:map[str
ing]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e7284c90c8c712246e8ddc247d88f7825f37f3e3c689acda276e3052c41850d,PodSandboxId:d9afb21537018b32c40d57ef0e3594c924f50d2bf3259377c7789677aadd1cc2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726485150640626557,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de66983060c1e167c6b9498eb8b0a025,},Annotations:map[string]string{io.k
ubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae1251600e6e87614f76ce47d26bb537d329e9c3726f16afd53b5917cabf6526,PodSandboxId:cd4168d0828d2e6cf60f35609bcafa8f36efd5728470fe9b5a5927f930be6967,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726485150609007970,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69d3e8c6e76d0bc1af3482326f7904d1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fa850b5495ffd4ec6303d58bd055944e4fd2d6bd860b6b70273e719497bdf2d,PodSandboxId:f4286a53710f269af6ca4f92b1992be516f9dedc66f945f30e7b0b95cdb2b85e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726485150554539861,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efede0e1597c8cbe70740f3169f7ec4a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:126fd7058d64d004c8ecb83905bc9e04f7dcd51ba92c467d2ec0175a0aedb8d4,PodSandboxId:113acd43d732ed38df48b77f813934192a2a1cfab0f438e8905c93feade283e7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726485150539111389,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94d3338940ee73a61a5075650d027904,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes
.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=31be59f6-71f3-4bcd-a53a-1912142494a6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:19:42 multinode-736061 crio[5298]: time="2024-09-16 11:19:42.893421532Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5abb9766-15b6-4219-a03e-212db74354e0 name=/runtime.v1.RuntimeService/Version
	Sep 16 11:19:42 multinode-736061 crio[5298]: time="2024-09-16 11:19:42.893495216Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5abb9766-15b6-4219-a03e-212db74354e0 name=/runtime.v1.RuntimeService/Version
	Sep 16 11:19:42 multinode-736061 crio[5298]: time="2024-09-16 11:19:42.894819684Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=329fb729-2ad1-45b4-a8a9-cc4f48d6c138 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 11:19:42 multinode-736061 crio[5298]: time="2024-09-16 11:19:42.895358909Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485582895265101,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=329fb729-2ad1-45b4-a8a9-cc4f48d6c138 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 11:19:42 multinode-736061 crio[5298]: time="2024-09-16 11:19:42.896020049Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=efdc5a14-fe9f-4649-9b44-8545661a2f8e name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:19:42 multinode-736061 crio[5298]: time="2024-09-16 11:19:42.896075565Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=efdc5a14-fe9f-4649-9b44-8545661a2f8e name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:19:42 multinode-736061 crio[5298]: time="2024-09-16 11:19:42.896609413Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2cdf12e67321cce0451b53b340d9c71d2b5c8b8f62f5e285f5aa34f465a44c99,PodSandboxId:5be94e60f7930bba0fd6d9e886685711dc9004921cf6308d416d07bb17f5360f,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726485550024236162,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-g9fqk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0dd08783-fcfd-441f-8bda-c82c0c15173e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2931c88ee2def7e56d74369a17f2838ae42c00c4618b88050c1774e1a506ffa8,PodSandboxId:8d416e868d009b10a1f35b7b9cb66072d56bb14958d646614040df59e5aa543d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726485529833421944,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ftj9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa72720f-1c4a-46a2-a733-f411ccb6f628,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f0565afcfc6439b82b4dbc53977f8b65ede76a55460306df544cecd2af14280,PodSandboxId:35d11f6c17072ff7fe22bed40def8d205ff79030fb9bb3145e238f13d2e6df59,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726485529828450802,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e515ac2-bcf5-4a0f-a3af-b4ee4e03a534,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc18fbbe7c937039880846a8bd4027af0b410236af758bf11b2c74b973cfdbdc,PodSandboxId:ff590bb2a3b37271940cc4c8793c46b44350f3a44777bc2ab2c405101f288e20,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726485528822803307,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qb4tq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 933f0749-7868-4e96-9b8e-67005545bbc5,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fi
le,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45a53712ea1e790c73d4130c05b081e7e0de7f1cb16132b9059bc50b54dd3a12,PodSandboxId:2f3ceb09ac644901b4d083848ced222f6dfa4d27b77a81125d64b8ed4c116516,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726485519743218822,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nlhl2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ea84b9d-f364-4e26-8dc8-44c3b4d92417,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerP
ort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4dde072277c681e084753b1ab9f94d7cff5bd259160ee994d4cf80f471c0bcc,PodSandboxId:b691d918fe31853bb64f6695e23d6b467b57df459043f24bb0eedb8862dd8e85,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726485510806649428,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efede0e1597c8cbe70740f3169f7ec4a,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aafbbad78159493e0125071f315b0c74e4ccc767357fd1ef2d53a04310e7d806,PodSandboxId:b2a74e7303bd244240fed1279559a418c10a6e3fde681f551cf25f492765e171,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726485510774160434,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69d3e8c6e76d0bc1af3482326f7904d1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.
restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48a25b186adfebd82641fe66adcb92d0d96405961027d13834e75c2b076d4485,PodSandboxId:17bb7fb201be02d4c41ea98b21d6d7b68118aa4644cf5bec6edce1796990a6b7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726485507388561801,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de66983060c1e167c6b9498eb8b0a025,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount:
2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:998aff3229d3d76fd844ddff79d4a467ad878de2d447e097ca762b88900ca1e9,PodSandboxId:707473be2530c77bf5ebbe0e2bf46f1e5dfee8a0db6273c7570992fff99da933,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726485507339804984,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94d3338940ee73a61a5075650d027904,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.r
estartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:522d3b85a45483f18f7f58bd8b3e8d13be8e39fc300735f74f42f329fc76b41c,PodSandboxId:c27596adc976965a1472bc763ada489b272e6b61d2f34ac0643f3897280bfd19,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726485188158464126,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-g9fqk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0dd08783-fcfd-441f-8bda-c82c0c15173e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34160c655e5ab02a4a54bd049c7fbf32b5f1f714b64212932d9c2c16e2d4bf25,PodSandboxId:d6609b6804e2135dcae87f33c158bf648f918c21cfda5ee80259df9d474184ae,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726485154742813492,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qb4tq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 933f0749-7868-4e96-9b8e-67005545bbc5,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/term
ination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35a7839cd57d08487c669b6c86350b4a2f5f3b241a502106911bebe57fc8af36,PodSandboxId:78066c652dd8faaf3adaf02ffc316c8c72ab996c03a8aa8fe4284f1c3783c373,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726485154656632038,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nlhl2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ea84b9d-f364-4e26-8dc8-44c3b4d92417,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPor
t\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87a99d0015cbca7c7d38632c9cf9a1a9a00ce88cbe573c612f87fc9ca64ae309,PodSandboxId:b06a4343bbdd3a9bb94de977a2f0df301ae567da38dde74613b8372b9badaa2e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726485154505982676,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e515ac2-bcf5-4a0f-a3af-
b4ee4e03a534,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d81e17eebccf4b84f735e1cafbb3e77e8e76abd8f4d62edf876aa5d55fd0b0d,PodSandboxId:fcfacdd69a46c5dab45cd0a3aa1fa9de4c957a839c412471e98012ee586cb38c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726485154436691263,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ftj9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa72720f-1c4a-46a2-a733-f411ccb6f628,},Annotations:map[str
ing]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e7284c90c8c712246e8ddc247d88f7825f37f3e3c689acda276e3052c41850d,PodSandboxId:d9afb21537018b32c40d57ef0e3594c924f50d2bf3259377c7789677aadd1cc2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726485150640626557,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de66983060c1e167c6b9498eb8b0a025,},Annotations:map[string]string{io.k
ubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae1251600e6e87614f76ce47d26bb537d329e9c3726f16afd53b5917cabf6526,PodSandboxId:cd4168d0828d2e6cf60f35609bcafa8f36efd5728470fe9b5a5927f930be6967,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726485150609007970,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69d3e8c6e76d0bc1af3482326f7904d1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fa850b5495ffd4ec6303d58bd055944e4fd2d6bd860b6b70273e719497bdf2d,PodSandboxId:f4286a53710f269af6ca4f92b1992be516f9dedc66f945f30e7b0b95cdb2b85e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726485150554539861,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efede0e1597c8cbe70740f3169f7ec4a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:126fd7058d64d004c8ecb83905bc9e04f7dcd51ba92c467d2ec0175a0aedb8d4,PodSandboxId:113acd43d732ed38df48b77f813934192a2a1cfab0f438e8905c93feade283e7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726485150539111389,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94d3338940ee73a61a5075650d027904,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes
.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=efdc5a14-fe9f-4649-9b44-8545661a2f8e name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:19:42 multinode-736061 crio[5298]: time="2024-09-16 11:19:42.938605907Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=71a39d90-fda6-4488-ad3a-4b7fce2b312f name=/runtime.v1.RuntimeService/Version
	Sep 16 11:19:42 multinode-736061 crio[5298]: time="2024-09-16 11:19:42.938694835Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=71a39d90-fda6-4488-ad3a-4b7fce2b312f name=/runtime.v1.RuntimeService/Version
	Sep 16 11:19:42 multinode-736061 crio[5298]: time="2024-09-16 11:19:42.939664274Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=51f4720a-044d-4cb0-8220-c861ac7dffa1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 11:19:42 multinode-736061 crio[5298]: time="2024-09-16 11:19:42.940065894Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485582940042621,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=51f4720a-044d-4cb0-8220-c861ac7dffa1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 11:19:42 multinode-736061 crio[5298]: time="2024-09-16 11:19:42.940752151Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5d07d1e6-323a-4009-bde6-3c9337a663df name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:19:42 multinode-736061 crio[5298]: time="2024-09-16 11:19:42.940822341Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5d07d1e6-323a-4009-bde6-3c9337a663df name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:19:42 multinode-736061 crio[5298]: time="2024-09-16 11:19:42.941158984Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2cdf12e67321cce0451b53b340d9c71d2b5c8b8f62f5e285f5aa34f465a44c99,PodSandboxId:5be94e60f7930bba0fd6d9e886685711dc9004921cf6308d416d07bb17f5360f,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726485550024236162,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-g9fqk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0dd08783-fcfd-441f-8bda-c82c0c15173e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2931c88ee2def7e56d74369a17f2838ae42c00c4618b88050c1774e1a506ffa8,PodSandboxId:8d416e868d009b10a1f35b7b9cb66072d56bb14958d646614040df59e5aa543d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726485529833421944,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ftj9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa72720f-1c4a-46a2-a733-f411ccb6f628,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f0565afcfc6439b82b4dbc53977f8b65ede76a55460306df544cecd2af14280,PodSandboxId:35d11f6c17072ff7fe22bed40def8d205ff79030fb9bb3145e238f13d2e6df59,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726485529828450802,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e515ac2-bcf5-4a0f-a3af-b4ee4e03a534,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc18fbbe7c937039880846a8bd4027af0b410236af758bf11b2c74b973cfdbdc,PodSandboxId:ff590bb2a3b37271940cc4c8793c46b44350f3a44777bc2ab2c405101f288e20,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726485528822803307,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qb4tq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 933f0749-7868-4e96-9b8e-67005545bbc5,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fi
le,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45a53712ea1e790c73d4130c05b081e7e0de7f1cb16132b9059bc50b54dd3a12,PodSandboxId:2f3ceb09ac644901b4d083848ced222f6dfa4d27b77a81125d64b8ed4c116516,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726485519743218822,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nlhl2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ea84b9d-f364-4e26-8dc8-44c3b4d92417,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerP
ort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4dde072277c681e084753b1ab9f94d7cff5bd259160ee994d4cf80f471c0bcc,PodSandboxId:b691d918fe31853bb64f6695e23d6b467b57df459043f24bb0eedb8862dd8e85,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726485510806649428,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efede0e1597c8cbe70740f3169f7ec4a,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aafbbad78159493e0125071f315b0c74e4ccc767357fd1ef2d53a04310e7d806,PodSandboxId:b2a74e7303bd244240fed1279559a418c10a6e3fde681f551cf25f492765e171,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726485510774160434,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69d3e8c6e76d0bc1af3482326f7904d1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.
restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48a25b186adfebd82641fe66adcb92d0d96405961027d13834e75c2b076d4485,PodSandboxId:17bb7fb201be02d4c41ea98b21d6d7b68118aa4644cf5bec6edce1796990a6b7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726485507388561801,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de66983060c1e167c6b9498eb8b0a025,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount:
2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:998aff3229d3d76fd844ddff79d4a467ad878de2d447e097ca762b88900ca1e9,PodSandboxId:707473be2530c77bf5ebbe0e2bf46f1e5dfee8a0db6273c7570992fff99da933,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726485507339804984,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94d3338940ee73a61a5075650d027904,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.r
estartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:522d3b85a45483f18f7f58bd8b3e8d13be8e39fc300735f74f42f329fc76b41c,PodSandboxId:c27596adc976965a1472bc763ada489b272e6b61d2f34ac0643f3897280bfd19,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726485188158464126,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-g9fqk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0dd08783-fcfd-441f-8bda-c82c0c15173e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34160c655e5ab02a4a54bd049c7fbf32b5f1f714b64212932d9c2c16e2d4bf25,PodSandboxId:d6609b6804e2135dcae87f33c158bf648f918c21cfda5ee80259df9d474184ae,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726485154742813492,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qb4tq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 933f0749-7868-4e96-9b8e-67005545bbc5,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/term
ination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35a7839cd57d08487c669b6c86350b4a2f5f3b241a502106911bebe57fc8af36,PodSandboxId:78066c652dd8faaf3adaf02ffc316c8c72ab996c03a8aa8fe4284f1c3783c373,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726485154656632038,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nlhl2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ea84b9d-f364-4e26-8dc8-44c3b4d92417,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPor
t\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87a99d0015cbca7c7d38632c9cf9a1a9a00ce88cbe573c612f87fc9ca64ae309,PodSandboxId:b06a4343bbdd3a9bb94de977a2f0df301ae567da38dde74613b8372b9badaa2e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726485154505982676,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e515ac2-bcf5-4a0f-a3af-
b4ee4e03a534,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d81e17eebccf4b84f735e1cafbb3e77e8e76abd8f4d62edf876aa5d55fd0b0d,PodSandboxId:fcfacdd69a46c5dab45cd0a3aa1fa9de4c957a839c412471e98012ee586cb38c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726485154436691263,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ftj9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa72720f-1c4a-46a2-a733-f411ccb6f628,},Annotations:map[str
ing]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e7284c90c8c712246e8ddc247d88f7825f37f3e3c689acda276e3052c41850d,PodSandboxId:d9afb21537018b32c40d57ef0e3594c924f50d2bf3259377c7789677aadd1cc2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726485150640626557,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de66983060c1e167c6b9498eb8b0a025,},Annotations:map[string]string{io.k
ubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae1251600e6e87614f76ce47d26bb537d329e9c3726f16afd53b5917cabf6526,PodSandboxId:cd4168d0828d2e6cf60f35609bcafa8f36efd5728470fe9b5a5927f930be6967,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726485150609007970,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69d3e8c6e76d0bc1af3482326f7904d1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fa850b5495ffd4ec6303d58bd055944e4fd2d6bd860b6b70273e719497bdf2d,PodSandboxId:f4286a53710f269af6ca4f92b1992be516f9dedc66f945f30e7b0b95cdb2b85e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726485150554539861,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efede0e1597c8cbe70740f3169f7ec4a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:126fd7058d64d004c8ecb83905bc9e04f7dcd51ba92c467d2ec0175a0aedb8d4,PodSandboxId:113acd43d732ed38df48b77f813934192a2a1cfab0f438e8905c93feade283e7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726485150539111389,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-736061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94d3338940ee73a61a5075650d027904,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes
.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5d07d1e6-323a-4009-bde6-3c9337a663df name=/runtime.v1.RuntimeService/ListContainers
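
	(The `crio[5298]` lines above are CRI-O's debug-level gRPC interceptor traces captured from the node's systemd journal. A minimal sketch for pulling the same journal slice, assuming the multinode-736061 profile is still running:)

	    # Tail the CRI-O unit journal on the primary node of this profile
	    minikube ssh -p multinode-736061 -- sudo journalctl -u crio --no-pager -n 200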
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	2cdf12e67321c       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a   33 seconds ago       Running             busybox                   2                   5be94e60f7930       busybox-7dff88458-g9fqk
	2931c88ee2def       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   53 seconds ago       Running             kube-proxy                2                   8d416e868d009       kube-proxy-ftj9p
	5f0565afcfc64       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   53 seconds ago       Running             storage-provisioner       2                   35d11f6c17072       storage-provisioner
	dc18fbbe7c937       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f   54 seconds ago       Running             kindnet-cni               2                   ff590bb2a3b37       kindnet-qb4tq
	45a53712ea1e7       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   About a minute ago   Running             coredns                   2                   2f3ceb09ac644       coredns-7c65d6cfc9-nlhl2
	a4dde072277c6       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   About a minute ago   Running             kube-apiserver            2                   b691d918fe318       kube-apiserver-multinode-736061
	aafbbad781594       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   About a minute ago   Running             etcd                      2                   b2a74e7303bd2       etcd-multinode-736061
	48a25b186adfe       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   About a minute ago   Running             kube-scheduler            2                   17bb7fb201be0       kube-scheduler-multinode-736061
	998aff3229d3d       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   About a minute ago   Running             kube-controller-manager   2                   707473be2530c       kube-controller-manager-multinode-736061
	522d3b85a4548       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a   6 minutes ago        Exited              busybox                   1                   c27596adc9769       busybox-7dff88458-g9fqk
	34160c655e5ab       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f   7 minutes ago        Exited              kindnet-cni               1                   d6609b6804e21       kindnet-qb4tq
	35a7839cd57d0       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   7 minutes ago        Exited              coredns                   1                   78066c652dd8f       coredns-7c65d6cfc9-nlhl2
	87a99d0015cbc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   7 minutes ago        Exited              storage-provisioner       1                   b06a4343bbdd3       storage-provisioner
	2d81e17eebccf       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   7 minutes ago        Exited              kube-proxy                1                   fcfacdd69a46c       kube-proxy-ftj9p
	2e7284c90c8c7       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   7 minutes ago        Exited              kube-scheduler            1                   d9afb21537018       kube-scheduler-multinode-736061
	ae1251600e6e8       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   7 minutes ago        Exited              etcd                      1                   cd4168d0828d2       etcd-multinode-736061
	8fa850b5495ff       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   7 minutes ago        Exited              kube-apiserver            1                   f4286a53710f2       kube-apiserver-multinode-736061
	126fd7058d64d       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   7 minutes ago        Exited              kube-controller-manager   1                   113acd43d732e       kube-controller-manager-multinode-736061
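
	(The table above corresponds to crictl's view of all containers, including exited attempts. A minimal sketch to reproduce it on the node, assuming the same profile:)

	    # List running and exited containers via the CRI-O CLI
	    minikube ssh -p multinode-736061 -- sudo crictl ps -a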
	
	
	==> coredns [35a7839cd57d08487c669b6c86350b4a2f5f3b241a502106911bebe57fc8af36] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:40656 - 6477 "HINFO IN 2586289926805624417.1154026984614338138. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.028767921s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [45a53712ea1e790c73d4130c05b081e7e0de7f1cb16132b9059bc50b54dd3a12] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:59256 - 58593 "HINFO IN 3422322808340341011.3529124524906442024. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.029307577s
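
	(Both coredns instances above only log their HINFO self-check. A quick in-cluster DNS sanity check can be run from the existing busybox pod, assuming its image ships the nslookup applet and kubectl is pointed at this cluster:)

	    # Resolve the in-cluster API service through coredns
	    kubectl exec busybox-7dff88458-g9fqk -- nslookup kubernetes.default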
	
	
	==> describe nodes <==
	Name:               multinode-736061
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-736061
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=multinode-736061
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T11_05_54_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 11:05:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-736061
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 11:19:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 11:18:34 +0000   Mon, 16 Sep 2024 11:05:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 11:18:34 +0000   Mon, 16 Sep 2024 11:05:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 11:18:34 +0000   Mon, 16 Sep 2024 11:05:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 11:18:34 +0000   Mon, 16 Sep 2024 11:06:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.32
	  Hostname:    multinode-736061
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 60fe80618d4f42e281d4c50393e9d89e
	  System UUID:                60fe8061-8d4f-42e2-81d4-c50393e9d89e
	  Boot ID:                    d046d280-229f-4e9a-8a6c-1986374da911
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-g9fqk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-7c65d6cfc9-nlhl2                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-multinode-736061                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-qb4tq                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-multinode-736061             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-multinode-736061    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-ftj9p                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-multinode-736061             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 13m                    kube-proxy       
	  Normal  Starting                 53s                    kube-proxy       
	  Normal  Starting                 7m8s                   kube-proxy       
	  Normal  NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)      kubelet          Node multinode-736061 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)      kubelet          Node multinode-736061 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)      kubelet          Node multinode-736061 status is now: NodeHasSufficientMemory
	  Normal  Starting                 13m                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m                    kubelet          Node multinode-736061 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                    kubelet          Node multinode-736061 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m                    kubelet          Node multinode-736061 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m                    node-controller  Node multinode-736061 event: Registered Node multinode-736061 in Controller
	  Normal  NodeReady                13m                    kubelet          Node multinode-736061 status is now: NodeReady
	  Normal  Starting                 7m14s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m13s (x8 over 7m14s)  kubelet          Node multinode-736061 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m13s (x8 over 7m14s)  kubelet          Node multinode-736061 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m13s (x7 over 7m14s)  kubelet          Node multinode-736061 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m7s                   node-controller  Node multinode-736061 event: Registered Node multinode-736061 in Controller
	  Normal  Starting                 77s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  77s (x8 over 77s)      kubelet          Node multinode-736061 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    77s (x8 over 77s)      kubelet          Node multinode-736061 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     77s (x7 over 77s)      kubelet          Node multinode-736061 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  77s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           67s                    node-controller  Node multinode-736061 event: Registered Node multinode-736061 in Controller
	
	
	Name:               multinode-736061-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-736061-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=multinode-736061
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T11_19_23_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 11:19:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-736061-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 11:19:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 11:19:40 +0000   Mon, 16 Sep 2024 11:19:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 11:19:40 +0000   Mon, 16 Sep 2024 11:19:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 11:19:40 +0000   Mon, 16 Sep 2024 11:19:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 11:19:40 +0000   Mon, 16 Sep 2024 11:19:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.215
	  Hostname:    multinode-736061-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d4fe337504134150bccd557919449b29
	  System UUID:                d4fe3375-0413-4150-bccd-557919449b29
	  Boot ID:                    32bf2274-592b-42b2-a616-75770e2038e1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-plr2p    0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kindnet-xlrxb              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-8h6jp           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m28s                  kube-proxy       
	  Normal  Starting                 12m                    kube-proxy       
	  Normal  Starting                 17s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  13m (x2 over 13m)      kubelet          Node multinode-736061-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x2 over 13m)      kubelet          Node multinode-736061-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x2 over 13m)      kubelet          Node multinode-736061-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                12m                    kubelet          Node multinode-736061-m02 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    6m33s (x2 over 6m33s)  kubelet          Node multinode-736061-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m33s (x2 over 6m33s)  kubelet          Node multinode-736061-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m33s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m33s (x2 over 6m33s)  kubelet          Node multinode-736061-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                6m15s                  kubelet          Node multinode-736061-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  21s (x2 over 21s)      kubelet          Node multinode-736061-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x2 over 21s)      kubelet          Node multinode-736061-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x2 over 21s)      kubelet          Node multinode-736061-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           17s                    node-controller  Node multinode-736061-m02 event: Registered Node multinode-736061-m02 in Controller
	  Normal  NodeReady                3s                     kubelet          Node multinode-736061-m02 status is now: NodeReady
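
	(The two node descriptions above are the kubelet-reported node objects for the control-plane and the m02 worker. A minimal sketch to regenerate them, with kubectl pointed at this cluster:)

	    # Re-dump the node conditions, capacity, and event history shown above
	    kubectl describe node multinode-736061 multinode-736061-m02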
	
	
	==> dmesg <==
	[Sep16 11:07] kauditd_printk_skb: 12 callbacks suppressed
	[Sep16 11:12] systemd-fstab-generator[2913]: Ignoring "noauto" option for root device
	[  +0.148062] systemd-fstab-generator[2925]: Ignoring "noauto" option for root device
	[  +0.171344] systemd-fstab-generator[2940]: Ignoring "noauto" option for root device
	[  +0.138643] systemd-fstab-generator[2952]: Ignoring "noauto" option for root device
	[  +0.279343] systemd-fstab-generator[2980]: Ignoring "noauto" option for root device
	[  +0.718595] systemd-fstab-generator[3070]: Ignoring "noauto" option for root device
	[  +2.178122] systemd-fstab-generator[3193]: Ignoring "noauto" option for root device
	[  +4.699068] kauditd_printk_skb: 184 callbacks suppressed
	[ +11.680556] systemd-fstab-generator[4044]: Ignoring "noauto" option for root device
	[  +0.106179] kauditd_printk_skb: 34 callbacks suppressed
	[Sep16 11:13] kauditd_printk_skb: 12 callbacks suppressed
	[Sep16 11:18] systemd-fstab-generator[5219]: Ignoring "noauto" option for root device
	[  +0.143576] systemd-fstab-generator[5231]: Ignoring "noauto" option for root device
	[  +0.160223] systemd-fstab-generator[5245]: Ignoring "noauto" option for root device
	[  +0.145872] systemd-fstab-generator[5257]: Ignoring "noauto" option for root device
	[  +0.285029] systemd-fstab-generator[5285]: Ignoring "noauto" option for root device
	[ +11.585515] systemd-fstab-generator[5394]: Ignoring "noauto" option for root device
	[  +0.079359] kauditd_printk_skb: 100 callbacks suppressed
	[  +2.044123] systemd-fstab-generator[5517]: Ignoring "noauto" option for root device
	[  +4.203095] kauditd_printk_skb: 70 callbacks suppressed
	[  +5.587022] kauditd_printk_skb: 10 callbacks suppressed
	[ +12.568786] kauditd_printk_skb: 15 callbacks suppressed
	[  +2.161318] systemd-fstab-generator[6435]: Ignoring "noauto" option for root device
	[Sep16 11:19] kauditd_printk_skb: 29 callbacks suppressed
	
	
	==> etcd [aafbbad78159493e0125071f315b0c74e4ccc767357fd1ef2d53a04310e7d806] <==
	{"level":"info","ts":"2024-09-16T11:18:30.984408Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c05646b7156589 switched to configuration voters=(15330347993288500617)"}
	{"level":"info","ts":"2024-09-16T11:18:30.984526Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68bdcbcbc4b793bb","local-member-id":"d4c05646b7156589","added-peer-id":"d4c05646b7156589","added-peer-peer-urls":["https://192.168.39.32:2380"]}
	{"level":"info","ts":"2024-09-16T11:18:30.984647Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68bdcbcbc4b793bb","local-member-id":"d4c05646b7156589","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:18:30.984698Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:18:30.986462Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T11:18:30.986694Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"d4c05646b7156589","initial-advertise-peer-urls":["https://192.168.39.32:2380"],"listen-peer-urls":["https://192.168.39.32:2380"],"advertise-client-urls":["https://192.168.39.32:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.32:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T11:18:30.986733Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T11:18:30.986867Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.32:2380"}
	{"level":"info","ts":"2024-09-16T11:18:30.986890Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.32:2380"}
	{"level":"info","ts":"2024-09-16T11:18:32.458971Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c05646b7156589 is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-16T11:18:32.459030Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c05646b7156589 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-16T11:18:32.459076Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c05646b7156589 received MsgPreVoteResp from d4c05646b7156589 at term 3"}
	{"level":"info","ts":"2024-09-16T11:18:32.459096Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c05646b7156589 became candidate at term 4"}
	{"level":"info","ts":"2024-09-16T11:18:32.459101Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c05646b7156589 received MsgVoteResp from d4c05646b7156589 at term 4"}
	{"level":"info","ts":"2024-09-16T11:18:32.459111Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c05646b7156589 became leader at term 4"}
	{"level":"info","ts":"2024-09-16T11:18:32.459118Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d4c05646b7156589 elected leader d4c05646b7156589 at term 4"}
	{"level":"info","ts":"2024-09-16T11:18:32.465343Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T11:18:32.465265Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"d4c05646b7156589","local-member-attributes":"{Name:multinode-736061 ClientURLs:[https://192.168.39.32:2379]}","request-path":"/0/members/d4c05646b7156589/attributes","cluster-id":"68bdcbcbc4b793bb","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T11:18:32.466525Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:18:32.466717Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T11:18:32.467108Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T11:18:32.467156Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T11:18:32.467510Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T11:18:32.467873Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:18:32.468877Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.32:2379"}
	
	
	==> etcd [ae1251600e6e87614f76ce47d26bb537d329e9c3726f16afd53b5917cabf6526] <==
	{"level":"info","ts":"2024-09-16T11:12:32.130461Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c05646b7156589 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-16T11:12:32.130485Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c05646b7156589 received MsgPreVoteResp from d4c05646b7156589 at term 2"}
	{"level":"info","ts":"2024-09-16T11:12:32.130501Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c05646b7156589 became candidate at term 3"}
	{"level":"info","ts":"2024-09-16T11:12:32.130507Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c05646b7156589 received MsgVoteResp from d4c05646b7156589 at term 3"}
	{"level":"info","ts":"2024-09-16T11:12:32.130515Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c05646b7156589 became leader at term 3"}
	{"level":"info","ts":"2024-09-16T11:12:32.130532Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d4c05646b7156589 elected leader d4c05646b7156589 at term 3"}
	{"level":"info","ts":"2024-09-16T11:12:32.136512Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"d4c05646b7156589","local-member-attributes":"{Name:multinode-736061 ClientURLs:[https://192.168.39.32:2379]}","request-path":"/0/members/d4c05646b7156589/attributes","cluster-id":"68bdcbcbc4b793bb","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T11:12:32.136525Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T11:12:32.136756Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T11:12:32.137155Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T11:12:32.137197Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T11:12:32.137926Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:12:32.137926Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:12:32.138897Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.32:2379"}
	{"level":"info","ts":"2024-09-16T11:12:32.139181Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T11:16:36.970990Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-16T11:16:36.971189Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-736061","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.32:2380"],"advertise-client-urls":["https://192.168.39.32:2379"]}
	{"level":"warn","ts":"2024-09-16T11:16:36.973667Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T11:16:37.006452Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T11:16:37.063712Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.32:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T11:16:37.063823Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.32:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-16T11:16:37.063944Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"d4c05646b7156589","current-leader-member-id":"d4c05646b7156589"}
	{"level":"info","ts":"2024-09-16T11:16:37.067438Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.32:2380"}
	{"level":"info","ts":"2024-09-16T11:16:37.067554Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.32:2380"}
	{"level":"info","ts":"2024-09-16T11:16:37.067564Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-736061","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.32:2380"],"advertise-client-urls":["https://192.168.39.32:2379"]}
	
	
	==> kernel <==
	 11:19:43 up 14 min,  0 users,  load average: 0.16, 0.23, 0.19
	Linux multinode-736061 5.10.207 #1 SMP Sun Sep 15 20:39:46 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [34160c655e5ab02a4a54bd049c7fbf32b5f1f714b64212932d9c2c16e2d4bf25] <==
	I0916 11:15:35.682705       1 main.go:322] Node multinode-736061-m02 has CIDR [10.244.1.0/24] 
	I0916 11:15:45.681812       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0916 11:15:45.682040       1 main.go:322] Node multinode-736061-m02 has CIDR [10.244.1.0/24] 
	I0916 11:15:45.682467       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0916 11:15:45.682508       1 main.go:299] handling current node
	I0916 11:15:55.685120       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0916 11:15:55.685391       1 main.go:299] handling current node
	I0916 11:15:55.685470       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0916 11:15:55.685513       1 main.go:322] Node multinode-736061-m02 has CIDR [10.244.1.0/24] 
	I0916 11:16:05.685267       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0916 11:16:05.685392       1 main.go:322] Node multinode-736061-m02 has CIDR [10.244.1.0/24] 
	I0916 11:16:05.685550       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0916 11:16:05.685583       1 main.go:299] handling current node
	I0916 11:16:15.690122       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0916 11:16:15.690152       1 main.go:299] handling current node
	I0916 11:16:15.690165       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0916 11:16:15.690169       1 main.go:322] Node multinode-736061-m02 has CIDR [10.244.1.0/24] 
	I0916 11:16:25.689644       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0916 11:16:25.689765       1 main.go:299] handling current node
	I0916 11:16:25.689793       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0916 11:16:25.689811       1 main.go:322] Node multinode-736061-m02 has CIDR [10.244.1.0/24] 
	I0916 11:16:35.682209       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0916 11:16:35.682266       1 main.go:299] handling current node
	I0916 11:16:35.682334       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0916 11:16:35.682346       1 main.go:322] Node multinode-736061-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [dc18fbbe7c937039880846a8bd4027af0b410236af758bf11b2c74b973cfdbdc] <==
	I0916 11:18:49.173938       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0916 11:18:49.680849       1 main.go:237] Error creating network policy controller: could not run nftables command: /dev/stdin:1:1-37: Error: Could not process rule: Operation not supported
	add table inet kube-network-policies
	^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
	, skipping network policies
	I0916 11:18:59.690445       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0916 11:18:59.690533       1 main.go:299] handling current node
	I0916 11:18:59.690919       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0916 11:18:59.690948       1 main.go:322] Node multinode-736061-m02 has CIDR [10.244.1.0/24] 
	I0916 11:19:09.686883       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0916 11:19:09.687062       1 main.go:299] handling current node
	I0916 11:19:09.687104       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0916 11:19:09.687124       1 main.go:322] Node multinode-736061-m02 has CIDR [10.244.1.0/24] 
	I0916 11:19:19.682026       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0916 11:19:19.682082       1 main.go:299] handling current node
	I0916 11:19:19.682102       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0916 11:19:19.682111       1 main.go:322] Node multinode-736061-m02 has CIDR [10.244.1.0/24] 
	I0916 11:19:29.683410       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0916 11:19:29.683460       1 main.go:299] handling current node
	I0916 11:19:29.683488       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0916 11:19:29.683496       1 main.go:322] Node multinode-736061-m02 has CIDR [10.244.1.0/24] 
	I0916 11:19:39.681228       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0916 11:19:39.681524       1 main.go:299] handling current node
	I0916 11:19:39.681592       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0916 11:19:39.681617       1 main.go:322] Node multinode-736061-m02 has CIDR [10.244.1.0/24] 
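
	(The second kindnet instance above skipped its network-policy controller because the `add table inet kube-network-policies` nftables call was rejected by the node kernel. A minimal sketch for re-running nft inside the same pod to confirm, assuming the kindnet image still carries the nft binary it invoked:)

	    # Reproduce the nftables capability check from within the kindnet pod
	    kubectl -n kube-system exec kindnet-qb4tq -- nft list tables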
	
	
	==> kube-apiserver [8fa850b5495ffd4ec6303d58bd055944e4fd2d6bd860b6b70273e719497bdf2d] <==
	I0916 11:12:33.508959       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0916 11:12:33.509043       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0916 11:12:33.509776       1 aggregator.go:171] initial CRD sync complete...
	I0916 11:12:33.509828       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 11:12:33.509857       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 11:12:33.546526       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0916 11:12:33.568509       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0916 11:12:33.568599       1 policy_source.go:224] refreshing policies
	I0916 11:12:33.589155       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0916 11:12:33.590889       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0916 11:12:33.590927       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0916 11:12:33.591376       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0916 11:12:33.596733       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0916 11:12:33.620595       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 11:12:33.621748       1 cache.go:39] Caches are synced for autoregister controller
	I0916 11:12:34.423228       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0916 11:12:35.891543       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 11:12:36.022725       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 11:12:36.049167       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 11:12:36.129506       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 11:12:36.139653       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 11:12:37.024276       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 11:12:37.124173       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 11:16:36.994933       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	W0916 11:16:36.996538       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [a4dde072277c681e084753b1ab9f94d7cff5bd259160ee994d4cf80f471c0bcc] <==
	I0916 11:18:33.983477       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0916 11:18:33.983631       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0916 11:18:33.983668       1 policy_source.go:224] refreshing policies
	I0916 11:18:33.983779       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0916 11:18:33.983820       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0916 11:18:33.984146       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0916 11:18:33.984183       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0916 11:18:33.988968       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0916 11:18:33.989660       1 shared_informer.go:320] Caches are synced for configmaps
	I0916 11:18:33.989866       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0916 11:18:33.989908       1 aggregator.go:171] initial CRD sync complete...
	I0916 11:18:33.989932       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 11:18:33.989953       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 11:18:33.989997       1 cache.go:39] Caches are synced for autoregister controller
	E0916 11:18:33.991340       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0916 11:18:33.993893       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0916 11:18:33.998607       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 11:18:34.786655       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0916 11:18:35.924832       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 11:18:36.169023       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 11:18:36.183656       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 11:18:36.305042       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 11:18:36.316885       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 11:18:37.103541       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 11:18:37.155807       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [126fd7058d64d004c8ecb83905bc9e04f7dcd51ba92c467d2ec0175a0aedb8d4] <==
	E0916 11:13:47.943787       1 range_allocator.go:433] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"multinode-736061-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.3.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-736061-m03"
	E0916 11:13:47.943838       1 range_allocator.go:246] "Unhandled Error" err="error syncing 'multinode-736061-m03': failed to patch node CIDR: Node \"multinode-736061-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.3.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I0916 11:13:47.943877       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:13:47.949840       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:13:47.952982       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:13:48.292993       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:13:51.924112       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:13:58.208795       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:14:06.228519       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:14:06.228610       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-736061-m02"
	I0916 11:14:06.246940       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:14:06.870268       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:14:10.875842       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:14:10.892575       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:14:11.443344       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-736061-m02"
	I0916 11:14:11.443755       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m03"
	I0916 11:14:51.890757       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m02"
	I0916 11:14:51.912413       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m02"
	I0916 11:14:51.920581       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="17.988064ms"
	I0916 11:14:51.920660       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="37.285µs"
	I0916 11:14:57.052188       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m02"
	I0916 11:15:16.791204       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-bvqrg"
	I0916 11:15:16.816034       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-bvqrg"
	I0916 11:15:16.816158       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-5hctk"
	I0916 11:15:16.838568       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-5hctk"
	
	
	==> kube-controller-manager [998aff3229d3d76fd844ddff79d4a467ad878de2d447e097ca762b88900ca1e9] <==
	I0916 11:19:21.277353       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m02"
	I0916 11:19:22.428528       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-736061-m02\" does not exist"
	I0916 11:19:22.438006       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-736061-m02" podCIDRs=["10.244.1.0/24"]
	I0916 11:19:22.438866       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m02"
	I0916 11:19:22.440999       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m02"
	I0916 11:19:22.454805       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m02"
	I0916 11:19:22.808598       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m02"
	I0916 11:19:23.155528       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m02"
	I0916 11:19:23.265087       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="57.199µs"
	I0916 11:19:23.320741       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="49.055µs"
	I0916 11:19:23.330832       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="43.865µs"
	I0916 11:19:23.333911       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="34.449µs"
	I0916 11:19:23.341079       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="32.061µs"
	I0916 11:19:23.343942       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="35.919µs"
	I0916 11:19:25.944395       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="105.881µs"
	I0916 11:19:26.763980       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m02"
	I0916 11:19:32.482923       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m02"
	I0916 11:19:40.143174       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m02"
	I0916 11:19:40.143274       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-736061-m02"
	I0916 11:19:40.154974       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m02"
	I0916 11:19:40.160588       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="40.371µs"
	I0916 11:19:40.178381       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="104.451µs"
	I0916 11:19:41.466351       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="8.649877ms"
	I0916 11:19:41.466447       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="47.132µs"
	I0916 11:19:41.762683       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-736061-m02"
	
	
	==> kube-proxy [2931c88ee2def7e56d74369a17f2838ae42c00c4618b88050c1774e1a506ffa8] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0916 11:18:50.093180       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0916 11:18:50.102150       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.32"]
	E0916 11:18:50.102238       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 11:18:50.139169       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0916 11:18:50.139363       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0916 11:18:50.139391       1 server_linux.go:169] "Using iptables Proxier"
	I0916 11:18:50.141947       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 11:18:50.142253       1 server.go:483] "Version info" version="v1.31.1"
	I0916 11:18:50.142349       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 11:18:50.143488       1 config.go:199] "Starting service config controller"
	I0916 11:18:50.143637       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 11:18:50.143530       1 config.go:105] "Starting endpoint slice config controller"
	I0916 11:18:50.143815       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 11:18:50.144060       1 config.go:328] "Starting node config controller"
	I0916 11:18:50.144085       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 11:18:50.243747       1 shared_informer.go:320] Caches are synced for service config
	I0916 11:18:50.243862       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 11:18:50.244170       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [2d81e17eebccf4b84f735e1cafbb3e77e8e76abd8f4d62edf876aa5d55fd0b0d] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0916 11:12:34.892799       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0916 11:12:34.920138       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.32"]
	E0916 11:12:34.920279       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 11:12:34.987651       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0916 11:12:34.987713       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0916 11:12:34.987739       1 server_linux.go:169] "Using iptables Proxier"
	I0916 11:12:34.996924       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 11:12:34.997221       1 server.go:483] "Version info" version="v1.31.1"
	I0916 11:12:34.997234       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 11:12:35.007220       1 config.go:199] "Starting service config controller"
	I0916 11:12:35.029098       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 11:12:35.025409       1 config.go:105] "Starting endpoint slice config controller"
	I0916 11:12:35.029156       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 11:12:35.029162       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 11:12:35.026457       1 config.go:328] "Starting node config controller"
	I0916 11:12:35.029234       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 11:12:35.130341       1 shared_informer.go:320] Caches are synced for node config
	I0916 11:12:35.130407       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [2e7284c90c8c712246e8ddc247d88f7825f37f3e3c689acda276e3052c41850d] <==
	I0916 11:12:31.748594       1 serving.go:386] Generated self-signed cert in-memory
	W0916 11:12:33.440575       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0916 11:12:33.440623       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0916 11:12:33.440633       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 11:12:33.440641       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 11:12:33.526991       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0916 11:12:33.527040       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 11:12:33.536502       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0916 11:12:33.536670       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 11:12:33.540976       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0916 11:12:33.544844       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0916 11:12:33.638485       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0916 11:16:36.970096       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [48a25b186adfebd82641fe66adcb92d0d96405961027d13834e75c2b076d4485] <==
	W0916 11:18:33.902870       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 11:18:33.902903       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0916 11:18:33.902955       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 11:18:33.902964       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:18:33.903066       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 11:18:33.903097       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:18:33.903171       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 11:18:33.903200       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 11:18:33.903368       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 11:18:33.903398       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:18:33.903515       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 11:18:33.903548       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:18:33.903593       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 11:18:33.903621       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:18:33.903666       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 11:18:33.903694       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:18:33.903739       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 11:18:33.903766       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:18:33.903848       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 11:18:33.903877       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 11:18:33.903924       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 11:18:33.903953       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:18:33.903966       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 11:18:33.903973       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0916 11:18:37.312870       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 11:18:36 multinode-736061 kubelet[5524]: E0916 11:18:36.864833    5524 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485516864058820,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:18:36 multinode-736061 kubelet[5524]: E0916 11:18:36.865140    5524 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485516864058820,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:18:39 multinode-736061 kubelet[5524]: I0916 11:18:39.732376    5524 scope.go:117] "RemoveContainer" containerID="35a7839cd57d08487c669b6c86350b4a2f5f3b241a502106911bebe57fc8af36"
	Sep 16 11:18:45 multinode-736061 kubelet[5524]: I0916 11:18:45.395137    5524 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 16 11:18:46 multinode-736061 kubelet[5524]: E0916 11:18:46.869140    5524 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485526868113701,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:18:46 multinode-736061 kubelet[5524]: E0916 11:18:46.869518    5524 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485526868113701,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:18:48 multinode-736061 kubelet[5524]: I0916 11:18:48.806558    5524 scope.go:117] "RemoveContainer" containerID="34160c655e5ab02a4a54bd049c7fbf32b5f1f714b64212932d9c2c16e2d4bf25"
	Sep 16 11:18:49 multinode-736061 kubelet[5524]: I0916 11:18:49.806015    5524 scope.go:117] "RemoveContainer" containerID="2d81e17eebccf4b84f735e1cafbb3e77e8e76abd8f4d62edf876aa5d55fd0b0d"
	Sep 16 11:18:49 multinode-736061 kubelet[5524]: I0916 11:18:49.807565    5524 scope.go:117] "RemoveContainer" containerID="87a99d0015cbca7c7d38632c9cf9a1a9a00ce88cbe573c612f87fc9ca64ae309"
	Sep 16 11:18:56 multinode-736061 kubelet[5524]: E0916 11:18:56.875753    5524 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485536875456444,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:18:56 multinode-736061 kubelet[5524]: E0916 11:18:56.875781    5524 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485536875456444,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:19:06 multinode-736061 kubelet[5524]: E0916 11:19:06.881276    5524 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485546878498723,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:19:06 multinode-736061 kubelet[5524]: E0916 11:19:06.881385    5524 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485546878498723,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:19:16 multinode-736061 kubelet[5524]: E0916 11:19:16.885549    5524 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485556884170838,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:19:16 multinode-736061 kubelet[5524]: E0916 11:19:16.886569    5524 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485556884170838,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:19:17 multinode-736061 kubelet[5524]: I0916 11:19:17.445529    5524 scope.go:117] "RemoveContainer" containerID="84517e6af45b4aaa0074acc2c9f529a29e3e476ef98b8a9b414655341b967c3b"
	Sep 16 11:19:26 multinode-736061 kubelet[5524]: E0916 11:19:26.861429    5524 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 16 11:19:26 multinode-736061 kubelet[5524]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 16 11:19:26 multinode-736061 kubelet[5524]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 16 11:19:26 multinode-736061 kubelet[5524]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 16 11:19:26 multinode-736061 kubelet[5524]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 16 11:19:26 multinode-736061 kubelet[5524]: E0916 11:19:26.887852    5524 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485566887561030,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:19:26 multinode-736061 kubelet[5524]: E0916 11:19:26.887902    5524 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485566887561030,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:19:36 multinode-736061 kubelet[5524]: E0916 11:19:36.889708    5524 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485576889205275,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:19:36 multinode-736061 kubelet[5524]: E0916 11:19:36.890100    5524 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726485576889205275,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0916 11:19:42.489883   43565 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19651-3851/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
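Aside: the "bufio.Scanner: token too long" error above is Go's bufio.Scanner hitting its default 64 KiB per-token limit while reading lastStart.txt, which evidently contains a very long line. A minimal sketch of how a log reader can raise that limit; the 1 MiB cap is an illustrative choice, not what minikube actually uses:

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		// Path taken from the error message above.
		f, err := os.Open("/home/jenkins/minikube-integration/19651-3851/.minikube/logs/lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		scanner := bufio.NewScanner(f)
		// bufio's default MaxScanTokenSize is 64 KiB; allow lines up to 1 MiB instead.
		scanner.Buffer(make([]byte, 0, 64*1024), 1024*1024)
		for scanner.Scan() {
			fmt.Println(scanner.Text())
		}
		if err := scanner.Err(); err != nil {
			// Still reports "token too long" if a single line exceeds the raised cap.
			fmt.Fprintln(os.Stderr, "scan error:", err)
		}
	}

With the larger buffer, Scan() only fails once an individual line exceeds the new cap.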
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-736061 -n multinode-736061
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-736061 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context multinode-736061 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (500.069µs)
helpers_test.go:263: kubectl --context multinode-736061 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestMultiNode/serial/RestartMultiNode (188.29s)
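Note: the "fork/exec /usr/local/bin/kubectl: exec format error" seen here (and in several other failures in this report) typically means the kubectl binary on the test host was built for a different CPU architecture than the amd64 machine running the suite, or is not a valid executable at all. A hedged diagnostic sketch, not part of the test suite, that compares the binary's ELF machine type with the running architecture:

	package main

	import (
		"debug/elf"
		"fmt"
		"os"
		"runtime"
	)

	func main() {
		const path = "/usr/local/bin/kubectl" // path from the failure message above

		f, err := elf.Open(path)
		if err != nil {
			// A non-ELF file (truncated download, script without a shebang, ...)
			// also fails with "exec format error" when executed directly.
			fmt.Fprintf(os.Stderr, "not a readable ELF binary: %v\n", err)
			os.Exit(1)
		}
		defer f.Close()

		fmt.Printf("binary machine: %v, host GOARCH: %s\n", f.Machine, runtime.GOARCH)
		if runtime.GOARCH == "amd64" && f.Machine != elf.EM_X86_64 {
			fmt.Println("architecture mismatch: executing this binary gives 'exec format error'")
		}
	}

On this amd64 host, any machine type other than EM_X86_64 (for example an arm64 build installed by mistake) would reproduce the error.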

                                                
                                    
x
+
TestPreload (267.99s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-090952 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0916 11:21:28.278497   11203 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-090952 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m6.587586569s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-090952 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-090952 image pull gcr.io/k8s-minikube/busybox: (1.134129531s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-090952
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-090952: exit status 82 (2m0.4598215s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-090952"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-090952 failed: exit status 82
panic.go:629: *** TestPreload FAILED at 2024-09-16 11:24:40.788984262 +0000 UTC m=+3795.012102713
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-090952 -n test-preload-090952
E0916 11:24:51.890227   11203 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-090952 -n test-preload-090952: exit status 3 (18.664115181s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0916 11:24:59.449492   45454 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.205:22: connect: no route to host
	E0916 11:24:59.449514   45454 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.205:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-090952" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-090952" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-090952
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-090952: (1.143903479s)
--- FAIL: TestPreload (267.99s)
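For context, the preload steps themselves passed; the failure comes from "minikube stop" exiting 82 after the GUEST_STOP_TIMEOUT shown above, with the VM still reported as "Running" and later unreachable over SSH. A rough sketch of the kind of post-mortem check the harness runs next, reusing the same status invocation that appears in the log (illustrative only; the real logic lives in helpers_test.go):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		profile := "test-preload-090952" // profile name from the log above

		// Same status invocation the harness uses for its post-mortem.
		out, err := exec.Command("out/minikube-linux-amd64",
			"status", "--format={{.Host}}", "-p", profile, "-n", profile).CombinedOutput()
		state := strings.TrimSpace(string(out))
		fmt.Printf("host state: %q (err: %v)\n", state, err)

		// "Stopped" as the expected post-stop value is an assumption here;
		// in the log above the command instead reported "Error" with SSH unreachable.
		if state != "Stopped" {
			fmt.Println("stop did not complete cleanly; VM still up or unreachable")
		}
	}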

                                                
                                    
x
+
TestKubernetesUpgrade (393.19s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-045794 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-045794 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (5m19.396972488s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-045794] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19651
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19651-3851/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3851/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-045794" primary control-plane node in "kubernetes-upgrade-045794" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 11:27:46.584084   48743 out.go:345] Setting OutFile to fd 1 ...
	I0916 11:27:46.584206   48743 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:27:46.584217   48743 out.go:358] Setting ErrFile to fd 2...
	I0916 11:27:46.584221   48743 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:27:46.584398   48743 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3851/.minikube/bin
	I0916 11:27:46.584965   48743 out.go:352] Setting JSON to false
	I0916 11:27:46.585884   48743 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4217,"bootTime":1726481850,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 11:27:46.585981   48743 start.go:139] virtualization: kvm guest
	I0916 11:27:46.588208   48743 out.go:177] * [kubernetes-upgrade-045794] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 11:27:46.589894   48743 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 11:27:46.589967   48743 notify.go:220] Checking for updates...
	I0916 11:27:46.592285   48743 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 11:27:46.593601   48743 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3851/kubeconfig
	I0916 11:27:46.594866   48743 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 11:27:46.596092   48743 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 11:27:46.597235   48743 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 11:27:46.598720   48743 config.go:182] Loaded profile config "NoKubernetes-668924": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:27:46.598852   48743 config.go:182] Loaded profile config "offline-crio-650886": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:27:46.598949   48743 config.go:182] Loaded profile config "running-upgrade-682717": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0916 11:27:46.599047   48743 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 11:27:46.633069   48743 out.go:177] * Using the kvm2 driver based on user configuration
	I0916 11:27:46.634237   48743 start.go:297] selected driver: kvm2
	I0916 11:27:46.634252   48743 start.go:901] validating driver "kvm2" against <nil>
	I0916 11:27:46.634266   48743 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 11:27:46.635247   48743 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:27:46.635349   48743 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19651-3851/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0916 11:27:46.650016   48743 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0916 11:27:46.650073   48743 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 11:27:46.650318   48743 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0916 11:27:46.650356   48743 cni.go:84] Creating CNI manager for ""
	I0916 11:27:46.650407   48743 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0916 11:27:46.650417   48743 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0916 11:27:46.650487   48743 start.go:340] cluster config:
	{Name:kubernetes-upgrade-045794 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-045794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:27:46.650614   48743 iso.go:125] acquiring lock: {Name:mk8165b793e44378487d96b0de120a258f46e187 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:27:46.652341   48743 out.go:177] * Starting "kubernetes-upgrade-045794" primary control-plane node in "kubernetes-upgrade-045794" cluster
	I0916 11:27:46.653464   48743 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0916 11:27:46.653493   48743 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0916 11:27:46.653504   48743 cache.go:56] Caching tarball of preloaded images
	I0916 11:27:46.653579   48743 preload.go:172] Found /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 11:27:46.653593   48743 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0916 11:27:46.653691   48743 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/kubernetes-upgrade-045794/config.json ...
	I0916 11:27:46.653715   48743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/kubernetes-upgrade-045794/config.json: {Name:mk4fe2f12ce30c244057aa5eab227941c8ebf12a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:27:46.653862   48743 start.go:360] acquireMachinesLock for kubernetes-upgrade-045794: {Name:mk413037138532b69f77e412c3960796c09316f1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 11:28:35.781741   48743 start.go:364] duration metric: took 49.127831911s to acquireMachinesLock for "kubernetes-upgrade-045794"
	I0916 11:28:35.781826   48743 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-045794 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-045794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 11:28:35.781930   48743 start.go:125] createHost starting for "" (driver="kvm2")
	I0916 11:28:35.784791   48743 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0916 11:28:35.784995   48743 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 11:28:35.785048   48743 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 11:28:35.805121   48743 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37629
	I0916 11:28:35.805648   48743 main.go:141] libmachine: () Calling .GetVersion
	I0916 11:28:35.806213   48743 main.go:141] libmachine: Using API Version  1
	I0916 11:28:35.806235   48743 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 11:28:35.806617   48743 main.go:141] libmachine: () Calling .GetMachineName
	I0916 11:28:35.806837   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetMachineName
	I0916 11:28:35.806985   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .DriverName
	I0916 11:28:35.807127   48743 start.go:159] libmachine.API.Create for "kubernetes-upgrade-045794" (driver="kvm2")
	I0916 11:28:35.807158   48743 client.go:168] LocalClient.Create starting
	I0916 11:28:35.807195   48743 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem
	I0916 11:28:35.807236   48743 main.go:141] libmachine: Decoding PEM data...
	I0916 11:28:35.807263   48743 main.go:141] libmachine: Parsing certificate...
	I0916 11:28:35.807326   48743 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem
	I0916 11:28:35.807348   48743 main.go:141] libmachine: Decoding PEM data...
	I0916 11:28:35.807369   48743 main.go:141] libmachine: Parsing certificate...
	I0916 11:28:35.807391   48743 main.go:141] libmachine: Running pre-create checks...
	I0916 11:28:35.807407   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .PreCreateCheck
	I0916 11:28:35.807746   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetConfigRaw
	I0916 11:28:35.808202   48743 main.go:141] libmachine: Creating machine...
	I0916 11:28:35.808218   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .Create
	I0916 11:28:35.808373   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Creating KVM machine...
	I0916 11:28:35.809723   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | found existing default KVM network
	I0916 11:28:35.811353   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | I0916 11:28:35.811209   49358 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:67:98:f5} reservation:<nil>}
	I0916 11:28:35.812155   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | I0916 11:28:35.812076   49358 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:d4:08:5b} reservation:<nil>}
	I0916 11:28:35.812972   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | I0916 11:28:35.812896   49358 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:31:a6:f2} reservation:<nil>}
	I0916 11:28:35.814108   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | I0916 11:28:35.813999   49358 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0003811e0}
	I0916 11:28:35.814137   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | created network xml: 
	I0916 11:28:35.814149   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | <network>
	I0916 11:28:35.814160   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG |   <name>mk-kubernetes-upgrade-045794</name>
	I0916 11:28:35.814169   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG |   <dns enable='no'/>
	I0916 11:28:35.814185   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG |   
	I0916 11:28:35.814194   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0916 11:28:35.814205   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG |     <dhcp>
	I0916 11:28:35.814214   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0916 11:28:35.814218   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG |     </dhcp>
	I0916 11:28:35.814226   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG |   </ip>
	I0916 11:28:35.814230   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG |   
	I0916 11:28:35.814237   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | </network>
	I0916 11:28:35.814241   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | 
	I0916 11:28:35.819684   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | trying to create private KVM network mk-kubernetes-upgrade-045794 192.168.72.0/24...
	I0916 11:28:35.891730   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | private KVM network mk-kubernetes-upgrade-045794 192.168.72.0/24 created
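
The lines above show the driver skipping 192.168.39.0/24, 192.168.50.0/24 and 192.168.61.0/24 (already claimed by existing virbr bridges) before settling on 192.168.72.0/24 for the new private network. As a rough illustration of that kind of check, not minikube's actual network.go, a helper (hypothetical name firstFreeSubnet) could compare candidate /24s against the host's interface addresses:

    package main

    import (
        "fmt"
        "net"
    )

    // firstFreeSubnet returns the first candidate /24 whose network address is not
    // already in use by a local interface (e.g. an existing virbrX bridge).
    // Illustrative sketch only.
    func firstFreeSubnet(candidates []string) (string, error) {
        taken := map[string]bool{}
        ifaces, err := net.Interfaces()
        if err != nil {
            return "", err
        }
        for _, ifc := range ifaces {
            addrs, err := ifc.Addrs()
            if err != nil {
                continue
            }
            for _, a := range addrs {
                if ipnet, ok := a.(*net.IPNet); ok {
                    // record the network address of each local subnet
                    taken[ipnet.IP.Mask(ipnet.Mask).String()] = true
                }
            }
        }
        for _, cidr := range candidates {
            ip, _, err := net.ParseCIDR(cidr)
            if err != nil {
                continue
            }
            if !taken[ip.String()] {
                return cidr, nil
            }
        }
        return "", fmt.Errorf("no free subnet among %v", candidates)
    }

    func main() {
        cidr, err := firstFreeSubnet([]string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24", "192.168.72.0/24"})
        if err != nil {
            panic(err)
        }
        fmt.Println("using free private subnet", cidr)
    }
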
	I0916 11:28:35.891801   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | I0916 11:28:35.891686   49358 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 11:28:35.891845   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Setting up store path in /home/jenkins/minikube-integration/19651-3851/.minikube/machines/kubernetes-upgrade-045794 ...
	I0916 11:28:35.891869   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Building disk image from file:///home/jenkins/minikube-integration/19651-3851/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso
	I0916 11:28:35.891892   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Downloading /home/jenkins/minikube-integration/19651-3851/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19651-3851/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso...
	I0916 11:28:36.130796   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | I0916 11:28:36.130648   49358 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/kubernetes-upgrade-045794/id_rsa...
	I0916 11:28:36.207785   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | I0916 11:28:36.207659   49358 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/kubernetes-upgrade-045794/kubernetes-upgrade-045794.rawdisk...
	I0916 11:28:36.207814   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | Writing magic tar header
	I0916 11:28:36.207830   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | Writing SSH key tar header
	I0916 11:28:36.207842   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | I0916 11:28:36.207775   49358 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19651-3851/.minikube/machines/kubernetes-upgrade-045794 ...
	I0916 11:28:36.207918   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/kubernetes-upgrade-045794
	I0916 11:28:36.207943   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851/.minikube/machines
	I0916 11:28:36.207952   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851/.minikube/machines/kubernetes-upgrade-045794 (perms=drwx------)
	I0916 11:28:36.207961   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851/.minikube/machines (perms=drwxr-xr-x)
	I0916 11:28:36.207968   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851/.minikube (perms=drwxr-xr-x)
	I0916 11:28:36.207975   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 11:28:36.207982   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851
	I0916 11:28:36.207990   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0916 11:28:36.207996   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | Checking permissions on dir: /home/jenkins
	I0916 11:28:36.208003   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | Checking permissions on dir: /home
	I0916 11:28:36.208011   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | Skipping /home - not owner
	I0916 11:28:36.208060   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851 (perms=drwxrwxr-x)
	I0916 11:28:36.208081   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0916 11:28:36.208090   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
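
The permission pass above walks from the machine directory up toward /home, adding execute bits on directories the Jenkins user owns and skipping /home because it is not the owner. A Linux-only sketch of that walk (hypothetical helper ensureExecutable; the real code sets specific per-directory modes rather than just adding execute bits):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "syscall"
    )

    // ensureExecutable walks from dir up toward stop, adding execute bits on every
    // directory owned by the current user and skipping directories it does not own.
    func ensureExecutable(dir, stop string) {
        uid := uint32(os.Getuid())
        for {
            info, err := os.Stat(dir)
            if err != nil {
                return
            }
            if st, ok := info.Sys().(*syscall.Stat_t); ok && st.Uid != uid {
                fmt.Println("Skipping", dir, "- not owner")
            } else if err := os.Chmod(dir, info.Mode()|0o111); err == nil {
                fmt.Println("Setting executable bit set on", dir)
            }
            if dir == stop || dir == "/" {
                return
            }
            dir = filepath.Dir(dir)
        }
    }

    func main() {
        ensureExecutable("/home/jenkins/minikube-integration/19651-3851/.minikube/machines", "/home")
    }
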
	I0916 11:28:36.208098   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Creating domain...
	I0916 11:28:36.209056   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) define libvirt domain using xml: 
	I0916 11:28:36.209083   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) <domain type='kvm'>
	I0916 11:28:36.209093   48743 main.go:141] libmachine: (kubernetes-upgrade-045794)   <name>kubernetes-upgrade-045794</name>
	I0916 11:28:36.209102   48743 main.go:141] libmachine: (kubernetes-upgrade-045794)   <memory unit='MiB'>2200</memory>
	I0916 11:28:36.209109   48743 main.go:141] libmachine: (kubernetes-upgrade-045794)   <vcpu>2</vcpu>
	I0916 11:28:36.209117   48743 main.go:141] libmachine: (kubernetes-upgrade-045794)   <features>
	I0916 11:28:36.209151   48743 main.go:141] libmachine: (kubernetes-upgrade-045794)     <acpi/>
	I0916 11:28:36.209165   48743 main.go:141] libmachine: (kubernetes-upgrade-045794)     <apic/>
	I0916 11:28:36.209173   48743 main.go:141] libmachine: (kubernetes-upgrade-045794)     <pae/>
	I0916 11:28:36.209180   48743 main.go:141] libmachine: (kubernetes-upgrade-045794)     
	I0916 11:28:36.209189   48743 main.go:141] libmachine: (kubernetes-upgrade-045794)   </features>
	I0916 11:28:36.209199   48743 main.go:141] libmachine: (kubernetes-upgrade-045794)   <cpu mode='host-passthrough'>
	I0916 11:28:36.209210   48743 main.go:141] libmachine: (kubernetes-upgrade-045794)   
	I0916 11:28:36.209216   48743 main.go:141] libmachine: (kubernetes-upgrade-045794)   </cpu>
	I0916 11:28:36.209225   48743 main.go:141] libmachine: (kubernetes-upgrade-045794)   <os>
	I0916 11:28:36.209232   48743 main.go:141] libmachine: (kubernetes-upgrade-045794)     <type>hvm</type>
	I0916 11:28:36.209256   48743 main.go:141] libmachine: (kubernetes-upgrade-045794)     <boot dev='cdrom'/>
	I0916 11:28:36.209275   48743 main.go:141] libmachine: (kubernetes-upgrade-045794)     <boot dev='hd'/>
	I0916 11:28:36.209286   48743 main.go:141] libmachine: (kubernetes-upgrade-045794)     <bootmenu enable='no'/>
	I0916 11:28:36.209291   48743 main.go:141] libmachine: (kubernetes-upgrade-045794)   </os>
	I0916 11:28:36.209296   48743 main.go:141] libmachine: (kubernetes-upgrade-045794)   <devices>
	I0916 11:28:36.209304   48743 main.go:141] libmachine: (kubernetes-upgrade-045794)     <disk type='file' device='cdrom'>
	I0916 11:28:36.209313   48743 main.go:141] libmachine: (kubernetes-upgrade-045794)       <source file='/home/jenkins/minikube-integration/19651-3851/.minikube/machines/kubernetes-upgrade-045794/boot2docker.iso'/>
	I0916 11:28:36.209322   48743 main.go:141] libmachine: (kubernetes-upgrade-045794)       <target dev='hdc' bus='scsi'/>
	I0916 11:28:36.209327   48743 main.go:141] libmachine: (kubernetes-upgrade-045794)       <readonly/>
	I0916 11:28:36.209339   48743 main.go:141] libmachine: (kubernetes-upgrade-045794)     </disk>
	I0916 11:28:36.209347   48743 main.go:141] libmachine: (kubernetes-upgrade-045794)     <disk type='file' device='disk'>
	I0916 11:28:36.209353   48743 main.go:141] libmachine: (kubernetes-upgrade-045794)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0916 11:28:36.209364   48743 main.go:141] libmachine: (kubernetes-upgrade-045794)       <source file='/home/jenkins/minikube-integration/19651-3851/.minikube/machines/kubernetes-upgrade-045794/kubernetes-upgrade-045794.rawdisk'/>
	I0916 11:28:36.209373   48743 main.go:141] libmachine: (kubernetes-upgrade-045794)       <target dev='hda' bus='virtio'/>
	I0916 11:28:36.209380   48743 main.go:141] libmachine: (kubernetes-upgrade-045794)     </disk>
	I0916 11:28:36.209387   48743 main.go:141] libmachine: (kubernetes-upgrade-045794)     <interface type='network'>
	I0916 11:28:36.209392   48743 main.go:141] libmachine: (kubernetes-upgrade-045794)       <source network='mk-kubernetes-upgrade-045794'/>
	I0916 11:28:36.209398   48743 main.go:141] libmachine: (kubernetes-upgrade-045794)       <model type='virtio'/>
	I0916 11:28:36.209403   48743 main.go:141] libmachine: (kubernetes-upgrade-045794)     </interface>
	I0916 11:28:36.209426   48743 main.go:141] libmachine: (kubernetes-upgrade-045794)     <interface type='network'>
	I0916 11:28:36.209451   48743 main.go:141] libmachine: (kubernetes-upgrade-045794)       <source network='default'/>
	I0916 11:28:36.209459   48743 main.go:141] libmachine: (kubernetes-upgrade-045794)       <model type='virtio'/>
	I0916 11:28:36.209468   48743 main.go:141] libmachine: (kubernetes-upgrade-045794)     </interface>
	I0916 11:28:36.209477   48743 main.go:141] libmachine: (kubernetes-upgrade-045794)     <serial type='pty'>
	I0916 11:28:36.209487   48743 main.go:141] libmachine: (kubernetes-upgrade-045794)       <target port='0'/>
	I0916 11:28:36.209497   48743 main.go:141] libmachine: (kubernetes-upgrade-045794)     </serial>
	I0916 11:28:36.209507   48743 main.go:141] libmachine: (kubernetes-upgrade-045794)     <console type='pty'>
	I0916 11:28:36.209517   48743 main.go:141] libmachine: (kubernetes-upgrade-045794)       <target type='serial' port='0'/>
	I0916 11:28:36.209527   48743 main.go:141] libmachine: (kubernetes-upgrade-045794)     </console>
	I0916 11:28:36.209535   48743 main.go:141] libmachine: (kubernetes-upgrade-045794)     <rng model='virtio'>
	I0916 11:28:36.209546   48743 main.go:141] libmachine: (kubernetes-upgrade-045794)       <backend model='random'>/dev/random</backend>
	I0916 11:28:36.209555   48743 main.go:141] libmachine: (kubernetes-upgrade-045794)     </rng>
	I0916 11:28:36.209574   48743 main.go:141] libmachine: (kubernetes-upgrade-045794)     
	I0916 11:28:36.209587   48743 main.go:141] libmachine: (kubernetes-upgrade-045794)     
	I0916 11:28:36.209594   48743 main.go:141] libmachine: (kubernetes-upgrade-045794)   </devices>
	I0916 11:28:36.209606   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) </domain>
	I0916 11:28:36.209615   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) 
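
The domain XML printed above is handed to libvirt, which defines and boots the guest. The kvm2 driver talks to libvirt directly through its API; as a rough stand-in for what happens at this step, the same effect can be had by shelling out to virsh once the XML has been saved to a file (paths and names below are placeholders):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // defineAndStart asks libvirt, via the virsh CLI, to define the domain from
    // xmlPath and then start it. Illustrative only; not the driver's actual code.
    func defineAndStart(xmlPath, domainName string) error {
        if out, err := exec.Command("virsh", "define", xmlPath).CombinedOutput(); err != nil {
            return fmt.Errorf("virsh define: %v: %s", err, out)
        }
        if out, err := exec.Command("virsh", "start", domainName).CombinedOutput(); err != nil {
            return fmt.Errorf("virsh start: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        if err := defineAndStart("/tmp/kubernetes-upgrade-045794.xml", "kubernetes-upgrade-045794"); err != nil {
            fmt.Println(err)
        }
    }
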
	I0916 11:28:36.213913   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined MAC address 52:54:00:16:46:6b in network default
	I0916 11:28:36.214467   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Ensuring networks are active...
	I0916 11:28:36.214492   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:28:36.215182   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Ensuring network default is active
	I0916 11:28:36.215489   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Ensuring network mk-kubernetes-upgrade-045794 is active
	I0916 11:28:36.215988   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Getting domain xml...
	I0916 11:28:36.216606   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Creating domain...
	I0916 11:28:37.470801   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Waiting to get IP...
	I0916 11:28:37.471611   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:28:37.471974   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | unable to find current IP address of domain kubernetes-upgrade-045794 in network mk-kubernetes-upgrade-045794
	I0916 11:28:37.472015   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | I0916 11:28:37.471957   49358 retry.go:31] will retry after 248.356927ms: waiting for machine to come up
	I0916 11:28:37.722415   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:28:37.722929   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | unable to find current IP address of domain kubernetes-upgrade-045794 in network mk-kubernetes-upgrade-045794
	I0916 11:28:37.722951   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | I0916 11:28:37.722886   49358 retry.go:31] will retry after 245.785695ms: waiting for machine to come up
	I0916 11:28:37.970419   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:28:37.970856   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | unable to find current IP address of domain kubernetes-upgrade-045794 in network mk-kubernetes-upgrade-045794
	I0916 11:28:37.970884   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | I0916 11:28:37.970809   49358 retry.go:31] will retry after 389.319315ms: waiting for machine to come up
	I0916 11:28:38.361296   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:28:38.361751   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | unable to find current IP address of domain kubernetes-upgrade-045794 in network mk-kubernetes-upgrade-045794
	I0916 11:28:38.361777   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | I0916 11:28:38.361693   49358 retry.go:31] will retry after 397.229082ms: waiting for machine to come up
	I0916 11:28:38.760326   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:28:38.760873   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | unable to find current IP address of domain kubernetes-upgrade-045794 in network mk-kubernetes-upgrade-045794
	I0916 11:28:38.760897   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | I0916 11:28:38.760815   49358 retry.go:31] will retry after 540.550608ms: waiting for machine to come up
	I0916 11:28:39.303210   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:28:39.303672   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | unable to find current IP address of domain kubernetes-upgrade-045794 in network mk-kubernetes-upgrade-045794
	I0916 11:28:39.303702   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | I0916 11:28:39.303615   49358 retry.go:31] will retry after 911.769997ms: waiting for machine to come up
	I0916 11:28:40.216824   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:28:40.217336   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | unable to find current IP address of domain kubernetes-upgrade-045794 in network mk-kubernetes-upgrade-045794
	I0916 11:28:40.217364   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | I0916 11:28:40.217243   49358 retry.go:31] will retry after 1.15743906s: waiting for machine to come up
	I0916 11:28:41.376623   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:28:41.377031   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | unable to find current IP address of domain kubernetes-upgrade-045794 in network mk-kubernetes-upgrade-045794
	I0916 11:28:41.377073   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | I0916 11:28:41.376987   49358 retry.go:31] will retry after 1.356696789s: waiting for machine to come up
	I0916 11:28:42.735636   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:28:42.736086   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | unable to find current IP address of domain kubernetes-upgrade-045794 in network mk-kubernetes-upgrade-045794
	I0916 11:28:42.736116   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | I0916 11:28:42.736022   49358 retry.go:31] will retry after 1.515953065s: waiting for machine to come up
	I0916 11:28:44.253533   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:28:44.254024   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | unable to find current IP address of domain kubernetes-upgrade-045794 in network mk-kubernetes-upgrade-045794
	I0916 11:28:44.254051   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | I0916 11:28:44.253973   49358 retry.go:31] will retry after 1.630502494s: waiting for machine to come up
	I0916 11:28:45.885706   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:28:45.886251   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | unable to find current IP address of domain kubernetes-upgrade-045794 in network mk-kubernetes-upgrade-045794
	I0916 11:28:45.886279   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | I0916 11:28:45.886204   49358 retry.go:31] will retry after 2.722869048s: waiting for machine to come up
	I0916 11:28:48.610301   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:28:48.610752   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | unable to find current IP address of domain kubernetes-upgrade-045794 in network mk-kubernetes-upgrade-045794
	I0916 11:28:48.610782   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | I0916 11:28:48.610708   49358 retry.go:31] will retry after 2.222691551s: waiting for machine to come up
	I0916 11:28:50.834749   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:28:50.835351   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | unable to find current IP address of domain kubernetes-upgrade-045794 in network mk-kubernetes-upgrade-045794
	I0916 11:28:50.835379   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | I0916 11:28:50.835281   49358 retry.go:31] will retry after 3.03009977s: waiting for machine to come up
	I0916 11:28:53.866702   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:28:53.867225   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | unable to find current IP address of domain kubernetes-upgrade-045794 in network mk-kubernetes-upgrade-045794
	I0916 11:28:53.867251   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | I0916 11:28:53.867174   49358 retry.go:31] will retry after 5.325314328s: waiting for machine to come up
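
The repeated "will retry after ..." lines come from a backoff loop that keeps polling the network's DHCP leases until the new domain shows an address. A minimal sketch of that polling pattern (the growth factor and the fake check below are assumptions; minikube's retry.go uses its own jittered backoff):

    package main

    import (
        "fmt"
        "time"
    )

    // waitForIP polls check() with a growing delay until it reports an address or
    // the timeout expires. Illustrative of the retry pattern in the log above.
    func waitForIP(check func() (string, bool), timeout time.Duration) (string, error) {
        delay := 250 * time.Millisecond
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if ip, ok := check(); ok {
                return ip, nil
            }
            fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
            time.Sleep(delay)
            delay = delay * 3 / 2 // grow the wait between polls
        }
        return "", fmt.Errorf("timed out waiting for an IP")
    }

    func main() {
        start := time.Now()
        ip, err := waitForIP(func() (string, bool) {
            // stand-in for "look up the domain's lease in the libvirt network"
            if time.Since(start) > 2*time.Second {
                return "192.168.72.174", true
            }
            return "", false
        }, 30*time.Second)
        fmt.Println(ip, err)
    }
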
	I0916 11:28:59.194309   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:28:59.194739   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Found IP for machine: 192.168.72.174
	I0916 11:28:59.194756   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Reserving static IP address...
	I0916 11:28:59.194770   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has current primary IP address 192.168.72.174 and MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:28:59.195103   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-045794", mac: "52:54:00:45:c2:93", ip: "192.168.72.174"} in network mk-kubernetes-upgrade-045794
	I0916 11:28:59.276710   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Reserved static IP address: 192.168.72.174
	I0916 11:28:59.276742   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | Getting to WaitForSSH function...
	I0916 11:28:59.276751   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Waiting for SSH to be available...
	I0916 11:28:59.279306   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:28:59.279707   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c2:93", ip: ""} in network mk-kubernetes-upgrade-045794: {Iface:virbr4 ExpiryTime:2024-09-16 12:28:51 +0000 UTC Type:0 Mac:52:54:00:45:c2:93 Iaid: IPaddr:192.168.72.174 Prefix:24 Hostname:minikube Clientid:01:52:54:00:45:c2:93}
	I0916 11:28:59.279734   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined IP address 192.168.72.174 and MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:28:59.279837   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | Using SSH client type: external
	I0916 11:28:59.279875   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | Using SSH private key: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/kubernetes-upgrade-045794/id_rsa (-rw-------)
	I0916 11:28:59.279920   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.174 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19651-3851/.minikube/machines/kubernetes-upgrade-045794/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0916 11:28:59.279944   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | About to run SSH command:
	I0916 11:28:59.279960   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | exit 0
	I0916 11:28:59.401199   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | SSH cmd err, output: <nil>: 
	I0916 11:28:59.401486   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) KVM machine creation complete!
	I0916 11:28:59.401860   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetConfigRaw
	I0916 11:28:59.402409   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .DriverName
	I0916 11:28:59.402613   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .DriverName
	I0916 11:28:59.402761   48743 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0916 11:28:59.402774   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetState
	I0916 11:28:59.404117   48743 main.go:141] libmachine: Detecting operating system of created instance...
	I0916 11:28:59.404131   48743 main.go:141] libmachine: Waiting for SSH to be available...
	I0916 11:28:59.404136   48743 main.go:141] libmachine: Getting to WaitForSSH function...
	I0916 11:28:59.404142   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHHostname
	I0916 11:28:59.406551   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:28:59.406958   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c2:93", ip: ""} in network mk-kubernetes-upgrade-045794: {Iface:virbr4 ExpiryTime:2024-09-16 12:28:51 +0000 UTC Type:0 Mac:52:54:00:45:c2:93 Iaid: IPaddr:192.168.72.174 Prefix:24 Hostname:kubernetes-upgrade-045794 Clientid:01:52:54:00:45:c2:93}
	I0916 11:28:59.406984   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined IP address 192.168.72.174 and MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:28:59.407126   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHPort
	I0916 11:28:59.407293   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHKeyPath
	I0916 11:28:59.407458   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHKeyPath
	I0916 11:28:59.407579   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHUsername
	I0916 11:28:59.407705   48743 main.go:141] libmachine: Using SSH client type: native
	I0916 11:28:59.407893   48743 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.174 22 <nil> <nil>}
	I0916 11:28:59.407903   48743 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0916 11:28:59.508776   48743 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 11:28:59.508828   48743 main.go:141] libmachine: Detecting the provisioner...
	I0916 11:28:59.508838   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHHostname
	I0916 11:28:59.512207   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:28:59.512641   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c2:93", ip: ""} in network mk-kubernetes-upgrade-045794: {Iface:virbr4 ExpiryTime:2024-09-16 12:28:51 +0000 UTC Type:0 Mac:52:54:00:45:c2:93 Iaid: IPaddr:192.168.72.174 Prefix:24 Hostname:kubernetes-upgrade-045794 Clientid:01:52:54:00:45:c2:93}
	I0916 11:28:59.512666   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined IP address 192.168.72.174 and MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:28:59.512953   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHPort
	I0916 11:28:59.513177   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHKeyPath
	I0916 11:28:59.513366   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHKeyPath
	I0916 11:28:59.513522   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHUsername
	I0916 11:28:59.513711   48743 main.go:141] libmachine: Using SSH client type: native
	I0916 11:28:59.513894   48743 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.174 22 <nil> <nil>}
	I0916 11:28:59.513911   48743 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0916 11:28:59.614659   48743 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0916 11:28:59.614760   48743 main.go:141] libmachine: found compatible host: buildroot
	I0916 11:28:59.614773   48743 main.go:141] libmachine: Provisioning with buildroot...
	I0916 11:28:59.614784   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetMachineName
	I0916 11:28:59.615016   48743 buildroot.go:166] provisioning hostname "kubernetes-upgrade-045794"
	I0916 11:28:59.615050   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetMachineName
	I0916 11:28:59.615222   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHHostname
	I0916 11:28:59.619230   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:28:59.619635   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c2:93", ip: ""} in network mk-kubernetes-upgrade-045794: {Iface:virbr4 ExpiryTime:2024-09-16 12:28:51 +0000 UTC Type:0 Mac:52:54:00:45:c2:93 Iaid: IPaddr:192.168.72.174 Prefix:24 Hostname:kubernetes-upgrade-045794 Clientid:01:52:54:00:45:c2:93}
	I0916 11:28:59.619674   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined IP address 192.168.72.174 and MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:28:59.619792   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHPort
	I0916 11:28:59.619961   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHKeyPath
	I0916 11:28:59.620111   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHKeyPath
	I0916 11:28:59.620244   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHUsername
	I0916 11:28:59.620415   48743 main.go:141] libmachine: Using SSH client type: native
	I0916 11:28:59.620643   48743 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.174 22 <nil> <nil>}
	I0916 11:28:59.620661   48743 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-045794 && echo "kubernetes-upgrade-045794" | sudo tee /etc/hostname
	I0916 11:28:59.736675   48743 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-045794
	
	I0916 11:28:59.736728   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHHostname
	I0916 11:28:59.739313   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:28:59.739712   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c2:93", ip: ""} in network mk-kubernetes-upgrade-045794: {Iface:virbr4 ExpiryTime:2024-09-16 12:28:51 +0000 UTC Type:0 Mac:52:54:00:45:c2:93 Iaid: IPaddr:192.168.72.174 Prefix:24 Hostname:kubernetes-upgrade-045794 Clientid:01:52:54:00:45:c2:93}
	I0916 11:28:59.739753   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined IP address 192.168.72.174 and MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:28:59.739932   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHPort
	I0916 11:28:59.740112   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHKeyPath
	I0916 11:28:59.740285   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHKeyPath
	I0916 11:28:59.740411   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHUsername
	I0916 11:28:59.740588   48743 main.go:141] libmachine: Using SSH client type: native
	I0916 11:28:59.740817   48743 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.174 22 <nil> <nil>}
	I0916 11:28:59.740842   48743 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-045794' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-045794/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-045794' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 11:28:59.856047   48743 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 11:28:59.856077   48743 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3851/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3851/.minikube}
	I0916 11:28:59.856107   48743 buildroot.go:174] setting up certificates
	I0916 11:28:59.856118   48743 provision.go:84] configureAuth start
	I0916 11:28:59.856127   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetMachineName
	I0916 11:28:59.856385   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetIP
	I0916 11:28:59.859133   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:28:59.859518   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c2:93", ip: ""} in network mk-kubernetes-upgrade-045794: {Iface:virbr4 ExpiryTime:2024-09-16 12:28:51 +0000 UTC Type:0 Mac:52:54:00:45:c2:93 Iaid: IPaddr:192.168.72.174 Prefix:24 Hostname:kubernetes-upgrade-045794 Clientid:01:52:54:00:45:c2:93}
	I0916 11:28:59.859545   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined IP address 192.168.72.174 and MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:28:59.859773   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHHostname
	I0916 11:28:59.862148   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:28:59.862482   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c2:93", ip: ""} in network mk-kubernetes-upgrade-045794: {Iface:virbr4 ExpiryTime:2024-09-16 12:28:51 +0000 UTC Type:0 Mac:52:54:00:45:c2:93 Iaid: IPaddr:192.168.72.174 Prefix:24 Hostname:kubernetes-upgrade-045794 Clientid:01:52:54:00:45:c2:93}
	I0916 11:28:59.862510   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined IP address 192.168.72.174 and MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:28:59.862621   48743 provision.go:143] copyHostCerts
	I0916 11:28:59.862701   48743 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem, removing ...
	I0916 11:28:59.862726   48743 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem
	I0916 11:28:59.862794   48743 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem (1082 bytes)
	I0916 11:28:59.862931   48743 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem, removing ...
	I0916 11:28:59.862944   48743 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem
	I0916 11:28:59.862974   48743 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem (1123 bytes)
	I0916 11:28:59.863069   48743 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem, removing ...
	I0916 11:28:59.863080   48743 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem
	I0916 11:28:59.863108   48743 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem (1679 bytes)
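
copyHostCerts, as logged above, just refreshes the top-level ca.pem, cert.pem and key.pem copies under .minikube from the certs/ directory, removing any stale copy first. A rough equivalent (location and file mode are assumptions, error handling abbreviated):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // refreshCert removes any stale copy at dst and rewrites it from src,
    // mirroring the "found ..., removing ..." / "cp: ..." lines above.
    func refreshCert(src, dst string) error {
        if _, err := os.Stat(dst); err == nil {
            if err := os.Remove(dst); err != nil {
                return err
            }
        }
        data, err := os.ReadFile(src)
        if err != nil {
            return err
        }
        return os.WriteFile(dst, data, 0o600)
    }

    func main() {
        base := os.ExpandEnv("$HOME/.minikube") // illustrative location
        for _, name := range []string{"ca.pem", "cert.pem", "key.pem"} {
            src := filepath.Join(base, "certs", name)
            if err := refreshCert(src, filepath.Join(base, name)); err != nil {
                fmt.Println(name, err)
            }
        }
    }
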
	I0916 11:28:59.863191   48743 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-045794 san=[127.0.0.1 192.168.72.174 kubernetes-upgrade-045794 localhost minikube]
	I0916 11:29:00.051479   48743 provision.go:177] copyRemoteCerts
	I0916 11:29:00.051549   48743 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 11:29:00.051578   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHHostname
	I0916 11:29:00.054394   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:29:00.054728   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c2:93", ip: ""} in network mk-kubernetes-upgrade-045794: {Iface:virbr4 ExpiryTime:2024-09-16 12:28:51 +0000 UTC Type:0 Mac:52:54:00:45:c2:93 Iaid: IPaddr:192.168.72.174 Prefix:24 Hostname:kubernetes-upgrade-045794 Clientid:01:52:54:00:45:c2:93}
	I0916 11:29:00.054757   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined IP address 192.168.72.174 and MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:29:00.054903   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHPort
	I0916 11:29:00.055049   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHKeyPath
	I0916 11:29:00.055185   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHUsername
	I0916 11:29:00.055287   48743 sshutil.go:53] new ssh client: &{IP:192.168.72.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/kubernetes-upgrade-045794/id_rsa Username:docker}
	I0916 11:29:00.143682   48743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 11:29:00.172628   48743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 11:29:00.198992   48743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0916 11:29:00.227105   48743 provision.go:87] duration metric: took 370.97333ms to configureAuth
	I0916 11:29:00.227139   48743 buildroot.go:189] setting minikube options for container-runtime
	I0916 11:29:00.227342   48743 config.go:182] Loaded profile config "kubernetes-upgrade-045794": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0916 11:29:00.227446   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHHostname
	I0916 11:29:00.230187   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:29:00.230622   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c2:93", ip: ""} in network mk-kubernetes-upgrade-045794: {Iface:virbr4 ExpiryTime:2024-09-16 12:28:51 +0000 UTC Type:0 Mac:52:54:00:45:c2:93 Iaid: IPaddr:192.168.72.174 Prefix:24 Hostname:kubernetes-upgrade-045794 Clientid:01:52:54:00:45:c2:93}
	I0916 11:29:00.230663   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined IP address 192.168.72.174 and MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:29:00.230813   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHPort
	I0916 11:29:00.231028   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHKeyPath
	I0916 11:29:00.231234   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHKeyPath
	I0916 11:29:00.231403   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHUsername
	I0916 11:29:00.231574   48743 main.go:141] libmachine: Using SSH client type: native
	I0916 11:29:00.231785   48743 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.174 22 <nil> <nil>}
	I0916 11:29:00.231807   48743 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 11:29:00.486463   48743 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 11:29:00.486507   48743 main.go:141] libmachine: Checking connection to Docker...
	I0916 11:29:00.486520   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetURL
	I0916 11:29:00.487796   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | Using libvirt version 6000000
	I0916 11:29:00.489951   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:29:00.490309   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c2:93", ip: ""} in network mk-kubernetes-upgrade-045794: {Iface:virbr4 ExpiryTime:2024-09-16 12:28:51 +0000 UTC Type:0 Mac:52:54:00:45:c2:93 Iaid: IPaddr:192.168.72.174 Prefix:24 Hostname:kubernetes-upgrade-045794 Clientid:01:52:54:00:45:c2:93}
	I0916 11:29:00.490339   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined IP address 192.168.72.174 and MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:29:00.490547   48743 main.go:141] libmachine: Docker is up and running!
	I0916 11:29:00.490567   48743 main.go:141] libmachine: Reticulating splines...
	I0916 11:29:00.490575   48743 client.go:171] duration metric: took 24.683408362s to LocalClient.Create
	I0916 11:29:00.490600   48743 start.go:167] duration metric: took 24.683474304s to libmachine.API.Create "kubernetes-upgrade-045794"
	I0916 11:29:00.490613   48743 start.go:293] postStartSetup for "kubernetes-upgrade-045794" (driver="kvm2")
	I0916 11:29:00.490626   48743 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 11:29:00.490655   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .DriverName
	I0916 11:29:00.490891   48743 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 11:29:00.490927   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHHostname
	I0916 11:29:00.492899   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:29:00.493282   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c2:93", ip: ""} in network mk-kubernetes-upgrade-045794: {Iface:virbr4 ExpiryTime:2024-09-16 12:28:51 +0000 UTC Type:0 Mac:52:54:00:45:c2:93 Iaid: IPaddr:192.168.72.174 Prefix:24 Hostname:kubernetes-upgrade-045794 Clientid:01:52:54:00:45:c2:93}
	I0916 11:29:00.493312   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined IP address 192.168.72.174 and MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:29:00.493439   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHPort
	I0916 11:29:00.493604   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHKeyPath
	I0916 11:29:00.493745   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHUsername
	I0916 11:29:00.493865   48743 sshutil.go:53] new ssh client: &{IP:192.168.72.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/kubernetes-upgrade-045794/id_rsa Username:docker}
	I0916 11:29:00.575806   48743 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 11:29:00.580639   48743 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 11:29:00.580666   48743 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3851/.minikube/addons for local assets ...
	I0916 11:29:00.580746   48743 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3851/.minikube/files for local assets ...
	I0916 11:29:00.580867   48743 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem -> 112032.pem in /etc/ssl/certs
	I0916 11:29:00.581002   48743 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 11:29:00.591172   48743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem --> /etc/ssl/certs/112032.pem (1708 bytes)
	I0916 11:29:00.617110   48743 start.go:296] duration metric: took 126.484482ms for postStartSetup
	I0916 11:29:00.617202   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetConfigRaw
	I0916 11:29:00.617970   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetIP
	I0916 11:29:00.620603   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:29:00.621014   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c2:93", ip: ""} in network mk-kubernetes-upgrade-045794: {Iface:virbr4 ExpiryTime:2024-09-16 12:28:51 +0000 UTC Type:0 Mac:52:54:00:45:c2:93 Iaid: IPaddr:192.168.72.174 Prefix:24 Hostname:kubernetes-upgrade-045794 Clientid:01:52:54:00:45:c2:93}
	I0916 11:29:00.621048   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined IP address 192.168.72.174 and MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:29:00.621318   48743 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/kubernetes-upgrade-045794/config.json ...
	I0916 11:29:00.621578   48743 start.go:128] duration metric: took 24.839633328s to createHost
	I0916 11:29:00.621609   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHHostname
	I0916 11:29:00.623752   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:29:00.624074   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c2:93", ip: ""} in network mk-kubernetes-upgrade-045794: {Iface:virbr4 ExpiryTime:2024-09-16 12:28:51 +0000 UTC Type:0 Mac:52:54:00:45:c2:93 Iaid: IPaddr:192.168.72.174 Prefix:24 Hostname:kubernetes-upgrade-045794 Clientid:01:52:54:00:45:c2:93}
	I0916 11:29:00.624105   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined IP address 192.168.72.174 and MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:29:00.624305   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHPort
	I0916 11:29:00.624493   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHKeyPath
	I0916 11:29:00.624671   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHKeyPath
	I0916 11:29:00.624803   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHUsername
	I0916 11:29:00.624941   48743 main.go:141] libmachine: Using SSH client type: native
	I0916 11:29:00.625147   48743 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.174 22 <nil> <nil>}
	I0916 11:29:00.625160   48743 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 11:29:00.726113   48743 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726486140.684296052
	
	I0916 11:29:00.726141   48743 fix.go:216] guest clock: 1726486140.684296052
	I0916 11:29:00.726148   48743 fix.go:229] Guest: 2024-09-16 11:29:00.684296052 +0000 UTC Remote: 2024-09-16 11:29:00.621593806 +0000 UTC m=+74.075000619 (delta=62.702246ms)
	I0916 11:29:00.726168   48743 fix.go:200] guest clock delta is within tolerance: 62.702246ms
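
The guest-clock check runs `date +%s.%N` over SSH and compares the reported time with the local wall clock; here the 62.7ms delta is accepted as within tolerance. A standalone sketch of that comparison (the tolerance value and function name are assumptions; minikube may adjust the guest clock when the delta is too large):

    package main

    import (
        "fmt"
        "math"
        "strconv"
        "strings"
        "time"
    )

    // clockDeltaOK parses the guest's `date +%s.%N` output and reports whether the
    // difference to the local clock stays within tol.
    func clockDeltaOK(guestOutput string, tol time.Duration) (time.Duration, bool, error) {
        secs, err := strconv.ParseFloat(strings.TrimSpace(guestOutput), 64)
        if err != nil {
            return 0, false, err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        delta := time.Since(guest)
        return delta, math.Abs(float64(delta)) <= float64(tol), nil
    }

    func main() {
        delta, ok, err := clockDeltaOK("1726486140.684296052\n", 2*time.Second)
        fmt.Println(delta, ok, err)
    }
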
	I0916 11:29:00.726173   48743 start.go:83] releasing machines lock for "kubernetes-upgrade-045794", held for 24.944393259s
	I0916 11:29:00.726196   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .DriverName
	I0916 11:29:00.726445   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetIP
	I0916 11:29:00.729028   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:29:00.729390   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c2:93", ip: ""} in network mk-kubernetes-upgrade-045794: {Iface:virbr4 ExpiryTime:2024-09-16 12:28:51 +0000 UTC Type:0 Mac:52:54:00:45:c2:93 Iaid: IPaddr:192.168.72.174 Prefix:24 Hostname:kubernetes-upgrade-045794 Clientid:01:52:54:00:45:c2:93}
	I0916 11:29:00.729422   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined IP address 192.168.72.174 and MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:29:00.729588   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .DriverName
	I0916 11:29:00.730097   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .DriverName
	I0916 11:29:00.730295   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .DriverName
	I0916 11:29:00.730381   48743 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 11:29:00.730441   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHHostname
	I0916 11:29:00.730529   48743 ssh_runner.go:195] Run: cat /version.json
	I0916 11:29:00.730550   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHHostname
	I0916 11:29:00.733104   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:29:00.733186   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:29:00.733531   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c2:93", ip: ""} in network mk-kubernetes-upgrade-045794: {Iface:virbr4 ExpiryTime:2024-09-16 12:28:51 +0000 UTC Type:0 Mac:52:54:00:45:c2:93 Iaid: IPaddr:192.168.72.174 Prefix:24 Hostname:kubernetes-upgrade-045794 Clientid:01:52:54:00:45:c2:93}
	I0916 11:29:00.733586   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c2:93", ip: ""} in network mk-kubernetes-upgrade-045794: {Iface:virbr4 ExpiryTime:2024-09-16 12:28:51 +0000 UTC Type:0 Mac:52:54:00:45:c2:93 Iaid: IPaddr:192.168.72.174 Prefix:24 Hostname:kubernetes-upgrade-045794 Clientid:01:52:54:00:45:c2:93}
	I0916 11:29:00.733614   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined IP address 192.168.72.174 and MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:29:00.733646   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined IP address 192.168.72.174 and MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:29:00.733721   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHPort
	I0916 11:29:00.733782   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHPort
	I0916 11:29:00.733886   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHKeyPath
	I0916 11:29:00.733960   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHKeyPath
	I0916 11:29:00.734060   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHUsername
	I0916 11:29:00.734132   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHUsername
	I0916 11:29:00.734207   48743 sshutil.go:53] new ssh client: &{IP:192.168.72.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/kubernetes-upgrade-045794/id_rsa Username:docker}
	I0916 11:29:00.734260   48743 sshutil.go:53] new ssh client: &{IP:192.168.72.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/kubernetes-upgrade-045794/id_rsa Username:docker}
	I0916 11:29:00.810692   48743 ssh_runner.go:195] Run: systemctl --version
	I0916 11:29:00.839625   48743 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 11:29:01.004099   48743 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0916 11:29:01.010762   48743 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 11:29:01.010859   48743 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 11:29:01.029223   48743 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0916 11:29:01.029248   48743 start.go:495] detecting cgroup driver to use...
	I0916 11:29:01.029318   48743 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 11:29:01.049525   48743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 11:29:01.069534   48743 docker.go:217] disabling cri-docker service (if available) ...
	I0916 11:29:01.069593   48743 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 11:29:01.087660   48743 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 11:29:01.109302   48743 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 11:29:01.244400   48743 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 11:29:01.440970   48743 docker.go:233] disabling docker service ...
	I0916 11:29:01.441045   48743 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 11:29:01.460140   48743 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 11:29:01.479108   48743 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 11:29:01.625957   48743 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 11:29:01.770780   48743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
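The block above stops and masks both the cri-docker and docker units so that CRI-O is the only runtime left on the CRI socket. A minimal shell sketch of the same sequence, using the unit names that appear in the log (run as root on a systemd host):

    # stop cri-dockerd and make sure it cannot be socket-activated again
    systemctl stop -f cri-docker.socket cri-docker.service
    systemctl disable cri-docker.socket
    systemctl mask cri-docker.service

    # same treatment for the docker engine itself
    systemctl stop -f docker.socket docker.service
    systemctl disable docker.socket
    systemctl mask docker.service

    # verify nothing is still active before handing the node to CRI-O
    systemctl is-active --quiet docker || echo "docker is stopped"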
	I0916 11:29:01.786973   48743 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 11:29:01.808383   48743 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0916 11:29:01.808452   48743 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:29:01.825754   48743 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 11:29:01.825840   48743 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:29:01.840747   48743 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:29:01.853023   48743 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:29:01.864249   48743 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 11:29:01.875583   48743 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 11:29:01.885620   48743 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0916 11:29:01.885692   48743 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0916 11:29:01.899931   48743 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 11:29:01.910830   48743 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:29:02.040381   48743 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 11:29:02.153800   48743 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 11:29:02.153872   48743 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 11:29:02.159231   48743 start.go:563] Will wait 60s for crictl version
	I0916 11:29:02.159306   48743 ssh_runner.go:195] Run: which crictl
	I0916 11:29:02.163645   48743 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 11:29:02.204693   48743 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0916 11:29:02.204791   48743 ssh_runner.go:195] Run: crio --version
	I0916 11:29:02.241257   48743 ssh_runner.go:195] Run: crio --version
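Between writing /etc/crictl.yaml and the final crio restart, the log shows the runtime being pointed at registry.k8s.io/pause:3.2 and the cgroupfs cgroup manager, plus the bridge-netfilter and IP-forwarding prerequisites being enabled. A condensed shell sketch of those steps, with paths and values copied from the log:

    # tell crictl where the CRI-O socket lives
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml

    # pause image and cgroup driver in the CRI-O drop-in config
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf

    # kernel prerequisites for bridged pod traffic
    sudo modprobe br_netfilter
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'

    sudo systemctl daemon-reload
    sudo systemctl restart crio
    sudo crictl version        # should report RuntimeName: cri-o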
	I0916 11:29:02.272670   48743 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0916 11:29:02.274118   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetIP
	I0916 11:29:02.277486   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:29:02.277895   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c2:93", ip: ""} in network mk-kubernetes-upgrade-045794: {Iface:virbr4 ExpiryTime:2024-09-16 12:28:51 +0000 UTC Type:0 Mac:52:54:00:45:c2:93 Iaid: IPaddr:192.168.72.174 Prefix:24 Hostname:kubernetes-upgrade-045794 Clientid:01:52:54:00:45:c2:93}
	I0916 11:29:02.277927   48743 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined IP address 192.168.72.174 and MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:29:02.278167   48743 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0916 11:29:02.282756   48743 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 11:29:02.295861   48743 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-045794 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.20.0 ClusterName:kubernetes-upgrade-045794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.174 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimi
zations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 11:29:02.295979   48743 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0916 11:29:02.296021   48743 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:29:02.334051   48743 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0916 11:29:02.334129   48743 ssh_runner.go:195] Run: which lz4
	I0916 11:29:02.338474   48743 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0916 11:29:02.343100   48743 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0916 11:29:02.343140   48743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0916 11:29:03.998585   48743 crio.go:462] duration metric: took 1.660148144s to copy over tarball
	I0916 11:29:03.998662   48743 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0916 11:29:06.474421   48743 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.475727553s)
	I0916 11:29:06.474456   48743 crio.go:469] duration metric: took 2.475833268s to extract the tarball
	I0916 11:29:06.474467   48743 ssh_runner.go:146] rm: /preloaded.tar.lz4
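The preload step copies a ~473 MB lz4 tarball of container images to the VM and unpacks it under /var, after which crictl images is run again to see whether the expected v1.20.0 images are present. A minimal sketch of the copy-and-extract step as it appears in the log (host path, user and IP taken from the log; minikube performs the transfer over its own SSH client rather than the scp binary):

    scp ~/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 \
        docker@192.168.72.174:/preloaded.tar.lz4

    # unpack the image store into /var, preserving extended attributes, then clean up
    ssh docker@192.168.72.174 \
        'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm -f /preloaded.tar.lz4'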
	I0916 11:29:06.518870   48743 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:29:06.565454   48743 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0916 11:29:06.565478   48743 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0916 11:29:06.565517   48743 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:29:06.565552   48743 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:29:06.565574   48743 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:29:06.565619   48743 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:29:06.565688   48743 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:29:06.565716   48743 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0916 11:29:06.565689   48743 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0916 11:29:06.565949   48743 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0916 11:29:06.567121   48743 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0916 11:29:06.567238   48743 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:29:06.567249   48743 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0916 11:29:06.567265   48743 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:29:06.567266   48743 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0916 11:29:06.567314   48743 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:29:06.567338   48743 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:29:06.567120   48743 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:29:06.734430   48743 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0916 11:29:06.757916   48743 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0916 11:29:06.758877   48743 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:29:06.763995   48743 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:29:06.775069   48743 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:29:06.779735   48743 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0916 11:29:06.781820   48743 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:29:06.807960   48743 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0916 11:29:06.808002   48743 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0916 11:29:06.808045   48743 ssh_runner.go:195] Run: which crictl
	I0916 11:29:06.911996   48743 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:29:06.943293   48743 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0916 11:29:06.943342   48743 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0916 11:29:06.943368   48743 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0916 11:29:06.943388   48743 ssh_runner.go:195] Run: which crictl
	I0916 11:29:06.943405   48743 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:29:06.943446   48743 ssh_runner.go:195] Run: which crictl
	I0916 11:29:06.949135   48743 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0916 11:29:06.949179   48743 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:29:06.949185   48743 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0916 11:29:06.949219   48743 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0916 11:29:06.949267   48743 ssh_runner.go:195] Run: which crictl
	I0916 11:29:06.949225   48743 ssh_runner.go:195] Run: which crictl
	I0916 11:29:06.949431   48743 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0916 11:29:06.949471   48743 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:29:06.949508   48743 ssh_runner.go:195] Run: which crictl
	I0916 11:29:06.973609   48743 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0916 11:29:06.973672   48743 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:29:06.973697   48743 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0916 11:29:06.973716   48743 ssh_runner.go:195] Run: which crictl
	I0916 11:29:07.100894   48743 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0916 11:29:07.100932   48743 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:29:07.100994   48743 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0916 11:29:07.101079   48743 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:29:07.101087   48743 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:29:07.101163   48743 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:29:07.101181   48743 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0916 11:29:07.237836   48743 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0916 11:29:07.237853   48743 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:29:07.261102   48743 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:29:07.261153   48743 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:29:07.261238   48743 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:29:07.263165   48743 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0916 11:29:07.263470   48743 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0916 11:29:07.411153   48743 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:29:07.411335   48743 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0916 11:29:07.434643   48743 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:29:07.434689   48743 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:29:07.434722   48743 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:29:07.434771   48743 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3851/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0916 11:29:07.446552   48743 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0916 11:29:07.546087   48743 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3851/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0916 11:29:07.546531   48743 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3851/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0916 11:29:07.574741   48743 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3851/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0916 11:29:07.579761   48743 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3851/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0916 11:29:07.579846   48743 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3851/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0916 11:29:07.590645   48743 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3851/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0916 11:29:07.590717   48743 cache_images.go:92] duration metric: took 1.025224546s to LoadCachedImages
	W0916 11:29:07.590812   48743 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19651-3851/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19651-3851/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
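When the preload does not populate the runtime, minikube falls back to its per-image cache: each required image is checked with podman image inspect, stale copies are removed with crictl rmi, and the image is then loaded from the local cache directory, which in this run is missing for coredns_1.7.0. A rough sketch of that check-then-load pattern (image name and cache path are taken from the log; the podman load call is an assumption, the real loader lives in minikube's Go code):

    img=registry.k8s.io/coredns:1.7.0
    cache=$HOME/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0

    # already present in the runtime's image store?
    if ! sudo podman image inspect --format '{{.Id}}' "$img" >/dev/null 2>&1; then
        sudo /usr/bin/crictl rmi "$img" 2>/dev/null || true   # drop any partial or stale copy
        [ -f "$cache" ] && sudo podman load -i "$cache"       # assumed load step; skipped here because the cache file is absent
    fi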
	I0916 11:29:07.590837   48743 kubeadm.go:934] updating node { 192.168.72.174 8443 v1.20.0 crio true true} ...
	I0916 11:29:07.590946   48743 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-045794 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.174
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-045794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 11:29:07.591023   48743 ssh_runner.go:195] Run: crio config
	I0916 11:29:07.654101   48743 cni.go:84] Creating CNI manager for ""
	I0916 11:29:07.654129   48743 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0916 11:29:07.654143   48743 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 11:29:07.654168   48743 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.174 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-045794 NodeName:kubernetes-upgrade-045794 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.174"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.174 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0916 11:29:07.654360   48743 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.174
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-045794"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.174
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.174"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 11:29:07.654433   48743 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0916 11:29:07.665869   48743 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 11:29:07.665950   48743 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 11:29:07.680690   48743 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0916 11:29:07.700855   48743 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 11:29:07.719466   48743 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
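At this point the kubelet drop-in, the kubelet unit and the rendered kubeadm config have all been staged on the node, with the config written to /var/tmp/minikube/kubeadm.yaml.new before being promoted. One way to sanity-check such a config against the pinned v1.20.0 binaries without changing node state is a dry run (hypothetical invocation, mirroring the PATH override used later in the log):

    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
        kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run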
	I0916 11:29:07.740103   48743 ssh_runner.go:195] Run: grep 192.168.72.174	control-plane.minikube.internal$ /etc/hosts
	I0916 11:29:07.745505   48743 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.174	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 11:29:07.763036   48743 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:29:07.920779   48743 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:29:07.942582   48743 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/kubernetes-upgrade-045794 for IP: 192.168.72.174
	I0916 11:29:07.942604   48743 certs.go:194] generating shared ca certs ...
	I0916 11:29:07.942625   48743 certs.go:226] acquiring lock for ca certs: {Name:mkc5eb18b90c7d501da61b378999fd65b785fd5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:29:07.942784   48743 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key
	I0916 11:29:07.942838   48743 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key
	I0916 11:29:07.942852   48743 certs.go:256] generating profile certs ...
	I0916 11:29:07.942921   48743 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/kubernetes-upgrade-045794/client.key
	I0916 11:29:07.942955   48743 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/kubernetes-upgrade-045794/client.crt with IP's: []
	I0916 11:29:08.114665   48743 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/kubernetes-upgrade-045794/client.crt ...
	I0916 11:29:08.114698   48743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/kubernetes-upgrade-045794/client.crt: {Name:mk61299a2dbd64203cf69ae3649adf6c53690a84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:29:08.145793   48743 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/kubernetes-upgrade-045794/client.key ...
	I0916 11:29:08.145874   48743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/kubernetes-upgrade-045794/client.key: {Name:mk31db45d521c70d0c249d478049f40db3316b19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:29:08.146038   48743 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/kubernetes-upgrade-045794/apiserver.key.435d8532
	I0916 11:29:08.146061   48743 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/kubernetes-upgrade-045794/apiserver.crt.435d8532 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.174]
	I0916 11:29:08.356917   48743 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/kubernetes-upgrade-045794/apiserver.crt.435d8532 ...
	I0916 11:29:08.356962   48743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/kubernetes-upgrade-045794/apiserver.crt.435d8532: {Name:mk4e33a63bfd3816ef0953e17145be53dc79618c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:29:08.357167   48743 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/kubernetes-upgrade-045794/apiserver.key.435d8532 ...
	I0916 11:29:08.357194   48743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/kubernetes-upgrade-045794/apiserver.key.435d8532: {Name:mk3a1aa3c9f657a03d8ffabca279620c9743e43a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:29:08.357298   48743 certs.go:381] copying /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/kubernetes-upgrade-045794/apiserver.crt.435d8532 -> /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/kubernetes-upgrade-045794/apiserver.crt
	I0916 11:29:08.357397   48743 certs.go:385] copying /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/kubernetes-upgrade-045794/apiserver.key.435d8532 -> /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/kubernetes-upgrade-045794/apiserver.key
	I0916 11:29:08.357476   48743 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/kubernetes-upgrade-045794/proxy-client.key
	I0916 11:29:08.357497   48743 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/kubernetes-upgrade-045794/proxy-client.crt with IP's: []
	I0916 11:29:08.441242   48743 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/kubernetes-upgrade-045794/proxy-client.crt ...
	I0916 11:29:08.441272   48743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/kubernetes-upgrade-045794/proxy-client.crt: {Name:mk476c0bfb43da168194cb70bb6ebb5b4d1159d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:29:08.441447   48743 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/kubernetes-upgrade-045794/proxy-client.key ...
	I0916 11:29:08.441464   48743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/kubernetes-upgrade-045794/proxy-client.key: {Name:mk48c0c56ceae161d0c97022db5aa8f92430200a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
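The profile certificates above are produced in Go against the shared minikubeCA, with the apiserver certificate carrying the service IP, loopback and node IP as SANs. A rough openssl equivalent for that apiserver certificate (illustrative only; the SAN list and CA paths follow the log, while the openssl invocation and the CN are assumptions, not how minikube does it internally):

    # key and CSR for the apiserver certificate
    openssl genrsa -out apiserver.key 2048
    openssl req -new -key apiserver.key -subj "/CN=minikube" -out apiserver.csr

    # sign with the shared minikube CA, adding the IP SANs seen in the log
    openssl x509 -req -in apiserver.csr \
        -CA ~/.minikube/ca.crt -CAkey ~/.minikube/ca.key -CAcreateserial \
        -days 365 -out apiserver.crt \
        -extfile <(printf 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.72.174')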
	I0916 11:29:08.441656   48743 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem (1338 bytes)
	W0916 11:29:08.441706   48743 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203_empty.pem, impossibly tiny 0 bytes
	I0916 11:29:08.441718   48743 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 11:29:08.441751   48743 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem (1082 bytes)
	I0916 11:29:08.441784   48743 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem (1123 bytes)
	I0916 11:29:08.441816   48743 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem (1679 bytes)
	I0916 11:29:08.441874   48743 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem (1708 bytes)
	I0916 11:29:08.442471   48743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 11:29:08.470146   48743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 11:29:08.494684   48743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 11:29:08.518996   48743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 11:29:08.543940   48743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/kubernetes-upgrade-045794/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0916 11:29:08.573632   48743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/kubernetes-upgrade-045794/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 11:29:08.609173   48743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/kubernetes-upgrade-045794/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 11:29:08.634120   48743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/kubernetes-upgrade-045794/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 11:29:08.658914   48743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem --> /usr/share/ca-certificates/112032.pem (1708 bytes)
	I0916 11:29:08.685140   48743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 11:29:08.708954   48743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem --> /usr/share/ca-certificates/11203.pem (1338 bytes)
	I0916 11:29:08.732813   48743 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 11:29:08.750149   48743 ssh_runner.go:195] Run: openssl version
	I0916 11:29:08.755875   48743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11203.pem && ln -fs /usr/share/ca-certificates/11203.pem /etc/ssl/certs/11203.pem"
	I0916 11:29:08.770138   48743 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11203.pem
	I0916 11:29:08.774868   48743 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11203.pem
	I0916 11:29:08.774935   48743 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11203.pem
	I0916 11:29:08.781041   48743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11203.pem /etc/ssl/certs/51391683.0"
	I0916 11:29:08.792808   48743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112032.pem && ln -fs /usr/share/ca-certificates/112032.pem /etc/ssl/certs/112032.pem"
	I0916 11:29:08.804612   48743 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112032.pem
	I0916 11:29:08.809398   48743 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112032.pem
	I0916 11:29:08.809455   48743 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112032.pem
	I0916 11:29:08.815552   48743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112032.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 11:29:08.830178   48743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 11:29:08.844507   48743 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:29:08.849417   48743 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:29:08.849480   48743 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:29:08.855251   48743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
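The repeated openssl x509 -hash / ln -fs pairs above follow the standard c_rehash convention: every trusted PEM gets a symlink in /etc/ssl/certs named after its subject-name hash so OpenSSL can locate it. The same idea as a compact loop over the files placed in /usr/share/ca-certificates:

    for pem in /usr/share/ca-certificates/*.pem; do
        hash=$(openssl x509 -hash -noout -in "$pem")     # subject-name hash, e.g. b5213941
        sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"    # ".0" marks the first cert with this hash
    done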
	I0916 11:29:08.867035   48743 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 11:29:08.873281   48743 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 11:29:08.873341   48743 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-045794 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.20.0 ClusterName:kubernetes-upgrade-045794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.174 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:29:08.873448   48743 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 11:29:08.873509   48743 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 11:29:08.927384   48743 cri.go:89] found id: ""
	I0916 11:29:08.927463   48743 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 11:29:08.940254   48743 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 11:29:08.950325   48743 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 11:29:08.960259   48743 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 11:29:08.960285   48743 kubeadm.go:157] found existing configuration files:
	
	I0916 11:29:08.960332   48743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 11:29:08.969723   48743 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 11:29:08.969789   48743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 11:29:08.979335   48743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 11:29:08.989316   48743 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 11:29:08.989381   48743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 11:29:09.000037   48743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 11:29:09.010436   48743 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 11:29:09.010505   48743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 11:29:09.020876   48743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 11:29:09.030691   48743 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 11:29:09.030748   48743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
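Before invoking kubeadm init, minikube checks each existing kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that does not reference it; here none of the files exist yet, so every grep fails and the rm calls are effectively no-ops. The same cleanup as a small loop, with the endpoint and file names copied from the log:

    endpoint='https://control-plane.minikube.internal:8443'
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f" 2>/dev/null; then
            sudo rm -f "/etc/kubernetes/$f"   # stale or missing: remove so kubeadm regenerates it
        fi
    done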
	I0916 11:29:09.040642   48743 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0916 11:29:09.327519   48743 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 11:31:07.805237   48743 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0916 11:31:07.805335   48743 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0916 11:31:07.807032   48743 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0916 11:31:07.807103   48743 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 11:31:07.807175   48743 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 11:31:07.807288   48743 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 11:31:07.807384   48743 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0916 11:31:07.807443   48743 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 11:31:07.809338   48743 out.go:235]   - Generating certificates and keys ...
	I0916 11:31:07.809421   48743 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 11:31:07.809492   48743 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 11:31:07.809566   48743 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 11:31:07.809645   48743 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 11:31:07.809699   48743 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 11:31:07.809742   48743 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 11:31:07.809828   48743 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 11:31:07.810007   48743 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-045794 localhost] and IPs [192.168.72.174 127.0.0.1 ::1]
	I0916 11:31:07.810081   48743 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 11:31:07.810286   48743 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-045794 localhost] and IPs [192.168.72.174 127.0.0.1 ::1]
	I0916 11:31:07.810375   48743 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 11:31:07.810461   48743 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 11:31:07.810527   48743 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 11:31:07.810600   48743 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 11:31:07.810669   48743 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 11:31:07.810718   48743 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 11:31:07.810772   48743 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 11:31:07.810819   48743 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 11:31:07.810923   48743 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 11:31:07.811031   48743 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 11:31:07.811072   48743 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 11:31:07.811163   48743 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 11:31:07.813613   48743 out.go:235]   - Booting up control plane ...
	I0916 11:31:07.813706   48743 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 11:31:07.813790   48743 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 11:31:07.813881   48743 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 11:31:07.813959   48743 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 11:31:07.814085   48743 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0916 11:31:07.814165   48743 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0916 11:31:07.814250   48743 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0916 11:31:07.814461   48743 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0916 11:31:07.814568   48743 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0916 11:31:07.814796   48743 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0916 11:31:07.814895   48743 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0916 11:31:07.815128   48743 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0916 11:31:07.815230   48743 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0916 11:31:07.815406   48743 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0916 11:31:07.815466   48743 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0916 11:31:07.815690   48743 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0916 11:31:07.815705   48743 kubeadm.go:310] 
	I0916 11:31:07.815739   48743 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0916 11:31:07.815773   48743 kubeadm.go:310] 		timed out waiting for the condition
	I0916 11:31:07.815783   48743 kubeadm.go:310] 
	I0916 11:31:07.815810   48743 kubeadm.go:310] 	This error is likely caused by:
	I0916 11:31:07.815838   48743 kubeadm.go:310] 		- The kubelet is not running
	I0916 11:31:07.815928   48743 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0916 11:31:07.815936   48743 kubeadm.go:310] 
	I0916 11:31:07.816042   48743 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0916 11:31:07.816093   48743 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0916 11:31:07.816133   48743 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0916 11:31:07.816139   48743 kubeadm.go:310] 
	I0916 11:31:07.816231   48743 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0916 11:31:07.816302   48743 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0916 11:31:07.816309   48743 kubeadm.go:310] 
	I0916 11:31:07.816397   48743 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0916 11:31:07.816470   48743 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0916 11:31:07.816530   48743 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0916 11:31:07.816593   48743 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0916 11:31:07.816646   48743 kubeadm.go:310] 
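kubeadm's hint above is the practical next step for this failure mode: check whether the kubelet is running at all, then look for crashed control-plane containers through the CRI socket. On this CRI-O node that amounts to:

    # is the kubelet running, and if not, why?
    systemctl status kubelet
    journalctl -xeu kubelet | tail -n 50

    # list control-plane containers, including exited ones, via CRI-O
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

    # then inspect the failing container's logs
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID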
	W0916 11:31:07.816718   48743 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-045794 localhost] and IPs [192.168.72.174 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-045794 localhost] and IPs [192.168.72.174 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-045794 localhost] and IPs [192.168.72.174 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-045794 localhost] and IPs [192.168.72.174 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0916 11:31:07.816758   48743 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0916 11:31:08.543086   48743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 11:31:08.558140   48743 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 11:31:08.568014   48743 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 11:31:08.568037   48743 kubeadm.go:157] found existing configuration files:
	
	I0916 11:31:08.568084   48743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 11:31:08.577405   48743 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 11:31:08.577469   48743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 11:31:08.586858   48743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 11:31:08.595860   48743 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 11:31:08.595933   48743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 11:31:08.605608   48743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 11:31:08.614831   48743 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 11:31:08.614901   48743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 11:31:08.624119   48743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 11:31:08.632856   48743 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 11:31:08.632919   48743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 11:31:08.642080   48743 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0916 11:31:08.851582   48743 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 11:33:05.214481   48743 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0916 11:33:05.214641   48743 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0916 11:33:05.216452   48743 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0916 11:33:05.216515   48743 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 11:33:05.216625   48743 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 11:33:05.216769   48743 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 11:33:05.216895   48743 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0916 11:33:05.216989   48743 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 11:33:05.218908   48743 out.go:235]   - Generating certificates and keys ...
	I0916 11:33:05.219035   48743 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 11:33:05.219132   48743 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 11:33:05.219221   48743 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0916 11:33:05.219299   48743 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0916 11:33:05.219404   48743 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0916 11:33:05.219482   48743 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0916 11:33:05.219553   48743 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0916 11:33:05.219618   48743 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0916 11:33:05.219715   48743 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0916 11:33:05.219837   48743 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0916 11:33:05.219915   48743 kubeadm.go:310] [certs] Using the existing "sa" key
	I0916 11:33:05.219988   48743 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 11:33:05.220074   48743 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 11:33:05.220155   48743 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 11:33:05.220250   48743 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 11:33:05.220327   48743 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 11:33:05.220443   48743 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 11:33:05.220536   48743 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 11:33:05.220597   48743 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 11:33:05.220666   48743 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 11:33:05.222195   48743 out.go:235]   - Booting up control plane ...
	I0916 11:33:05.222290   48743 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 11:33:05.222409   48743 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 11:33:05.222515   48743 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 11:33:05.222630   48743 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 11:33:05.222827   48743 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0916 11:33:05.222886   48743 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0916 11:33:05.222981   48743 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0916 11:33:05.223191   48743 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0916 11:33:05.223280   48743 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0916 11:33:05.223502   48743 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0916 11:33:05.223587   48743 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0916 11:33:05.223830   48743 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0916 11:33:05.223902   48743 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0916 11:33:05.224141   48743 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0916 11:33:05.224245   48743 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0916 11:33:05.224478   48743 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0916 11:33:05.224500   48743 kubeadm.go:310] 
	I0916 11:33:05.224556   48743 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0916 11:33:05.224627   48743 kubeadm.go:310] 		timed out waiting for the condition
	I0916 11:33:05.224640   48743 kubeadm.go:310] 
	I0916 11:33:05.224686   48743 kubeadm.go:310] 	This error is likely caused by:
	I0916 11:33:05.224733   48743 kubeadm.go:310] 		- The kubelet is not running
	I0916 11:33:05.224874   48743 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0916 11:33:05.224896   48743 kubeadm.go:310] 
	I0916 11:33:05.225056   48743 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0916 11:33:05.225104   48743 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0916 11:33:05.225172   48743 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0916 11:33:05.225184   48743 kubeadm.go:310] 
	I0916 11:33:05.225328   48743 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0916 11:33:05.225448   48743 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0916 11:33:05.225458   48743 kubeadm.go:310] 
	I0916 11:33:05.225600   48743 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0916 11:33:05.225740   48743 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0916 11:33:05.225839   48743 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0916 11:33:05.225949   48743 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0916 11:33:05.226018   48743 kubeadm.go:310] 
	I0916 11:33:05.226025   48743 kubeadm.go:394] duration metric: took 3m56.352686993s to StartCluster
	I0916 11:33:05.226073   48743 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:33:05.226149   48743 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:33:05.270530   48743 cri.go:89] found id: ""
	I0916 11:33:05.270561   48743 logs.go:276] 0 containers: []
	W0916 11:33:05.270573   48743 logs.go:278] No container was found matching "kube-apiserver"
	I0916 11:33:05.270581   48743 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0916 11:33:05.270654   48743 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:33:05.311955   48743 cri.go:89] found id: ""
	I0916 11:33:05.311985   48743 logs.go:276] 0 containers: []
	W0916 11:33:05.311996   48743 logs.go:278] No container was found matching "etcd"
	I0916 11:33:05.312004   48743 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0916 11:33:05.312075   48743 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:33:05.368104   48743 cri.go:89] found id: ""
	I0916 11:33:05.368136   48743 logs.go:276] 0 containers: []
	W0916 11:33:05.368148   48743 logs.go:278] No container was found matching "coredns"
	I0916 11:33:05.368156   48743 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:33:05.368224   48743 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:33:05.409186   48743 cri.go:89] found id: ""
	I0916 11:33:05.409220   48743 logs.go:276] 0 containers: []
	W0916 11:33:05.409231   48743 logs.go:278] No container was found matching "kube-scheduler"
	I0916 11:33:05.409238   48743 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:33:05.409300   48743 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:33:05.451031   48743 cri.go:89] found id: ""
	I0916 11:33:05.451057   48743 logs.go:276] 0 containers: []
	W0916 11:33:05.451065   48743 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:33:05.451071   48743 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:33:05.451128   48743 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:33:05.486539   48743 cri.go:89] found id: ""
	I0916 11:33:05.486567   48743 logs.go:276] 0 containers: []
	W0916 11:33:05.486574   48743 logs.go:278] No container was found matching "kube-controller-manager"
	I0916 11:33:05.486581   48743 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0916 11:33:05.486652   48743 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:33:05.531115   48743 cri.go:89] found id: ""
	I0916 11:33:05.531143   48743 logs.go:276] 0 containers: []
	W0916 11:33:05.531151   48743 logs.go:278] No container was found matching "kindnet"
	I0916 11:33:05.531162   48743 logs.go:123] Gathering logs for dmesg ...
	I0916 11:33:05.531178   48743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:33:05.547128   48743 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:33:05.547157   48743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:33:05.710709   48743 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0916 11:33:05.710734   48743 logs.go:123] Gathering logs for CRI-O ...
	I0916 11:33:05.710760   48743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0916 11:33:05.826944   48743 logs.go:123] Gathering logs for container status ...
	I0916 11:33:05.826987   48743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:33:05.875007   48743 logs.go:123] Gathering logs for kubelet ...
	I0916 11:33:05.875047   48743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0916 11:33:05.927658   48743 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0916 11:33:05.927731   48743 out.go:270] * 
	* 
	W0916 11:33:05.927784   48743 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0916 11:33:05.927798   48743 out.go:270] * 
	* 
	W0916 11:33:05.928717   48743 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 11:33:05.932038   48743 out.go:201] 
	W0916 11:33:05.933370   48743 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0916 11:33:05.933422   48743 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0916 11:33:05.933454   48743 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0916 11:33:05.934982   48743 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-045794 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-045794
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-045794: (1.42826658s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-045794 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-045794 status --format={{.Host}}: exit status 7 (66.413864ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-045794 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-045794 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m8.714143654s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-045794 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-045794 version --output=json: fork/exec /usr/local/bin/kubectl: exec format error (480.981µs)
version_upgrade_test.go:250: error running kubectl: fork/exec /usr/local/bin/kubectl: exec format error
panic.go:629: *** TestKubernetesUpgrade FAILED at 2024-09-16 11:34:16.199378538 +0000 UTC m=+4370.422497007
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-045794 -n kubernetes-upgrade-045794
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-045794 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-045794 logs -n 25: (1.615788299s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p NoKubernetes-668924 sudo           | NoKubernetes-668924       | jenkins | v1.34.0 | 16 Sep 24 11:30 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-668924                | NoKubernetes-668924       | jenkins | v1.34.0 | 16 Sep 24 11:30 UTC | 16 Sep 24 11:30 UTC |
	| start   | -p NoKubernetes-668924                | NoKubernetes-668924       | jenkins | v1.34.0 | 16 Sep 24 11:30 UTC | 16 Sep 24 11:30 UTC |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-682717             | running-upgrade-682717    | jenkins | v1.34.0 | 16 Sep 24 11:30 UTC | 16 Sep 24 11:30 UTC |
	| start   | -p force-systemd-flag-716028          | force-systemd-flag-716028 | jenkins | v1.34.0 | 16 Sep 24 11:30 UTC | 16 Sep 24 11:31 UTC |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-668924 sudo           | NoKubernetes-668924       | jenkins | v1.34.0 | 16 Sep 24 11:30 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-668924                | NoKubernetes-668924       | jenkins | v1.34.0 | 16 Sep 24 11:30 UTC | 16 Sep 24 11:30 UTC |
	| start   | -p stopped-upgrade-153123             | minikube                  | jenkins | v1.26.0 | 16 Sep 24 11:30 UTC | 16 Sep 24 11:32 UTC |
	|         | --memory=2200 --vm-driver=kvm2        |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-716028 ssh cat     | force-systemd-flag-716028 | jenkins | v1.34.0 | 16 Sep 24 11:31 UTC | 16 Sep 24 11:31 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-716028          | force-systemd-flag-716028 | jenkins | v1.34.0 | 16 Sep 24 11:31 UTC | 16 Sep 24 11:31 UTC |
	| start   | -p cert-options-087952                | cert-options-087952       | jenkins | v1.34.0 | 16 Sep 24 11:31 UTC | 16 Sep 24 11:32 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-153123 stop           | minikube                  | jenkins | v1.26.0 | 16 Sep 24 11:32 UTC | 16 Sep 24 11:32 UTC |
	| start   | -p stopped-upgrade-153123             | stopped-upgrade-153123    | jenkins | v1.34.0 | 16 Sep 24 11:32 UTC | 16 Sep 24 11:32 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-087952 ssh               | cert-options-087952       | jenkins | v1.34.0 | 16 Sep 24 11:32 UTC | 16 Sep 24 11:32 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-087952 -- sudo        | cert-options-087952       | jenkins | v1.34.0 | 16 Sep 24 11:32 UTC | 16 Sep 24 11:32 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-087952                | cert-options-087952       | jenkins | v1.34.0 | 16 Sep 24 11:32 UTC | 16 Sep 24 11:32 UTC |
	| start   | -p pause-902210 --memory=2048         | pause-902210              | jenkins | v1.34.0 | 16 Sep 24 11:32 UTC | 16 Sep 24 11:33 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p cert-expiration-849615             | cert-expiration-849615    | jenkins | v1.34.0 | 16 Sep 24 11:32 UTC | 16 Sep 24 11:33 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-153123             | stopped-upgrade-153123    | jenkins | v1.34.0 | 16 Sep 24 11:33 UTC | 16 Sep 24 11:33 UTC |
	| start   | -p auto-957670 --memory=3072          | auto-957670               | jenkins | v1.34.0 | 16 Sep 24 11:33 UTC |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-045794          | kubernetes-upgrade-045794 | jenkins | v1.34.0 | 16 Sep 24 11:33 UTC | 16 Sep 24 11:33 UTC |
	| start   | -p kubernetes-upgrade-045794          | kubernetes-upgrade-045794 | jenkins | v1.34.0 | 16 Sep 24 11:33 UTC | 16 Sep 24 11:34 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p pause-902210                       | pause-902210              | jenkins | v1.34.0 | 16 Sep 24 11:33 UTC |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-849615             | cert-expiration-849615    | jenkins | v1.34.0 | 16 Sep 24 11:33 UTC | 16 Sep 24 11:33 UTC |
	| start   | -p kindnet-957670                     | kindnet-957670            | jenkins | v1.34.0 | 16 Sep 24 11:33 UTC |                     |
	|         | --memory=3072                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2           |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 11:33:47
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 11:33:47.644068   53823 out.go:345] Setting OutFile to fd 1 ...
	I0916 11:33:47.644188   53823 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:33:47.644198   53823 out.go:358] Setting ErrFile to fd 2...
	I0916 11:33:47.644204   53823 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:33:47.644527   53823 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3851/.minikube/bin
	I0916 11:33:47.645311   53823 out.go:352] Setting JSON to false
	I0916 11:33:47.646621   53823 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4578,"bootTime":1726481850,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 11:33:47.646750   53823 start.go:139] virtualization: kvm guest
	I0916 11:33:47.649014   53823 out.go:177] * [kindnet-957670] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 11:33:47.650272   53823 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 11:33:47.650277   53823 notify.go:220] Checking for updates...
	I0916 11:33:47.651619   53823 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 11:33:47.652877   53823 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3851/kubeconfig
	I0916 11:33:47.654254   53823 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 11:33:47.655514   53823 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 11:33:47.657089   53823 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 11:33:47.658964   53823 config.go:182] Loaded profile config "auto-957670": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:33:47.659107   53823 config.go:182] Loaded profile config "kubernetes-upgrade-045794": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:33:47.659287   53823 config.go:182] Loaded profile config "pause-902210": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:33:47.659393   53823 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 11:33:47.699201   53823 out.go:177] * Using the kvm2 driver based on user configuration
	I0916 11:33:47.700638   53823 start.go:297] selected driver: kvm2
	I0916 11:33:47.700652   53823 start.go:901] validating driver "kvm2" against <nil>
	I0916 11:33:47.700667   53823 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 11:33:47.701736   53823 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:33:47.701850   53823 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19651-3851/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0916 11:33:47.717584   53823 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0916 11:33:47.717656   53823 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 11:33:47.718031   53823 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 11:33:47.718075   53823 cni.go:84] Creating CNI manager for "kindnet"
	I0916 11:33:47.718082   53823 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 11:33:47.718150   53823 start.go:340] cluster config:
	{Name:kindnet-957670 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-957670 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:33:47.718272   53823 iso.go:125] acquiring lock: {Name:mk8165b793e44378487d96b0de120a258f46e187 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:33:47.720187   53823 out.go:177] * Starting "kindnet-957670" primary control-plane node in "kindnet-957670" cluster
	I0916 11:33:46.836814   53156 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 11:33:47.027979   53156 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 11:33:47.200036   53156 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 11:33:47.200324   53156 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 11:33:47.412606   53156 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 11:33:47.583586   53156 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 11:33:47.696881   53156 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 11:33:47.901631   53156 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 11:33:48.014624   53156 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 11:33:48.015379   53156 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 11:33:48.017900   53156 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 11:33:48.019790   53156 out.go:235]   - Booting up control plane ...
	I0916 11:33:48.019907   53156 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 11:33:48.019994   53156 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 11:33:48.020083   53156 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 11:33:48.046311   53156 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 11:33:48.053478   53156 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 11:33:48.053569   53156 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 11:33:48.184893   53156 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 11:33:48.185067   53156 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 11:33:49.186521   53156 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001805655s
	I0916 11:33:49.186658   53156 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 11:33:49.234759   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:33:49.235329   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | unable to find current IP address of domain kubernetes-upgrade-045794 in network mk-kubernetes-upgrade-045794
	I0916 11:33:49.235356   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | I0916 11:33:49.235286   53600 retry.go:31] will retry after 2.208324472s: waiting for machine to come up
	I0916 11:33:51.446562   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:33:51.447015   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | unable to find current IP address of domain kubernetes-upgrade-045794 in network mk-kubernetes-upgrade-045794
	I0916 11:33:51.447045   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | I0916 11:33:51.446969   53600 retry.go:31] will retry after 3.397124752s: waiting for machine to come up
	I0916 11:33:47.721311   53823 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 11:33:47.721348   53823 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0916 11:33:47.721361   53823 cache.go:56] Caching tarball of preloaded images
	I0916 11:33:47.721459   53823 preload.go:172] Found /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 11:33:47.721472   53823 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 11:33:47.721579   53823 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/kindnet-957670/config.json ...
	I0916 11:33:47.721605   53823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/kindnet-957670/config.json: {Name:mka9082035bf12fafa49d2e859f9b6d9454b47f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:33:47.721767   53823 start.go:360] acquireMachinesLock for kindnet-957670: {Name:mk413037138532b69f77e412c3960796c09316f1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 11:33:54.188831   53156 kubeadm.go:310] [api-check] The API server is healthy after 5.001531794s
	I0916 11:33:54.204732   53156 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 11:33:54.219401   53156 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 11:33:54.257678   53156 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 11:33:54.257925   53156 kubeadm.go:310] [mark-control-plane] Marking the node auto-957670 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 11:33:54.272869   53156 kubeadm.go:310] [bootstrap-token] Using token: ma9abu.m63doxxhx12tx6np
	I0916 11:33:54.274330   53156 out.go:235]   - Configuring RBAC rules ...
	I0916 11:33:54.274467   53156 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 11:33:54.280536   53156 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 11:33:54.290594   53156 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 11:33:54.296201   53156 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 11:33:54.299487   53156 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 11:33:54.305977   53156 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 11:33:54.598454   53156 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 11:33:55.023757   53156 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 11:33:55.596461   53156 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 11:33:55.596484   53156 kubeadm.go:310] 
	I0916 11:33:55.596551   53156 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 11:33:55.596595   53156 kubeadm.go:310] 
	I0916 11:33:55.596722   53156 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 11:33:55.596736   53156 kubeadm.go:310] 
	I0916 11:33:55.596771   53156 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 11:33:55.596848   53156 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 11:33:55.596941   53156 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 11:33:55.596953   53156 kubeadm.go:310] 
	I0916 11:33:55.597037   53156 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 11:33:55.597049   53156 kubeadm.go:310] 
	I0916 11:33:55.597117   53156 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 11:33:55.597136   53156 kubeadm.go:310] 
	I0916 11:33:55.597212   53156 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 11:33:55.597325   53156 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 11:33:55.597445   53156 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 11:33:55.597460   53156 kubeadm.go:310] 
	I0916 11:33:55.597565   53156 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 11:33:55.597686   53156 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 11:33:55.597701   53156 kubeadm.go:310] 
	I0916 11:33:55.597838   53156 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ma9abu.m63doxxhx12tx6np \
	I0916 11:33:55.598001   53156 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:18e2e9ba1c06caae6264fad234e64aee68efc10986a3ab0ed9e448768962daf7 \
	I0916 11:33:55.598040   53156 kubeadm.go:310] 	--control-plane 
	I0916 11:33:55.598051   53156 kubeadm.go:310] 
	I0916 11:33:55.598162   53156 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 11:33:55.598172   53156 kubeadm.go:310] 
	I0916 11:33:55.598281   53156 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ma9abu.m63doxxhx12tx6np \
	I0916 11:33:55.598407   53156 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:18e2e9ba1c06caae6264fad234e64aee68efc10986a3ab0ed9e448768962daf7 
	I0916 11:33:55.599034   53156 kubeadm.go:310] W0916 11:33:44.609633     837 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 11:33:55.599370   53156 kubeadm.go:310] W0916 11:33:44.610733     837 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 11:33:55.599525   53156 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 11:33:55.599548   53156 cni.go:84] Creating CNI manager for ""
	I0916 11:33:55.599558   53156 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0916 11:33:55.601604   53156 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0916 11:33:56.350679   53522 start.go:364] duration metric: took 23.202304158s to acquireMachinesLock for "pause-902210"
	I0916 11:33:56.350725   53522 start.go:96] Skipping create...Using existing machine configuration
	I0916 11:33:56.350735   53522 fix.go:54] fixHost starting: 
	I0916 11:33:56.351199   53522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 11:33:56.351251   53522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 11:33:56.369392   53522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38371
	I0916 11:33:56.369892   53522 main.go:141] libmachine: () Calling .GetVersion
	I0916 11:33:56.370830   53522 main.go:141] libmachine: Using API Version  1
	I0916 11:33:56.370850   53522 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 11:33:56.371452   53522 main.go:141] libmachine: () Calling .GetMachineName
	I0916 11:33:56.371756   53522 main.go:141] libmachine: (pause-902210) Calling .DriverName
	I0916 11:33:56.371924   53522 main.go:141] libmachine: (pause-902210) Calling .GetState
	I0916 11:33:56.373495   53522 fix.go:112] recreateIfNeeded on pause-902210: state=Running err=<nil>
	W0916 11:33:56.373515   53522 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 11:33:56.375259   53522 out.go:177] * Updating the running kvm2 "pause-902210" VM ...
	I0916 11:33:55.603092   53156 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0916 11:33:55.615182   53156 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0916 11:33:55.646224   53156 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 11:33:55.646344   53156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:33:55.646416   53156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-957670 minikube.k8s.io/updated_at=2024_09_16T11_33_55_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=auto-957670 minikube.k8s.io/primary=true
	I0916 11:33:55.683695   53156 ops.go:34] apiserver oom_adj: -16
	I0916 11:33:55.798814   53156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:33:56.298926   53156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:33:54.847994   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:33:54.848557   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has current primary IP address 192.168.72.174 and MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:33:54.848576   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Found IP for machine: 192.168.72.174
	I0916 11:33:54.848589   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Reserving static IP address...
	I0916 11:33:54.849093   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Reserved static IP address: 192.168.72.174
	I0916 11:33:54.849149   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | found host DHCP lease matching {name: "kubernetes-upgrade-045794", mac: "52:54:00:45:c2:93", ip: "192.168.72.174"} in network mk-kubernetes-upgrade-045794: {Iface:virbr4 ExpiryTime:2024-09-16 12:33:49 +0000 UTC Type:0 Mac:52:54:00:45:c2:93 Iaid: IPaddr:192.168.72.174 Prefix:24 Hostname:kubernetes-upgrade-045794 Clientid:01:52:54:00:45:c2:93}
	I0916 11:33:54.849162   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Waiting for SSH to be available...
	I0916 11:33:54.849202   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | skip adding static IP to network mk-kubernetes-upgrade-045794 - found existing host DHCP lease matching {name: "kubernetes-upgrade-045794", mac: "52:54:00:45:c2:93", ip: "192.168.72.174"}
	I0916 11:33:54.849220   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | Getting to WaitForSSH function...
	I0916 11:33:54.851586   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:33:54.852004   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c2:93", ip: ""} in network mk-kubernetes-upgrade-045794: {Iface:virbr4 ExpiryTime:2024-09-16 12:33:49 +0000 UTC Type:0 Mac:52:54:00:45:c2:93 Iaid: IPaddr:192.168.72.174 Prefix:24 Hostname:kubernetes-upgrade-045794 Clientid:01:52:54:00:45:c2:93}
	I0916 11:33:54.852038   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined IP address 192.168.72.174 and MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:33:54.852194   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | Using SSH client type: external
	I0916 11:33:54.852225   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | Using SSH private key: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/kubernetes-upgrade-045794/id_rsa (-rw-------)
	I0916 11:33:54.852264   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.174 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19651-3851/.minikube/machines/kubernetes-upgrade-045794/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0916 11:33:54.852281   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | About to run SSH command:
	I0916 11:33:54.852294   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | exit 0
	I0916 11:33:54.981489   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | SSH cmd err, output: <nil>: 
	I0916 11:33:54.981871   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetConfigRaw
	I0916 11:33:54.982548   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetIP
	I0916 11:33:54.985166   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:33:54.985651   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c2:93", ip: ""} in network mk-kubernetes-upgrade-045794: {Iface:virbr4 ExpiryTime:2024-09-16 12:33:49 +0000 UTC Type:0 Mac:52:54:00:45:c2:93 Iaid: IPaddr:192.168.72.174 Prefix:24 Hostname:kubernetes-upgrade-045794 Clientid:01:52:54:00:45:c2:93}
	I0916 11:33:54.985685   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined IP address 192.168.72.174 and MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:33:54.985839   53296 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/kubernetes-upgrade-045794/config.json ...
	I0916 11:33:54.986030   53296 machine.go:93] provisionDockerMachine start ...
	I0916 11:33:54.986050   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .DriverName
	I0916 11:33:54.986258   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHHostname
	I0916 11:33:54.988317   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:33:54.988653   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c2:93", ip: ""} in network mk-kubernetes-upgrade-045794: {Iface:virbr4 ExpiryTime:2024-09-16 12:33:49 +0000 UTC Type:0 Mac:52:54:00:45:c2:93 Iaid: IPaddr:192.168.72.174 Prefix:24 Hostname:kubernetes-upgrade-045794 Clientid:01:52:54:00:45:c2:93}
	I0916 11:33:54.988683   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined IP address 192.168.72.174 and MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:33:54.988829   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHPort
	I0916 11:33:54.989014   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHKeyPath
	I0916 11:33:54.989195   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHKeyPath
	I0916 11:33:54.989349   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHUsername
	I0916 11:33:54.989531   53296 main.go:141] libmachine: Using SSH client type: native
	I0916 11:33:54.989748   53296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.174 22 <nil> <nil>}
	I0916 11:33:54.989763   53296 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 11:33:55.102685   53296 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0916 11:33:55.102726   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetMachineName
	I0916 11:33:55.102960   53296 buildroot.go:166] provisioning hostname "kubernetes-upgrade-045794"
	I0916 11:33:55.103000   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetMachineName
	I0916 11:33:55.103193   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHHostname
	I0916 11:33:55.106089   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:33:55.106470   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c2:93", ip: ""} in network mk-kubernetes-upgrade-045794: {Iface:virbr4 ExpiryTime:2024-09-16 12:33:49 +0000 UTC Type:0 Mac:52:54:00:45:c2:93 Iaid: IPaddr:192.168.72.174 Prefix:24 Hostname:kubernetes-upgrade-045794 Clientid:01:52:54:00:45:c2:93}
	I0916 11:33:55.106503   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined IP address 192.168.72.174 and MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:33:55.106672   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHPort
	I0916 11:33:55.106847   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHKeyPath
	I0916 11:33:55.107021   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHKeyPath
	I0916 11:33:55.107148   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHUsername
	I0916 11:33:55.107296   53296 main.go:141] libmachine: Using SSH client type: native
	I0916 11:33:55.107455   53296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.174 22 <nil> <nil>}
	I0916 11:33:55.107466   53296 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-045794 && echo "kubernetes-upgrade-045794" | sudo tee /etc/hostname
	I0916 11:33:55.228580   53296 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-045794
	
	I0916 11:33:55.228608   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHHostname
	I0916 11:33:55.231670   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:33:55.232010   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c2:93", ip: ""} in network mk-kubernetes-upgrade-045794: {Iface:virbr4 ExpiryTime:2024-09-16 12:33:49 +0000 UTC Type:0 Mac:52:54:00:45:c2:93 Iaid: IPaddr:192.168.72.174 Prefix:24 Hostname:kubernetes-upgrade-045794 Clientid:01:52:54:00:45:c2:93}
	I0916 11:33:55.232041   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined IP address 192.168.72.174 and MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:33:55.232215   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHPort
	I0916 11:33:55.232420   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHKeyPath
	I0916 11:33:55.232584   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHKeyPath
	I0916 11:33:55.232731   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHUsername
	I0916 11:33:55.232930   53296 main.go:141] libmachine: Using SSH client type: native
	I0916 11:33:55.233152   53296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.174 22 <nil> <nil>}
	I0916 11:33:55.233176   53296 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-045794' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-045794/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-045794' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 11:33:55.346764   53296 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 11:33:55.346792   53296 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3851/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3851/.minikube}
	I0916 11:33:55.346845   53296 buildroot.go:174] setting up certificates
	I0916 11:33:55.346861   53296 provision.go:84] configureAuth start
	I0916 11:33:55.346872   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetMachineName
	I0916 11:33:55.347162   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetIP
	I0916 11:33:55.349791   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:33:55.350151   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c2:93", ip: ""} in network mk-kubernetes-upgrade-045794: {Iface:virbr4 ExpiryTime:2024-09-16 12:33:49 +0000 UTC Type:0 Mac:52:54:00:45:c2:93 Iaid: IPaddr:192.168.72.174 Prefix:24 Hostname:kubernetes-upgrade-045794 Clientid:01:52:54:00:45:c2:93}
	I0916 11:33:55.350175   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined IP address 192.168.72.174 and MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:33:55.350354   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHHostname
	I0916 11:33:55.352509   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:33:55.352843   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c2:93", ip: ""} in network mk-kubernetes-upgrade-045794: {Iface:virbr4 ExpiryTime:2024-09-16 12:33:49 +0000 UTC Type:0 Mac:52:54:00:45:c2:93 Iaid: IPaddr:192.168.72.174 Prefix:24 Hostname:kubernetes-upgrade-045794 Clientid:01:52:54:00:45:c2:93}
	I0916 11:33:55.352871   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined IP address 192.168.72.174 and MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:33:55.352951   53296 provision.go:143] copyHostCerts
	I0916 11:33:55.353008   53296 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem, removing ...
	I0916 11:33:55.353018   53296 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem
	I0916 11:33:55.353072   53296 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem (1123 bytes)
	I0916 11:33:55.353186   53296 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem, removing ...
	I0916 11:33:55.353195   53296 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem
	I0916 11:33:55.353219   53296 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem (1679 bytes)
	I0916 11:33:55.353306   53296 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem, removing ...
	I0916 11:33:55.353318   53296 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem
	I0916 11:33:55.353347   53296 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem (1082 bytes)
	I0916 11:33:55.353405   53296 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-045794 san=[127.0.0.1 192.168.72.174 kubernetes-upgrade-045794 localhost minikube]
	I0916 11:33:55.685184   53296 provision.go:177] copyRemoteCerts
	I0916 11:33:55.685257   53296 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 11:33:55.685293   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHHostname
	I0916 11:33:55.688134   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:33:55.688503   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c2:93", ip: ""} in network mk-kubernetes-upgrade-045794: {Iface:virbr4 ExpiryTime:2024-09-16 12:33:49 +0000 UTC Type:0 Mac:52:54:00:45:c2:93 Iaid: IPaddr:192.168.72.174 Prefix:24 Hostname:kubernetes-upgrade-045794 Clientid:01:52:54:00:45:c2:93}
	I0916 11:33:55.688536   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined IP address 192.168.72.174 and MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:33:55.688749   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHPort
	I0916 11:33:55.688955   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHKeyPath
	I0916 11:33:55.689120   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHUsername
	I0916 11:33:55.689277   53296 sshutil.go:53] new ssh client: &{IP:192.168.72.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/kubernetes-upgrade-045794/id_rsa Username:docker}
	I0916 11:33:55.775896   53296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 11:33:55.806322   53296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0916 11:33:55.837680   53296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 11:33:55.865686   53296 provision.go:87] duration metric: took 518.809353ms to configureAuth
	I0916 11:33:55.865719   53296 buildroot.go:189] setting minikube options for container-runtime
	I0916 11:33:55.865952   53296 config.go:182] Loaded profile config "kubernetes-upgrade-045794": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:33:55.866044   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHHostname
	I0916 11:33:55.869029   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:33:55.869587   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c2:93", ip: ""} in network mk-kubernetes-upgrade-045794: {Iface:virbr4 ExpiryTime:2024-09-16 12:33:49 +0000 UTC Type:0 Mac:52:54:00:45:c2:93 Iaid: IPaddr:192.168.72.174 Prefix:24 Hostname:kubernetes-upgrade-045794 Clientid:01:52:54:00:45:c2:93}
	I0916 11:33:55.869621   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined IP address 192.168.72.174 and MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:33:55.869853   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHPort
	I0916 11:33:55.870090   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHKeyPath
	I0916 11:33:55.870286   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHKeyPath
	I0916 11:33:55.870446   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHUsername
	I0916 11:33:55.870622   53296 main.go:141] libmachine: Using SSH client type: native
	I0916 11:33:55.870824   53296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.174 22 <nil> <nil>}
	I0916 11:33:55.870839   53296 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 11:33:56.108814   53296 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 11:33:56.108855   53296 machine.go:96] duration metric: took 1.122810946s to provisionDockerMachine
	I0916 11:33:56.108866   53296 start.go:293] postStartSetup for "kubernetes-upgrade-045794" (driver="kvm2")
	I0916 11:33:56.108875   53296 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 11:33:56.108908   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .DriverName
	I0916 11:33:56.109192   53296 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 11:33:56.109221   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHHostname
	I0916 11:33:56.112047   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:33:56.112379   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c2:93", ip: ""} in network mk-kubernetes-upgrade-045794: {Iface:virbr4 ExpiryTime:2024-09-16 12:33:49 +0000 UTC Type:0 Mac:52:54:00:45:c2:93 Iaid: IPaddr:192.168.72.174 Prefix:24 Hostname:kubernetes-upgrade-045794 Clientid:01:52:54:00:45:c2:93}
	I0916 11:33:56.112406   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined IP address 192.168.72.174 and MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:33:56.112525   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHPort
	I0916 11:33:56.112704   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHKeyPath
	I0916 11:33:56.112856   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHUsername
	I0916 11:33:56.113017   53296 sshutil.go:53] new ssh client: &{IP:192.168.72.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/kubernetes-upgrade-045794/id_rsa Username:docker}
	I0916 11:33:56.199954   53296 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 11:33:56.204233   53296 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 11:33:56.204264   53296 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3851/.minikube/addons for local assets ...
	I0916 11:33:56.204346   53296 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3851/.minikube/files for local assets ...
	I0916 11:33:56.204443   53296 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem -> 112032.pem in /etc/ssl/certs
	I0916 11:33:56.204576   53296 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 11:33:56.214009   53296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem --> /etc/ssl/certs/112032.pem (1708 bytes)
	I0916 11:33:56.239696   53296 start.go:296] duration metric: took 130.815109ms for postStartSetup
	I0916 11:33:56.239742   53296 fix.go:56] duration metric: took 19.421231389s for fixHost
	I0916 11:33:56.239767   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHHostname
	I0916 11:33:56.242553   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:33:56.242867   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c2:93", ip: ""} in network mk-kubernetes-upgrade-045794: {Iface:virbr4 ExpiryTime:2024-09-16 12:33:49 +0000 UTC Type:0 Mac:52:54:00:45:c2:93 Iaid: IPaddr:192.168.72.174 Prefix:24 Hostname:kubernetes-upgrade-045794 Clientid:01:52:54:00:45:c2:93}
	I0916 11:33:56.242902   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined IP address 192.168.72.174 and MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:33:56.243028   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHPort
	I0916 11:33:56.243235   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHKeyPath
	I0916 11:33:56.243419   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHKeyPath
	I0916 11:33:56.243584   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHUsername
	I0916 11:33:56.243767   53296 main.go:141] libmachine: Using SSH client type: native
	I0916 11:33:56.243966   53296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.174 22 <nil> <nil>}
	I0916 11:33:56.243977   53296 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 11:33:56.350429   53296 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726486436.316761204
	
	I0916 11:33:56.350504   53296 fix.go:216] guest clock: 1726486436.316761204
	I0916 11:33:56.350537   53296 fix.go:229] Guest: 2024-09-16 11:33:56.316761204 +0000 UTC Remote: 2024-09-16 11:33:56.239746496 +0000 UTC m=+48.751928744 (delta=77.014708ms)
	I0916 11:33:56.350581   53296 fix.go:200] guest clock delta is within tolerance: 77.014708ms
	I0916 11:33:56.350588   53296 start.go:83] releasing machines lock for "kubernetes-upgrade-045794", held for 19.532108352s
	I0916 11:33:56.350620   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .DriverName
	I0916 11:33:56.350881   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetIP
	I0916 11:33:56.354027   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:33:56.354503   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c2:93", ip: ""} in network mk-kubernetes-upgrade-045794: {Iface:virbr4 ExpiryTime:2024-09-16 12:33:49 +0000 UTC Type:0 Mac:52:54:00:45:c2:93 Iaid: IPaddr:192.168.72.174 Prefix:24 Hostname:kubernetes-upgrade-045794 Clientid:01:52:54:00:45:c2:93}
	I0916 11:33:56.354539   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined IP address 192.168.72.174 and MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:33:56.354719   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .DriverName
	I0916 11:33:56.355232   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .DriverName
	I0916 11:33:56.355405   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .DriverName
	I0916 11:33:56.355490   53296 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 11:33:56.355547   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHHostname
	I0916 11:33:56.355792   53296 ssh_runner.go:195] Run: cat /version.json
	I0916 11:33:56.355818   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHHostname
	I0916 11:33:56.358817   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:33:56.359004   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:33:56.359192   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c2:93", ip: ""} in network mk-kubernetes-upgrade-045794: {Iface:virbr4 ExpiryTime:2024-09-16 12:33:49 +0000 UTC Type:0 Mac:52:54:00:45:c2:93 Iaid: IPaddr:192.168.72.174 Prefix:24 Hostname:kubernetes-upgrade-045794 Clientid:01:52:54:00:45:c2:93}
	I0916 11:33:56.359237   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined IP address 192.168.72.174 and MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:33:56.359328   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHPort
	I0916 11:33:56.359480   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHKeyPath
	I0916 11:33:56.359611   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c2:93", ip: ""} in network mk-kubernetes-upgrade-045794: {Iface:virbr4 ExpiryTime:2024-09-16 12:33:49 +0000 UTC Type:0 Mac:52:54:00:45:c2:93 Iaid: IPaddr:192.168.72.174 Prefix:24 Hostname:kubernetes-upgrade-045794 Clientid:01:52:54:00:45:c2:93}
	I0916 11:33:56.359629   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined IP address 192.168.72.174 and MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:33:56.359672   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHUsername
	I0916 11:33:56.359772   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHPort
	I0916 11:33:56.359918   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHKeyPath
	I0916 11:33:56.360009   53296 sshutil.go:53] new ssh client: &{IP:192.168.72.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/kubernetes-upgrade-045794/id_rsa Username:docker}
	I0916 11:33:56.360380   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHUsername
	I0916 11:33:56.360523   53296 sshutil.go:53] new ssh client: &{IP:192.168.72.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/kubernetes-upgrade-045794/id_rsa Username:docker}
	I0916 11:33:56.461497   53296 ssh_runner.go:195] Run: systemctl --version
	I0916 11:33:56.467573   53296 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 11:33:56.627011   53296 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0916 11:33:56.633035   53296 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 11:33:56.633108   53296 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 11:33:56.652476   53296 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0916 11:33:56.652503   53296 start.go:495] detecting cgroup driver to use...
	I0916 11:33:56.652570   53296 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 11:33:56.672263   53296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 11:33:56.688835   53296 docker.go:217] disabling cri-docker service (if available) ...
	I0916 11:33:56.688910   53296 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 11:33:56.704081   53296 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 11:33:56.718968   53296 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 11:33:56.844526   53296 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 11:33:57.005768   53296 docker.go:233] disabling docker service ...
	I0916 11:33:57.005832   53296 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 11:33:57.020947   53296 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 11:33:57.034629   53296 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 11:33:57.205079   53296 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 11:33:57.351790   53296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 11:33:57.367045   53296 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 11:33:57.390089   53296 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 11:33:57.390165   53296 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:33:57.401322   53296 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 11:33:57.401382   53296 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:33:57.412365   53296 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:33:57.423532   53296 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:33:57.434431   53296 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 11:33:57.445775   53296 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:33:57.457296   53296 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:33:57.474650   53296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
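Editor's note: the sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, default_sysctls). A rough pure-Go equivalent of the first two substitutions, assuming the config file is readable and writable locally; this is a sketch only, since minikube issues the sed commands over SSH exactly as logged:

package main

import (
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	// Equivalent of: sed 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	out := pause.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	// Equivalent of: sed 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	out = cgroup.ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(conf, out, 0644); err != nil {
		panic(err)
	}
}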
	I0916 11:33:57.485611   53296 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 11:33:57.495455   53296 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0916 11:33:57.495514   53296 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0916 11:33:57.509424   53296 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
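Editor's note: the failed sysctl probe above is expected to be non-fatal. If /proc/sys/net/bridge/bridge-nf-call-iptables is missing, the br_netfilter module simply is not loaded yet, so minikube loads it and then enables IPv4 forwarding. A small sketch of that fallback, assuming root privileges (illustrative only):

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Loading br_netfilter creates the /proc/sys/net/bridge/* sysctls.
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			log.Fatalf("modprobe br_netfilter: %v: %s", err, out)
		}
	}
	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644); err != nil {
		log.Fatal(err)
	}
}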
	I0916 11:33:57.519260   53296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:33:57.633978   53296 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 11:33:57.726669   53296 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 11:33:57.726750   53296 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 11:33:57.731372   53296 start.go:563] Will wait 60s for crictl version
	I0916 11:33:57.731436   53296 ssh_runner.go:195] Run: which crictl
	I0916 11:33:57.735197   53296 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 11:33:57.775651   53296 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0916 11:33:57.775735   53296 ssh_runner.go:195] Run: crio --version
	I0916 11:33:57.805396   53296 ssh_runner.go:195] Run: crio --version
	I0916 11:33:57.843653   53296 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0916 11:33:56.376536   53522 machine.go:93] provisionDockerMachine start ...
	I0916 11:33:56.376556   53522 main.go:141] libmachine: (pause-902210) Calling .DriverName
	I0916 11:33:56.376733   53522 main.go:141] libmachine: (pause-902210) Calling .GetSSHHostname
	I0916 11:33:56.379744   53522 main.go:141] libmachine: (pause-902210) DBG | domain pause-902210 has defined MAC address 52:54:00:a8:e0:34 in network mk-pause-902210
	I0916 11:33:56.380292   53522 main.go:141] libmachine: (pause-902210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:e0:34", ip: ""} in network mk-pause-902210: {Iface:virbr3 ExpiryTime:2024-09-16 12:32:50 +0000 UTC Type:0 Mac:52:54:00:a8:e0:34 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:pause-902210 Clientid:01:52:54:00:a8:e0:34}
	I0916 11:33:56.380320   53522 main.go:141] libmachine: (pause-902210) DBG | domain pause-902210 has defined IP address 192.168.39.244 and MAC address 52:54:00:a8:e0:34 in network mk-pause-902210
	I0916 11:33:56.380453   53522 main.go:141] libmachine: (pause-902210) Calling .GetSSHPort
	I0916 11:33:56.380591   53522 main.go:141] libmachine: (pause-902210) Calling .GetSSHKeyPath
	I0916 11:33:56.380730   53522 main.go:141] libmachine: (pause-902210) Calling .GetSSHKeyPath
	I0916 11:33:56.380896   53522 main.go:141] libmachine: (pause-902210) Calling .GetSSHUsername
	I0916 11:33:56.381046   53522 main.go:141] libmachine: Using SSH client type: native
	I0916 11:33:56.381322   53522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0916 11:33:56.381336   53522 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 11:33:56.494409   53522 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-902210
	
	I0916 11:33:56.494439   53522 main.go:141] libmachine: (pause-902210) Calling .GetMachineName
	I0916 11:33:56.494708   53522 buildroot.go:166] provisioning hostname "pause-902210"
	I0916 11:33:56.494739   53522 main.go:141] libmachine: (pause-902210) Calling .GetMachineName
	I0916 11:33:56.494943   53522 main.go:141] libmachine: (pause-902210) Calling .GetSSHHostname
	I0916 11:33:56.497923   53522 main.go:141] libmachine: (pause-902210) DBG | domain pause-902210 has defined MAC address 52:54:00:a8:e0:34 in network mk-pause-902210
	I0916 11:33:56.498312   53522 main.go:141] libmachine: (pause-902210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:e0:34", ip: ""} in network mk-pause-902210: {Iface:virbr3 ExpiryTime:2024-09-16 12:32:50 +0000 UTC Type:0 Mac:52:54:00:a8:e0:34 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:pause-902210 Clientid:01:52:54:00:a8:e0:34}
	I0916 11:33:56.498340   53522 main.go:141] libmachine: (pause-902210) DBG | domain pause-902210 has defined IP address 192.168.39.244 and MAC address 52:54:00:a8:e0:34 in network mk-pause-902210
	I0916 11:33:56.498449   53522 main.go:141] libmachine: (pause-902210) Calling .GetSSHPort
	I0916 11:33:56.498599   53522 main.go:141] libmachine: (pause-902210) Calling .GetSSHKeyPath
	I0916 11:33:56.498798   53522 main.go:141] libmachine: (pause-902210) Calling .GetSSHKeyPath
	I0916 11:33:56.498952   53522 main.go:141] libmachine: (pause-902210) Calling .GetSSHUsername
	I0916 11:33:56.499114   53522 main.go:141] libmachine: Using SSH client type: native
	I0916 11:33:56.499323   53522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0916 11:33:56.499336   53522 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-902210 && echo "pause-902210" | sudo tee /etc/hostname
	I0916 11:33:56.621720   53522 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-902210
	
	I0916 11:33:56.621761   53522 main.go:141] libmachine: (pause-902210) Calling .GetSSHHostname
	I0916 11:33:56.624804   53522 main.go:141] libmachine: (pause-902210) DBG | domain pause-902210 has defined MAC address 52:54:00:a8:e0:34 in network mk-pause-902210
	I0916 11:33:56.625160   53522 main.go:141] libmachine: (pause-902210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:e0:34", ip: ""} in network mk-pause-902210: {Iface:virbr3 ExpiryTime:2024-09-16 12:32:50 +0000 UTC Type:0 Mac:52:54:00:a8:e0:34 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:pause-902210 Clientid:01:52:54:00:a8:e0:34}
	I0916 11:33:56.625203   53522 main.go:141] libmachine: (pause-902210) DBG | domain pause-902210 has defined IP address 192.168.39.244 and MAC address 52:54:00:a8:e0:34 in network mk-pause-902210
	I0916 11:33:56.625383   53522 main.go:141] libmachine: (pause-902210) Calling .GetSSHPort
	I0916 11:33:56.625554   53522 main.go:141] libmachine: (pause-902210) Calling .GetSSHKeyPath
	I0916 11:33:56.625715   53522 main.go:141] libmachine: (pause-902210) Calling .GetSSHKeyPath
	I0916 11:33:56.625871   53522 main.go:141] libmachine: (pause-902210) Calling .GetSSHUsername
	I0916 11:33:56.626073   53522 main.go:141] libmachine: Using SSH client type: native
	I0916 11:33:56.626289   53522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0916 11:33:56.626313   53522 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-902210' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-902210/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-902210' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 11:33:56.734230   53522 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 11:33:56.734262   53522 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3851/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3851/.minikube}
	I0916 11:33:56.734284   53522 buildroot.go:174] setting up certificates
	I0916 11:33:56.734292   53522 provision.go:84] configureAuth start
	I0916 11:33:56.734300   53522 main.go:141] libmachine: (pause-902210) Calling .GetMachineName
	I0916 11:33:56.734580   53522 main.go:141] libmachine: (pause-902210) Calling .GetIP
	I0916 11:33:56.737451   53522 main.go:141] libmachine: (pause-902210) DBG | domain pause-902210 has defined MAC address 52:54:00:a8:e0:34 in network mk-pause-902210
	I0916 11:33:56.737856   53522 main.go:141] libmachine: (pause-902210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:e0:34", ip: ""} in network mk-pause-902210: {Iface:virbr3 ExpiryTime:2024-09-16 12:32:50 +0000 UTC Type:0 Mac:52:54:00:a8:e0:34 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:pause-902210 Clientid:01:52:54:00:a8:e0:34}
	I0916 11:33:56.737885   53522 main.go:141] libmachine: (pause-902210) DBG | domain pause-902210 has defined IP address 192.168.39.244 and MAC address 52:54:00:a8:e0:34 in network mk-pause-902210
	I0916 11:33:56.738078   53522 main.go:141] libmachine: (pause-902210) Calling .GetSSHHostname
	I0916 11:33:56.740757   53522 main.go:141] libmachine: (pause-902210) DBG | domain pause-902210 has defined MAC address 52:54:00:a8:e0:34 in network mk-pause-902210
	I0916 11:33:56.741178   53522 main.go:141] libmachine: (pause-902210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:e0:34", ip: ""} in network mk-pause-902210: {Iface:virbr3 ExpiryTime:2024-09-16 12:32:50 +0000 UTC Type:0 Mac:52:54:00:a8:e0:34 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:pause-902210 Clientid:01:52:54:00:a8:e0:34}
	I0916 11:33:56.741218   53522 main.go:141] libmachine: (pause-902210) DBG | domain pause-902210 has defined IP address 192.168.39.244 and MAC address 52:54:00:a8:e0:34 in network mk-pause-902210
	I0916 11:33:56.741398   53522 provision.go:143] copyHostCerts
	I0916 11:33:56.741470   53522 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem, removing ...
	I0916 11:33:56.741484   53522 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem
	I0916 11:33:56.741567   53522 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/ca.pem (1082 bytes)
	I0916 11:33:56.741752   53522 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem, removing ...
	I0916 11:33:56.741767   53522 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem
	I0916 11:33:56.741804   53522 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/cert.pem (1123 bytes)
	I0916 11:33:56.741908   53522 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem, removing ...
	I0916 11:33:56.741921   53522 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem
	I0916 11:33:56.741951   53522 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3851/.minikube/key.pem (1679 bytes)
	I0916 11:33:56.742041   53522 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem org=jenkins.pause-902210 san=[127.0.0.1 192.168.39.244 localhost minikube pause-902210]
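Editor's note: the server certificate above is signed by the local minikube CA and carries both IP and DNS SANs (127.0.0.1, 192.168.39.244, localhost, minikube, pause-902210). A condensed crypto/x509 sketch of issuing such a cert from an existing CA key pair; file paths, the PKCS#1 key format, and the serial-number handling are assumptions made for illustration, not minikube's actual code:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

// loadPEM reads a file and returns its first PEM block; paths are illustrative.
func loadPEM(path string) *pem.Block {
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM data in " + path)
	}
	return block
}

func main() {
	caCert, err := x509.ParseCertificate(loadPEM("ca.pem").Bytes)
	if err != nil {
		panic(err)
	}
	// Assumes an RSA CA key stored in PKCS#1 form ("RSA PRIVATE KEY").
	caKey, err := x509.ParsePKCS1PrivateKey(loadPEM("ca-key.pem").Bytes)
	if err != nil {
		panic(err)
	}
	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.pause-902210"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs taken from the log line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.244")},
		DNSNames:    []string{"localhost", "minikube", "pause-902210"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}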
	I0916 11:33:56.910294   53522 provision.go:177] copyRemoteCerts
	I0916 11:33:56.910358   53522 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 11:33:56.910380   53522 main.go:141] libmachine: (pause-902210) Calling .GetSSHHostname
	I0916 11:33:56.912992   53522 main.go:141] libmachine: (pause-902210) DBG | domain pause-902210 has defined MAC address 52:54:00:a8:e0:34 in network mk-pause-902210
	I0916 11:33:56.913340   53522 main.go:141] libmachine: (pause-902210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:e0:34", ip: ""} in network mk-pause-902210: {Iface:virbr3 ExpiryTime:2024-09-16 12:32:50 +0000 UTC Type:0 Mac:52:54:00:a8:e0:34 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:pause-902210 Clientid:01:52:54:00:a8:e0:34}
	I0916 11:33:56.913370   53522 main.go:141] libmachine: (pause-902210) DBG | domain pause-902210 has defined IP address 192.168.39.244 and MAC address 52:54:00:a8:e0:34 in network mk-pause-902210
	I0916 11:33:56.913550   53522 main.go:141] libmachine: (pause-902210) Calling .GetSSHPort
	I0916 11:33:56.913757   53522 main.go:141] libmachine: (pause-902210) Calling .GetSSHKeyPath
	I0916 11:33:56.913909   53522 main.go:141] libmachine: (pause-902210) Calling .GetSSHUsername
	I0916 11:33:56.914058   53522 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/pause-902210/id_rsa Username:docker}
	I0916 11:33:56.999716   53522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 11:33:57.033158   53522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 11:33:57.062611   53522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 11:33:57.090183   53522 provision.go:87] duration metric: took 355.87744ms to configureAuth
	I0916 11:33:57.090216   53522 buildroot.go:189] setting minikube options for container-runtime
	I0916 11:33:57.090476   53522 config.go:182] Loaded profile config "pause-902210": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:33:57.090581   53522 main.go:141] libmachine: (pause-902210) Calling .GetSSHHostname
	I0916 11:33:57.093468   53522 main.go:141] libmachine: (pause-902210) DBG | domain pause-902210 has defined MAC address 52:54:00:a8:e0:34 in network mk-pause-902210
	I0916 11:33:57.093854   53522 main.go:141] libmachine: (pause-902210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:e0:34", ip: ""} in network mk-pause-902210: {Iface:virbr3 ExpiryTime:2024-09-16 12:32:50 +0000 UTC Type:0 Mac:52:54:00:a8:e0:34 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:pause-902210 Clientid:01:52:54:00:a8:e0:34}
	I0916 11:33:57.093882   53522 main.go:141] libmachine: (pause-902210) DBG | domain pause-902210 has defined IP address 192.168.39.244 and MAC address 52:54:00:a8:e0:34 in network mk-pause-902210
	I0916 11:33:57.094074   53522 main.go:141] libmachine: (pause-902210) Calling .GetSSHPort
	I0916 11:33:57.094274   53522 main.go:141] libmachine: (pause-902210) Calling .GetSSHKeyPath
	I0916 11:33:57.094414   53522 main.go:141] libmachine: (pause-902210) Calling .GetSSHKeyPath
	I0916 11:33:57.094525   53522 main.go:141] libmachine: (pause-902210) Calling .GetSSHUsername
	I0916 11:33:57.094665   53522 main.go:141] libmachine: Using SSH client type: native
	I0916 11:33:57.094886   53522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0916 11:33:57.094915   53522 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 11:33:56.799332   53156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:33:57.298918   53156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:33:57.799505   53156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:33:58.299552   53156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:33:58.798941   53156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:33:59.299652   53156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:33:59.799911   53156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:33:59.952192   53156 kubeadm.go:1113] duration metric: took 4.305910378s to wait for elevateKubeSystemPrivileges
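Editor's note: the repeated `kubectl get sa default` runs above are a poll loop. kubeadm's bootstrap finishes asynchronously, so minikube retries until the default service account exists (about 4.3s in this run). A hedged sketch of the same wait, shelling out to a kubectl binary at an assumed path; the helper name is hypothetical:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls `kubectl get sa default` until it succeeds or the
// deadline passes. Binary path and kubeconfig are placeholders.
func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command(kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // roughly matches the ~0.5s spacing in the log
	}
	return fmt.Errorf("default service account not ready after %v", timeout)
}

func main() {
	err := waitForDefaultSA("/var/lib/minikube/binaries/v1.31.1/kubectl",
		"/var/lib/minikube/kubeconfig", 2*time.Minute)
	fmt.Println("wait result:", err)
}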
	I0916 11:33:59.952236   53156 kubeadm.go:394] duration metric: took 15.560823452s to StartCluster
	I0916 11:33:59.952258   53156 settings.go:142] acquiring lock: {Name:mka3093e7066e81538e0a2685a28fdafde8d951b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:33:59.952346   53156 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3851/kubeconfig
	I0916 11:33:59.953643   53156 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/kubeconfig: {Name:mkd6460249badead51f07eabf604da45528f6a24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:33:59.953990   53156 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.61.144 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 11:33:59.954170   53156 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 11:33:59.954250   53156 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 11:33:59.954347   53156 addons.go:69] Setting storage-provisioner=true in profile "auto-957670"
	I0916 11:33:59.954366   53156 addons.go:234] Setting addon storage-provisioner=true in "auto-957670"
	I0916 11:33:59.954366   53156 addons.go:69] Setting default-storageclass=true in profile "auto-957670"
	I0916 11:33:59.954377   53156 config.go:182] Loaded profile config "auto-957670": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:33:59.954392   53156 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-957670"
	I0916 11:33:59.954397   53156 host.go:66] Checking if "auto-957670" exists ...
	I0916 11:33:59.954872   53156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 11:33:59.954884   53156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 11:33:59.954906   53156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 11:33:59.954916   53156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 11:33:59.956005   53156 out.go:177] * Verifying Kubernetes components...
	I0916 11:33:59.957506   53156 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:33:59.974949   53156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33635
	I0916 11:33:59.975581   53156 main.go:141] libmachine: () Calling .GetVersion
	I0916 11:33:59.975681   53156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40885
	I0916 11:33:59.976113   53156 main.go:141] libmachine: () Calling .GetVersion
	I0916 11:33:59.976393   53156 main.go:141] libmachine: Using API Version  1
	I0916 11:33:59.976414   53156 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 11:33:59.976811   53156 main.go:141] libmachine: () Calling .GetMachineName
	I0916 11:33:59.976990   53156 main.go:141] libmachine: Using API Version  1
	I0916 11:33:59.977010   53156 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 11:33:59.977028   53156 main.go:141] libmachine: (auto-957670) Calling .GetState
	I0916 11:33:59.977621   53156 main.go:141] libmachine: () Calling .GetMachineName
	I0916 11:33:59.980776   53156 addons.go:234] Setting addon default-storageclass=true in "auto-957670"
	I0916 11:33:59.980824   53156 host.go:66] Checking if "auto-957670" exists ...
	I0916 11:33:59.981224   53156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 11:33:59.981255   53156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 11:33:59.981874   53156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 11:33:59.981900   53156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 11:34:00.000081   53156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40837
	I0916 11:34:00.000682   53156 main.go:141] libmachine: () Calling .GetVersion
	I0916 11:34:00.001311   53156 main.go:141] libmachine: Using API Version  1
	I0916 11:34:00.001336   53156 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 11:34:00.001723   53156 main.go:141] libmachine: () Calling .GetMachineName
	I0916 11:34:00.002333   53156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 11:34:00.002383   53156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 11:34:00.006085   53156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33673
	I0916 11:34:00.006614   53156 main.go:141] libmachine: () Calling .GetVersion
	I0916 11:34:00.007276   53156 main.go:141] libmachine: Using API Version  1
	I0916 11:34:00.007298   53156 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 11:34:00.007660   53156 main.go:141] libmachine: () Calling .GetMachineName
	I0916 11:34:00.007870   53156 main.go:141] libmachine: (auto-957670) Calling .GetState
	I0916 11:34:00.010025   53156 main.go:141] libmachine: (auto-957670) Calling .DriverName
	I0916 11:34:00.015406   53156 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:34:00.016950   53156 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:34:00.016974   53156 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 11:34:00.016999   53156 main.go:141] libmachine: (auto-957670) Calling .GetSSHHostname
	I0916 11:34:00.020452   53156 main.go:141] libmachine: (auto-957670) DBG | domain auto-957670 has defined MAC address 52:54:00:15:f6:29 in network mk-auto-957670
	I0916 11:34:00.020912   53156 main.go:141] libmachine: (auto-957670) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:f6:29", ip: ""} in network mk-auto-957670: {Iface:virbr1 ExpiryTime:2024-09-16 12:33:25 +0000 UTC Type:0 Mac:52:54:00:15:f6:29 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:auto-957670 Clientid:01:52:54:00:15:f6:29}
	I0916 11:34:00.020942   53156 main.go:141] libmachine: (auto-957670) DBG | domain auto-957670 has defined IP address 192.168.61.144 and MAC address 52:54:00:15:f6:29 in network mk-auto-957670
	I0916 11:34:00.021217   53156 main.go:141] libmachine: (auto-957670) Calling .GetSSHPort
	I0916 11:34:00.021445   53156 main.go:141] libmachine: (auto-957670) Calling .GetSSHKeyPath
	I0916 11:34:00.021596   53156 main.go:141] libmachine: (auto-957670) Calling .GetSSHUsername
	I0916 11:34:00.021737   53156 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/auto-957670/id_rsa Username:docker}
	I0916 11:34:00.023594   53156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36141
	I0916 11:34:00.023950   53156 main.go:141] libmachine: () Calling .GetVersion
	I0916 11:34:00.024547   53156 main.go:141] libmachine: Using API Version  1
	I0916 11:34:00.024569   53156 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 11:34:00.024870   53156 main.go:141] libmachine: () Calling .GetMachineName
	I0916 11:34:00.025064   53156 main.go:141] libmachine: (auto-957670) Calling .GetState
	I0916 11:34:00.026656   53156 main.go:141] libmachine: (auto-957670) Calling .DriverName
	I0916 11:34:00.026847   53156 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 11:34:00.026864   53156 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 11:34:00.026881   53156 main.go:141] libmachine: (auto-957670) Calling .GetSSHHostname
	I0916 11:34:00.030338   53156 main.go:141] libmachine: (auto-957670) DBG | domain auto-957670 has defined MAC address 52:54:00:15:f6:29 in network mk-auto-957670
	I0916 11:34:00.030790   53156 main.go:141] libmachine: (auto-957670) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:f6:29", ip: ""} in network mk-auto-957670: {Iface:virbr1 ExpiryTime:2024-09-16 12:33:25 +0000 UTC Type:0 Mac:52:54:00:15:f6:29 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:auto-957670 Clientid:01:52:54:00:15:f6:29}
	I0916 11:34:00.030809   53156 main.go:141] libmachine: (auto-957670) DBG | domain auto-957670 has defined IP address 192.168.61.144 and MAC address 52:54:00:15:f6:29 in network mk-auto-957670
	I0916 11:34:00.030984   53156 main.go:141] libmachine: (auto-957670) Calling .GetSSHPort
	I0916 11:34:00.031166   53156 main.go:141] libmachine: (auto-957670) Calling .GetSSHKeyPath
	I0916 11:34:00.031314   53156 main.go:141] libmachine: (auto-957670) Calling .GetSSHUsername
	I0916 11:34:00.031449   53156 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/auto-957670/id_rsa Username:docker}
	I0916 11:34:00.137230   53156 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 11:34:00.191435   53156 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:34:00.381163   53156 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:34:00.431699   53156 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 11:34:00.787190   53156 start.go:971] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0916 11:34:00.788439   53156 node_ready.go:35] waiting up to 15m0s for node "auto-957670" to be "Ready" ...
	I0916 11:34:00.800104   53156 node_ready.go:49] node "auto-957670" has status "Ready":"True"
	I0916 11:34:00.800140   53156 node_ready.go:38] duration metric: took 11.674087ms for node "auto-957670" to be "Ready" ...
	I0916 11:34:00.800155   53156 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 11:34:00.812070   53156 pod_ready.go:79] waiting up to 15m0s for pod "coredns-7c65d6cfc9-s8lq9" in "kube-system" namespace to be "Ready" ...
	I0916 11:34:01.294447   53156 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-957670" context rescaled to 1 replicas
	I0916 11:34:01.297458   53156 main.go:141] libmachine: Making call to close driver server
	I0916 11:34:01.297486   53156 main.go:141] libmachine: (auto-957670) Calling .Close
	I0916 11:34:01.297504   53156 main.go:141] libmachine: Making call to close driver server
	I0916 11:34:01.297527   53156 main.go:141] libmachine: (auto-957670) Calling .Close
	I0916 11:34:01.297790   53156 main.go:141] libmachine: (auto-957670) DBG | Closing plugin on server side
	I0916 11:34:01.297849   53156 main.go:141] libmachine: (auto-957670) DBG | Closing plugin on server side
	I0916 11:34:01.297873   53156 main.go:141] libmachine: Successfully made call to close driver server
	I0916 11:34:01.297881   53156 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 11:34:01.297902   53156 main.go:141] libmachine: Making call to close driver server
	I0916 11:34:01.297917   53156 main.go:141] libmachine: (auto-957670) Calling .Close
	I0916 11:34:01.297883   53156 main.go:141] libmachine: Successfully made call to close driver server
	I0916 11:34:01.297987   53156 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 11:34:01.298010   53156 main.go:141] libmachine: Making call to close driver server
	I0916 11:34:01.298020   53156 main.go:141] libmachine: (auto-957670) Calling .Close
	I0916 11:34:01.298212   53156 main.go:141] libmachine: (auto-957670) DBG | Closing plugin on server side
	I0916 11:34:01.298240   53156 main.go:141] libmachine: Successfully made call to close driver server
	I0916 11:34:01.298247   53156 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 11:34:01.298284   53156 main.go:141] libmachine: (auto-957670) DBG | Closing plugin on server side
	I0916 11:34:01.298319   53156 main.go:141] libmachine: Successfully made call to close driver server
	I0916 11:34:01.298330   53156 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 11:34:01.335083   53156 main.go:141] libmachine: Making call to close driver server
	I0916 11:34:01.335118   53156 main.go:141] libmachine: (auto-957670) Calling .Close
	I0916 11:34:01.335394   53156 main.go:141] libmachine: Successfully made call to close driver server
	I0916 11:34:01.335413   53156 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 11:34:01.337289   53156 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0916 11:34:01.338564   53156 addons.go:510] duration metric: took 1.384309738s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0916 11:33:57.844950   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetIP
	I0916 11:33:57.847931   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:33:57.848347   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c2:93", ip: ""} in network mk-kubernetes-upgrade-045794: {Iface:virbr4 ExpiryTime:2024-09-16 12:33:49 +0000 UTC Type:0 Mac:52:54:00:45:c2:93 Iaid: IPaddr:192.168.72.174 Prefix:24 Hostname:kubernetes-upgrade-045794 Clientid:01:52:54:00:45:c2:93}
	I0916 11:33:57.848381   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined IP address 192.168.72.174 and MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:33:57.848564   53296 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0916 11:33:57.853071   53296 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
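Editor's note: the /etc/hosts update above is idempotent: any existing host.minikube.internal entry is filtered out, the fresh record is appended, and the result is copied back over /etc/hosts. A pure-Go rendering of the same pattern, assuming direct write access to the file (the original uses grep/echo plus sudo cp over SSH):

package main

import (
	"os"
	"strings"
)

// ensureHostRecord rewrites hostsPath so it contains exactly one record
// mapping host.minikube.internal to ip, mirroring the shell pipeline above.
func ensureHostRecord(hostsPath, ip string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue // drop any stale record
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\thost.minikube.internal")
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostRecord("/etc/hosts", "192.168.72.1"); err != nil {
		panic(err)
	}
}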
	I0916 11:33:57.865890   53296 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-045794 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.1 ClusterName:kubernetes-upgrade-045794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.174 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimi
zations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 11:33:57.865998   53296 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 11:33:57.866038   53296 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:33:57.909068   53296 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0916 11:33:57.909143   53296 ssh_runner.go:195] Run: which lz4
	I0916 11:33:57.912963   53296 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0916 11:33:57.917221   53296 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0916 11:33:57.917254   53296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0916 11:33:59.330233   53296 crio.go:462] duration metric: took 1.41729593s to copy over tarball
	I0916 11:33:59.330323   53296 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0916 11:34:01.500797   53296 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.170440919s)
	I0916 11:34:01.500831   53296 crio.go:469] duration metric: took 2.170566624s to extract the tarball
	I0916 11:34:01.500841   53296 ssh_runner.go:146] rm: /preloaded.tar.lz4
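Editor's note: the preload extraction above keeps extended attributes (security.capability) so restored binaries retain their file capabilities, and decompresses through lz4 into /var. A small sketch that times the same tar invocation, assuming the tarball is already at /preloaded.tar.lz4 and tar plus lz4 are installed on the guest:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("extracted preload in %s\n", time.Since(start))
}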
	I0916 11:34:01.540565   53296 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:34:01.592049   53296 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 11:34:01.592077   53296 cache_images.go:84] Images are preloaded, skipping loading
	I0916 11:34:01.592087   53296 kubeadm.go:934] updating node { 192.168.72.174 8443 v1.31.1 crio true true} ...
	I0916 11:34:01.592197   53296 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-045794 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.174
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-045794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 11:34:01.592284   53296 ssh_runner.go:195] Run: crio config
	I0916 11:34:01.637728   53296 cni.go:84] Creating CNI manager for ""
	I0916 11:34:01.637753   53296 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0916 11:34:01.637763   53296 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 11:34:01.637783   53296 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.174 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-045794 NodeName:kubernetes-upgrade-045794 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.174"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.174 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 11:34:01.637911   53296 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.174
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-045794"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.174
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.174"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
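Editor's note: the generated kubeadm.yaml above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). One way to sanity-check such a file is to decode each document in turn; this sketch uses gopkg.in/yaml.v3 and only reports the kind of each document (illustrative, not part of minikube):

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			if err == io.EOF {
				break // end of the multi-document stream
			}
			panic(err)
		}
		fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
	}
}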
	
	I0916 11:34:01.637977   53296 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 11:34:01.648537   53296 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 11:34:01.648607   53296 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 11:34:01.658450   53296 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I0916 11:34:01.677908   53296 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 11:34:01.699388   53296 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0916 11:34:01.720509   53296 ssh_runner.go:195] Run: grep 192.168.72.174	control-plane.minikube.internal$ /etc/hosts
	I0916 11:34:01.726219   53296 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.174	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 11:34:01.743459   53296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:34:01.872979   53296 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:34:01.889874   53296 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/kubernetes-upgrade-045794 for IP: 192.168.72.174
	I0916 11:34:01.889900   53296 certs.go:194] generating shared ca certs ...
	I0916 11:34:01.889921   53296 certs.go:226] acquiring lock for ca certs: {Name:mkc5eb18b90c7d501da61b378999fd65b785fd5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:34:01.890091   53296 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key
	I0916 11:34:01.890150   53296 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key
	I0916 11:34:01.890164   53296 certs.go:256] generating profile certs ...
	I0916 11:34:01.890284   53296 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/kubernetes-upgrade-045794/client.key
	I0916 11:34:01.890349   53296 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/kubernetes-upgrade-045794/apiserver.key.435d8532
	I0916 11:34:01.890397   53296 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/kubernetes-upgrade-045794/proxy-client.key
	I0916 11:34:01.890553   53296 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem (1338 bytes)
	W0916 11:34:01.890600   53296 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203_empty.pem, impossibly tiny 0 bytes
	I0916 11:34:01.890618   53296 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 11:34:01.890652   53296 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem (1082 bytes)
	I0916 11:34:01.890688   53296 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem (1123 bytes)
	I0916 11:34:01.890722   53296 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem (1679 bytes)
	I0916 11:34:01.890776   53296 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem (1708 bytes)
	I0916 11:34:01.891380   53296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 11:34:01.934080   53296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 11:34:01.965017   53296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 11:34:01.997957   53296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 11:34:02.047804   53296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/kubernetes-upgrade-045794/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0916 11:34:02.084858   53296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/kubernetes-upgrade-045794/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 11:34:02.117515   53296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/kubernetes-upgrade-045794/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 11:34:02.143844   53296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/kubernetes-upgrade-045794/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 11:34:02.171917   53296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 11:34:02.196921   53296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem --> /usr/share/ca-certificates/11203.pem (1338 bytes)
	I0916 11:34:02.225859   53296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem --> /usr/share/ca-certificates/112032.pem (1708 bytes)
	I0916 11:34:02.255022   53296 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 11:34:02.275326   53296 ssh_runner.go:195] Run: openssl version
	I0916 11:34:02.281605   53296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112032.pem && ln -fs /usr/share/ca-certificates/112032.pem /etc/ssl/certs/112032.pem"
	I0916 11:34:02.293715   53296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112032.pem
	I0916 11:34:02.298822   53296 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112032.pem
	I0916 11:34:02.298898   53296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112032.pem
	I0916 11:34:02.305530   53296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112032.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 11:34:02.317745   53296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 11:34:02.333475   53296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:34:02.338815   53296 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:34:02.338871   53296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:34:02.345610   53296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 11:34:02.357047   53296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11203.pem && ln -fs /usr/share/ca-certificates/11203.pem /etc/ssl/certs/11203.pem"
	I0916 11:34:02.369923   53296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11203.pem
	I0916 11:34:02.374999   53296 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11203.pem
	I0916 11:34:02.375080   53296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11203.pem
	I0916 11:34:02.381076   53296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11203.pem /etc/ssl/certs/51391683.0"
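The symlink names used above (b5213941.0, 3ec20f2e.0, 51391683.0) follow OpenSSL's subject-hash lookup convention: a CA copied into /usr/share/ca-certificates becomes trusted once /etc/ssl/certs/<subject-hash>.0 points at it, where the hash is exactly what "openssl x509 -hash -noout" prints. A minimal Go sketch of that step, shelling out to openssl the same way (the helper name and error handling are ours, not minikube's):

    package main

    import (
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // installHashLink creates the /etc/ssl/certs/<subject-hash>.0 symlink that
    // OpenSSL uses to find a CA by subject, mirroring the openssl + ln -fs
    // commands in the log above.
    func installHashLink(certPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
        _ = os.Remove(link) // emulate ln -f: replace any stale link
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := installHashLink("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            panic(err)
        }
    }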
	I0916 11:34:02.393855   53296 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 11:34:02.398913   53296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0916 11:34:02.405334   53296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0916 11:34:02.412202   53296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0916 11:34:02.418743   53296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0916 11:34:02.424939   53296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0916 11:34:02.431798   53296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
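Each "-checkend 86400" call above exits non-zero if the certificate expires within the next 86400 seconds (24 hours), which is how the existing control-plane certificates are judged safe to reuse. An equivalent check in Go, as an illustration (the helper name and example path are ours):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the first certificate in a PEM file expires
    // within d; with d = 24h this matches "openssl x509 -checkend 86400".
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
        fmt.Println(soon, err)
    }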
	I0916 11:34:02.437919   53296 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-045794 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-045794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.174 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:34:02.438016   53296 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 11:34:02.438088   53296 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 11:34:02.483570   53296 cri.go:89] found id: ""
	I0916 11:34:02.483633   53296 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 11:34:02.496087   53296 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0916 11:34:02.496107   53296 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0916 11:34:02.496148   53296 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0916 11:34:02.510243   53296 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0916 11:34:02.510969   53296 kubeconfig.go:47] verify endpoint returned: get endpoint: "kubernetes-upgrade-045794" does not appear in /home/jenkins/minikube-integration/19651-3851/kubeconfig
	I0916 11:34:02.511350   53296 kubeconfig.go:62] /home/jenkins/minikube-integration/19651-3851/kubeconfig needs updating (will repair): [kubeconfig missing "kubernetes-upgrade-045794" cluster setting kubeconfig missing "kubernetes-upgrade-045794" context setting]
	I0916 11:34:02.512061   53296 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/kubeconfig: {Name:mkd6460249badead51f07eabf604da45528f6a24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
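The repair step adds the missing cluster and context entries for kubernetes-upgrade-045794 to the job's kubeconfig under the write lock acquired above. A rough client-go sketch of such a repair; which fields minikube actually writes here, and whether it switches current-context, are assumptions:

    package main

    import (
        "k8s.io/client-go/tools/clientcmd"
        clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
    )

    // repairKubeconfig inserts a cluster/context/user triple named after the
    // profile, roughly what the "needs updating (will repair)" step does.
    func repairKubeconfig(path, name, server, caFile, certFile, keyFile string) error {
        cfg, err := clientcmd.LoadFromFile(path)
        if err != nil {
            cfg = clientcmdapi.NewConfig() // assume a brand-new kubeconfig
        }
        cfg.Clusters[name] = &clientcmdapi.Cluster{Server: server, CertificateAuthority: caFile}
        cfg.AuthInfos[name] = &clientcmdapi.AuthInfo{ClientCertificate: certFile, ClientKey: keyFile}
        cfg.Contexts[name] = &clientcmdapi.Context{Cluster: name, AuthInfo: name}
        cfg.CurrentContext = name
        return clientcmd.WriteToFile(*cfg, path)
    }

    func main() {
        _ = repairKubeconfig(
            "/home/jenkins/minikube-integration/19651-3851/kubeconfig",
            "kubernetes-upgrade-045794",
            "https://192.168.72.174:8443",
            "/home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt",
            "/home/jenkins/minikube-integration/19651-3851/.minikube/profiles/kubernetes-upgrade-045794/client.crt",
            "/home/jenkins/minikube-integration/19651-3851/.minikube/profiles/kubernetes-upgrade-045794/client.key",
        )
    }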
	I0916 11:34:02.513249   53296 kapi.go:59] client config for kubernetes-upgrade-045794: &rest.Config{Host:"https://192.168.72.174:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3851/.minikube/profiles/kubernetes-upgrade-045794/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3851/.minikube/profiles/kubernetes-upgrade-045794/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 11:34:02.513967   53296 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0916 11:34:02.524583   53296 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta2
	+apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.72.174
	@@ -11,13 +11,13 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/crio/crio.sock
	+  criSocket: unix:///var/run/crio/crio.sock
	   name: "kubernetes-upgrade-045794"
	   kubeletExtraArgs:
	     node-ip: 192.168.72.174
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta2
	+apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.72.174"]
	@@ -33,14 +33,12 @@
	 certificatesDir: /var/lib/minikube/certs
	 clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	-dns:
	-  type: CoreDNS
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	     extraArgs:
	       proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.20.0
	+kubernetesVersion: v1.31.1
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	@@ -52,6 +50,7 @@
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	 cgroupDriver: cgroupfs
	+containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	 hairpinMode: hairpin-veth
	 runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	
	-- /stdout --
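	Drift detection here is just "diff -u" between the previously rendered kubeadm.yaml and the freshly rendered one: exit status 1 means they differ (above, the v1beta2 to v1beta3 API bump, the unix:// prefix on the CRI socket, the dropped dns block, and v1.20.0 to v1.31.1), so the cluster is reconfigured from the .new file. A small sketch of that decision, run locally and without sudo for brevity:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // configDrifted reports whether the rendered kubeadm config changed, keyed
    // off the diff exit code exactly as in the log: 0 = identical, 1 = drift.
    func configDrifted(oldPath, newPath string) (bool, error) {
        err := exec.Command("diff", "-u", oldPath, newPath).Run()
        if err == nil {
            return false, nil
        }
        if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 1 {
            return true, nil
        }
        return false, err // diff itself failed (missing file, etc.)
    }

    func main() {
        drifted, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
        fmt.Println(drifted, err)
    }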
	I0916 11:34:02.524604   53296 kubeadm.go:1160] stopping kube-system containers ...
	I0916 11:34:02.524618   53296 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0916 11:34:02.524673   53296 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 11:34:02.891168   53823 start.go:364] duration metric: took 15.169349636s to acquireMachinesLock for "kindnet-957670"
	I0916 11:34:02.891226   53823 start.go:93] Provisioning new machine with config: &{Name:kindnet-957670 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-957670 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 11:34:02.891366   53823 start.go:125] createHost starting for "" (driver="kvm2")
	I0916 11:34:02.640731   53522 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 11:34:02.640753   53522 machine.go:96] duration metric: took 6.264205085s to provisionDockerMachine
	I0916 11:34:02.640767   53522 start.go:293] postStartSetup for "pause-902210" (driver="kvm2")
	I0916 11:34:02.640779   53522 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 11:34:02.640799   53522 main.go:141] libmachine: (pause-902210) Calling .DriverName
	I0916 11:34:02.641105   53522 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 11:34:02.641153   53522 main.go:141] libmachine: (pause-902210) Calling .GetSSHHostname
	I0916 11:34:02.644280   53522 main.go:141] libmachine: (pause-902210) DBG | domain pause-902210 has defined MAC address 52:54:00:a8:e0:34 in network mk-pause-902210
	I0916 11:34:02.644621   53522 main.go:141] libmachine: (pause-902210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:e0:34", ip: ""} in network mk-pause-902210: {Iface:virbr3 ExpiryTime:2024-09-16 12:32:50 +0000 UTC Type:0 Mac:52:54:00:a8:e0:34 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:pause-902210 Clientid:01:52:54:00:a8:e0:34}
	I0916 11:34:02.644649   53522 main.go:141] libmachine: (pause-902210) DBG | domain pause-902210 has defined IP address 192.168.39.244 and MAC address 52:54:00:a8:e0:34 in network mk-pause-902210
	I0916 11:34:02.644803   53522 main.go:141] libmachine: (pause-902210) Calling .GetSSHPort
	I0916 11:34:02.644975   53522 main.go:141] libmachine: (pause-902210) Calling .GetSSHKeyPath
	I0916 11:34:02.645143   53522 main.go:141] libmachine: (pause-902210) Calling .GetSSHUsername
	I0916 11:34:02.645277   53522 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/pause-902210/id_rsa Username:docker}
	I0916 11:34:02.733974   53522 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 11:34:02.738675   53522 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 11:34:02.738706   53522 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3851/.minikube/addons for local assets ...
	I0916 11:34:02.738798   53522 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3851/.minikube/files for local assets ...
	I0916 11:34:02.738909   53522 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem -> 112032.pem in /etc/ssl/certs
	I0916 11:34:02.739050   53522 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 11:34:02.750023   53522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem --> /etc/ssl/certs/112032.pem (1708 bytes)
	I0916 11:34:02.776929   53522 start.go:296] duration metric: took 136.148413ms for postStartSetup
	I0916 11:34:02.776972   53522 fix.go:56] duration metric: took 6.426235801s for fixHost
	I0916 11:34:02.776996   53522 main.go:141] libmachine: (pause-902210) Calling .GetSSHHostname
	I0916 11:34:02.780380   53522 main.go:141] libmachine: (pause-902210) DBG | domain pause-902210 has defined MAC address 52:54:00:a8:e0:34 in network mk-pause-902210
	I0916 11:34:02.780722   53522 main.go:141] libmachine: (pause-902210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:e0:34", ip: ""} in network mk-pause-902210: {Iface:virbr3 ExpiryTime:2024-09-16 12:32:50 +0000 UTC Type:0 Mac:52:54:00:a8:e0:34 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:pause-902210 Clientid:01:52:54:00:a8:e0:34}
	I0916 11:34:02.780748   53522 main.go:141] libmachine: (pause-902210) DBG | domain pause-902210 has defined IP address 192.168.39.244 and MAC address 52:54:00:a8:e0:34 in network mk-pause-902210
	I0916 11:34:02.780953   53522 main.go:141] libmachine: (pause-902210) Calling .GetSSHPort
	I0916 11:34:02.781154   53522 main.go:141] libmachine: (pause-902210) Calling .GetSSHKeyPath
	I0916 11:34:02.781317   53522 main.go:141] libmachine: (pause-902210) Calling .GetSSHKeyPath
	I0916 11:34:02.781491   53522 main.go:141] libmachine: (pause-902210) Calling .GetSSHUsername
	I0916 11:34:02.781703   53522 main.go:141] libmachine: Using SSH client type: native
	I0916 11:34:02.781903   53522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0916 11:34:02.781918   53522 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 11:34:02.891019   53522 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726486442.882406317
	
	I0916 11:34:02.891040   53522 fix.go:216] guest clock: 1726486442.882406317
	I0916 11:34:02.891047   53522 fix.go:229] Guest: 2024-09-16 11:34:02.882406317 +0000 UTC Remote: 2024-09-16 11:34:02.776977227 +0000 UTC m=+29.772476484 (delta=105.42909ms)
	I0916 11:34:02.891065   53522 fix.go:200] guest clock delta is within tolerance: 105.42909ms
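	The guest clock check runs "date +%s.%N" over SSH, parses the seconds.nanoseconds pair, and compares it with the host's wall clock; the ~105ms delta above is inside tolerance, so no resync is needed. A sketch of the parsing and comparison (the 1-second tolerance used in main is an assumed value, not necessarily minikube's):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock turns "date +%s.%N" output such as "1726486442.882406317"
    // into a time.Time, padding or truncating the fraction to nanoseconds.
    func parseGuestClock(out string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
        secs, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nanos int64
        if len(parts) == 2 {
            frac := (parts[1] + "000000000")[:9]
            if nanos, err = strconv.ParseInt(frac, 10, 64); err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(secs, nanos), nil
    }

    func main() {
        guest, err := parseGuestClock("1726486442.882406317")
        if err != nil {
            panic(err)
        }
        host := time.Date(2024, 9, 16, 11, 34, 2, 776977227, time.UTC) // the "Remote" time in the log
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta <= time.Second)
    }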
	I0916 11:34:02.891070   53522 start.go:83] releasing machines lock for "pause-902210", held for 6.540367895s
	I0916 11:34:02.891095   53522 main.go:141] libmachine: (pause-902210) Calling .DriverName
	I0916 11:34:02.891388   53522 main.go:141] libmachine: (pause-902210) Calling .GetIP
	I0916 11:34:02.894227   53522 main.go:141] libmachine: (pause-902210) DBG | domain pause-902210 has defined MAC address 52:54:00:a8:e0:34 in network mk-pause-902210
	I0916 11:34:02.894612   53522 main.go:141] libmachine: (pause-902210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:e0:34", ip: ""} in network mk-pause-902210: {Iface:virbr3 ExpiryTime:2024-09-16 12:32:50 +0000 UTC Type:0 Mac:52:54:00:a8:e0:34 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:pause-902210 Clientid:01:52:54:00:a8:e0:34}
	I0916 11:34:02.894638   53522 main.go:141] libmachine: (pause-902210) DBG | domain pause-902210 has defined IP address 192.168.39.244 and MAC address 52:54:00:a8:e0:34 in network mk-pause-902210
	I0916 11:34:02.894786   53522 main.go:141] libmachine: (pause-902210) Calling .DriverName
	I0916 11:34:02.895303   53522 main.go:141] libmachine: (pause-902210) Calling .DriverName
	I0916 11:34:02.895469   53522 main.go:141] libmachine: (pause-902210) Calling .DriverName
	I0916 11:34:02.895562   53522 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 11:34:02.895607   53522 main.go:141] libmachine: (pause-902210) Calling .GetSSHHostname
	I0916 11:34:02.895691   53522 ssh_runner.go:195] Run: cat /version.json
	I0916 11:34:02.895717   53522 main.go:141] libmachine: (pause-902210) Calling .GetSSHHostname
	I0916 11:34:02.898337   53522 main.go:141] libmachine: (pause-902210) DBG | domain pause-902210 has defined MAC address 52:54:00:a8:e0:34 in network mk-pause-902210
	I0916 11:34:02.898491   53522 main.go:141] libmachine: (pause-902210) DBG | domain pause-902210 has defined MAC address 52:54:00:a8:e0:34 in network mk-pause-902210
	I0916 11:34:02.898744   53522 main.go:141] libmachine: (pause-902210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:e0:34", ip: ""} in network mk-pause-902210: {Iface:virbr3 ExpiryTime:2024-09-16 12:32:50 +0000 UTC Type:0 Mac:52:54:00:a8:e0:34 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:pause-902210 Clientid:01:52:54:00:a8:e0:34}
	I0916 11:34:02.898778   53522 main.go:141] libmachine: (pause-902210) DBG | domain pause-902210 has defined IP address 192.168.39.244 and MAC address 52:54:00:a8:e0:34 in network mk-pause-902210
	I0916 11:34:02.898889   53522 main.go:141] libmachine: (pause-902210) Calling .GetSSHPort
	I0916 11:34:02.899000   53522 main.go:141] libmachine: (pause-902210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:e0:34", ip: ""} in network mk-pause-902210: {Iface:virbr3 ExpiryTime:2024-09-16 12:32:50 +0000 UTC Type:0 Mac:52:54:00:a8:e0:34 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:pause-902210 Clientid:01:52:54:00:a8:e0:34}
	I0916 11:34:02.899023   53522 main.go:141] libmachine: (pause-902210) DBG | domain pause-902210 has defined IP address 192.168.39.244 and MAC address 52:54:00:a8:e0:34 in network mk-pause-902210
	I0916 11:34:02.899076   53522 main.go:141] libmachine: (pause-902210) Calling .GetSSHKeyPath
	I0916 11:34:02.899496   53522 main.go:141] libmachine: (pause-902210) Calling .GetSSHPort
	I0916 11:34:02.901012   53522 main.go:141] libmachine: (pause-902210) Calling .GetSSHUsername
	I0916 11:34:02.901036   53522 main.go:141] libmachine: (pause-902210) Calling .GetSSHKeyPath
	I0916 11:34:02.901190   53522 main.go:141] libmachine: (pause-902210) Calling .GetSSHUsername
	I0916 11:34:02.901253   53522 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/pause-902210/id_rsa Username:docker}
	I0916 11:34:02.901334   53522 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/pause-902210/id_rsa Username:docker}
	I0916 11:34:03.004685   53522 ssh_runner.go:195] Run: systemctl --version
	I0916 11:34:03.012037   53522 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 11:34:02.819534   53156 pod_ready.go:103] pod "coredns-7c65d6cfc9-s8lq9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:34:05.319745   53156 pod_ready.go:103] pod "coredns-7c65d6cfc9-s8lq9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:34:02.564586   53296 cri.go:89] found id: ""
	I0916 11:34:02.564664   53296 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0916 11:34:02.585702   53296 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 11:34:02.598292   53296 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 11:34:02.598315   53296 kubeadm.go:157] found existing configuration files:
	
	I0916 11:34:02.598363   53296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 11:34:02.608239   53296 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 11:34:02.608300   53296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 11:34:02.618774   53296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 11:34:02.631942   53296 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 11:34:02.632017   53296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 11:34:02.642640   53296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 11:34:02.653490   53296 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 11:34:02.653562   53296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 11:34:02.664183   53296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 11:34:02.673881   53296 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 11:34:02.673955   53296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 11:34:02.683846   53296 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 11:34:02.694317   53296 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0916 11:34:02.816254   53296 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0916 11:34:04.091042   53296 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.274748221s)
	I0916 11:34:04.091082   53296 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0916 11:34:04.313339   53296 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0916 11:34:04.389776   53296 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0916 11:34:04.499164   53296 api_server.go:52] waiting for apiserver process to appear ...
	I0916 11:34:04.499269   53296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 11:34:04.999409   53296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 11:34:05.500315   53296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 11:34:06.000117   53296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 11:34:06.017184   53296 api_server.go:72] duration metric: took 1.518019032s to wait for apiserver process to appear ...
	I0916 11:34:06.017210   53296 api_server.go:88] waiting for apiserver healthz status ...
	I0916 11:34:06.017233   53296 api_server.go:253] Checking apiserver healthz at https://192.168.72.174:8443/healthz ...
	I0916 11:34:02.949662   53823 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0916 11:34:02.949943   53823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 11:34:02.949999   53823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 11:34:02.966613   53823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36229
	I0916 11:34:02.967101   53823 main.go:141] libmachine: () Calling .GetVersion
	I0916 11:34:02.967649   53823 main.go:141] libmachine: Using API Version  1
	I0916 11:34:02.967671   53823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 11:34:02.968010   53823 main.go:141] libmachine: () Calling .GetMachineName
	I0916 11:34:02.968167   53823 main.go:141] libmachine: (kindnet-957670) Calling .GetMachineName
	I0916 11:34:02.968317   53823 main.go:141] libmachine: (kindnet-957670) Calling .DriverName
	I0916 11:34:02.968466   53823 start.go:159] libmachine.API.Create for "kindnet-957670" (driver="kvm2")
	I0916 11:34:02.968491   53823 client.go:168] LocalClient.Create starting
	I0916 11:34:02.968518   53823 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem
	I0916 11:34:02.968559   53823 main.go:141] libmachine: Decoding PEM data...
	I0916 11:34:02.968573   53823 main.go:141] libmachine: Parsing certificate...
	I0916 11:34:02.968618   53823 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem
	I0916 11:34:02.968636   53823 main.go:141] libmachine: Decoding PEM data...
	I0916 11:34:02.968653   53823 main.go:141] libmachine: Parsing certificate...
	I0916 11:34:02.968670   53823 main.go:141] libmachine: Running pre-create checks...
	I0916 11:34:02.968678   53823 main.go:141] libmachine: (kindnet-957670) Calling .PreCreateCheck
	I0916 11:34:02.969036   53823 main.go:141] libmachine: (kindnet-957670) Calling .GetConfigRaw
	I0916 11:34:02.969409   53823 main.go:141] libmachine: Creating machine...
	I0916 11:34:02.969423   53823 main.go:141] libmachine: (kindnet-957670) Calling .Create
	I0916 11:34:02.969561   53823 main.go:141] libmachine: (kindnet-957670) Creating KVM machine...
	I0916 11:34:02.970763   53823 main.go:141] libmachine: (kindnet-957670) DBG | found existing default KVM network
	I0916 11:34:02.972259   53823 main.go:141] libmachine: (kindnet-957670) DBG | I0916 11:34:02.972066   53953 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:29:dd:19} reservation:<nil>}
	I0916 11:34:02.973654   53823 main.go:141] libmachine: (kindnet-957670) DBG | I0916 11:34:02.973561   53953 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002d4030}
	I0916 11:34:02.973693   53823 main.go:141] libmachine: (kindnet-957670) DBG | created network xml: 
	I0916 11:34:02.973709   53823 main.go:141] libmachine: (kindnet-957670) DBG | <network>
	I0916 11:34:02.973721   53823 main.go:141] libmachine: (kindnet-957670) DBG |   <name>mk-kindnet-957670</name>
	I0916 11:34:02.973727   53823 main.go:141] libmachine: (kindnet-957670) DBG |   <dns enable='no'/>
	I0916 11:34:02.973738   53823 main.go:141] libmachine: (kindnet-957670) DBG |   
	I0916 11:34:02.973754   53823 main.go:141] libmachine: (kindnet-957670) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0916 11:34:02.973777   53823 main.go:141] libmachine: (kindnet-957670) DBG |     <dhcp>
	I0916 11:34:02.973787   53823 main.go:141] libmachine: (kindnet-957670) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0916 11:34:02.973798   53823 main.go:141] libmachine: (kindnet-957670) DBG |     </dhcp>
	I0916 11:34:02.973804   53823 main.go:141] libmachine: (kindnet-957670) DBG |   </ip>
	I0916 11:34:02.973814   53823 main.go:141] libmachine: (kindnet-957670) DBG |   
	I0916 11:34:02.973820   53823 main.go:141] libmachine: (kindnet-957670) DBG | </network>
	I0916 11:34:02.973835   53823 main.go:141] libmachine: (kindnet-957670) DBG | 
	I0916 11:34:03.115259   53823 main.go:141] libmachine: (kindnet-957670) DBG | trying to create private KVM network mk-kindnet-957670 192.168.50.0/24...
	I0916 11:34:03.196523   53823 main.go:141] libmachine: (kindnet-957670) DBG | private KVM network mk-kindnet-957670 192.168.50.0/24 created
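	Before defining the libvirt network, network.go walks candidate private /24 subnets on the host, skips 192.168.39.0/24 because the pause-902210 bridge already holds it, and settles on 192.168.50.0/24. A toy version of that selection; the candidate list and its spacing are an assumption for illustration, and the real logic also inspects host interfaces and reservations:

    package main

    import (
        "fmt"
        "net"
    )

    // firstFreeSubnet returns the first candidate /24 that does not overlap any
    // already-used network, mimicking the "skipping subnet ... that is taken" /
    // "using free private subnet" pair in the log.
    func firstFreeSubnet(taken []*net.IPNet) *net.IPNet {
        for third := 39; third <= 254; third += 11 { // assumed candidate spacing
            _, cand, _ := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", third))
            collides := false
            for _, t := range taken {
                if t.Contains(cand.IP) || cand.Contains(t.IP) {
                    collides = true
                    break
                }
            }
            if !collides {
                return cand
            }
        }
        return nil
    }

    func main() {
        _, used, _ := net.ParseCIDR("192.168.39.0/24")
        fmt.Println(firstFreeSubnet([]*net.IPNet{used})) // prints 192.168.50.0/24
    }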
	I0916 11:34:03.196575   53823 main.go:141] libmachine: (kindnet-957670) DBG | I0916 11:34:03.196503   53953 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 11:34:03.196596   53823 main.go:141] libmachine: (kindnet-957670) Setting up store path in /home/jenkins/minikube-integration/19651-3851/.minikube/machines/kindnet-957670 ...
	I0916 11:34:03.196619   53823 main.go:141] libmachine: (kindnet-957670) Building disk image from file:///home/jenkins/minikube-integration/19651-3851/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso
	I0916 11:34:03.196654   53823 main.go:141] libmachine: (kindnet-957670) Downloading /home/jenkins/minikube-integration/19651-3851/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19651-3851/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso...
	I0916 11:34:03.464485   53823 main.go:141] libmachine: (kindnet-957670) DBG | I0916 11:34:03.464345   53953 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/kindnet-957670/id_rsa...
	I0916 11:34:03.613630   53823 main.go:141] libmachine: (kindnet-957670) DBG | I0916 11:34:03.613496   53953 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/kindnet-957670/kindnet-957670.rawdisk...
	I0916 11:34:03.613673   53823 main.go:141] libmachine: (kindnet-957670) DBG | Writing magic tar header
	I0916 11:34:03.613693   53823 main.go:141] libmachine: (kindnet-957670) DBG | Writing SSH key tar header
	I0916 11:34:03.613706   53823 main.go:141] libmachine: (kindnet-957670) DBG | I0916 11:34:03.613643   53953 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19651-3851/.minikube/machines/kindnet-957670 ...
	I0916 11:34:03.613826   53823 main.go:141] libmachine: (kindnet-957670) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851/.minikube/machines/kindnet-957670
	I0916 11:34:03.613860   53823 main.go:141] libmachine: (kindnet-957670) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851/.minikube/machines
	I0916 11:34:03.613876   53823 main.go:141] libmachine: (kindnet-957670) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851/.minikube/machines/kindnet-957670 (perms=drwx------)
	I0916 11:34:03.613891   53823 main.go:141] libmachine: (kindnet-957670) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 11:34:03.613907   53823 main.go:141] libmachine: (kindnet-957670) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3851
	I0916 11:34:03.613918   53823 main.go:141] libmachine: (kindnet-957670) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0916 11:34:03.613929   53823 main.go:141] libmachine: (kindnet-957670) DBG | Checking permissions on dir: /home/jenkins
	I0916 11:34:03.613939   53823 main.go:141] libmachine: (kindnet-957670) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851/.minikube/machines (perms=drwxr-xr-x)
	I0916 11:34:03.613953   53823 main.go:141] libmachine: (kindnet-957670) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851/.minikube (perms=drwxr-xr-x)
	I0916 11:34:03.613961   53823 main.go:141] libmachine: (kindnet-957670) DBG | Checking permissions on dir: /home
	I0916 11:34:03.613979   53823 main.go:141] libmachine: (kindnet-957670) DBG | Skipping /home - not owner
	I0916 11:34:03.613996   53823 main.go:141] libmachine: (kindnet-957670) Setting executable bit set on /home/jenkins/minikube-integration/19651-3851 (perms=drwxrwxr-x)
	I0916 11:34:03.614009   53823 main.go:141] libmachine: (kindnet-957670) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0916 11:34:03.614023   53823 main.go:141] libmachine: (kindnet-957670) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0916 11:34:03.614033   53823 main.go:141] libmachine: (kindnet-957670) Creating domain...
	I0916 11:34:03.615257   53823 main.go:141] libmachine: (kindnet-957670) define libvirt domain using xml: 
	I0916 11:34:03.615281   53823 main.go:141] libmachine: (kindnet-957670) <domain type='kvm'>
	I0916 11:34:03.615288   53823 main.go:141] libmachine: (kindnet-957670)   <name>kindnet-957670</name>
	I0916 11:34:03.615295   53823 main.go:141] libmachine: (kindnet-957670)   <memory unit='MiB'>3072</memory>
	I0916 11:34:03.615307   53823 main.go:141] libmachine: (kindnet-957670)   <vcpu>2</vcpu>
	I0916 11:34:03.615320   53823 main.go:141] libmachine: (kindnet-957670)   <features>
	I0916 11:34:03.615328   53823 main.go:141] libmachine: (kindnet-957670)     <acpi/>
	I0916 11:34:03.615338   53823 main.go:141] libmachine: (kindnet-957670)     <apic/>
	I0916 11:34:03.615347   53823 main.go:141] libmachine: (kindnet-957670)     <pae/>
	I0916 11:34:03.615355   53823 main.go:141] libmachine: (kindnet-957670)     
	I0916 11:34:03.615363   53823 main.go:141] libmachine: (kindnet-957670)   </features>
	I0916 11:34:03.615386   53823 main.go:141] libmachine: (kindnet-957670)   <cpu mode='host-passthrough'>
	I0916 11:34:03.615398   53823 main.go:141] libmachine: (kindnet-957670)   
	I0916 11:34:03.615406   53823 main.go:141] libmachine: (kindnet-957670)   </cpu>
	I0916 11:34:03.615412   53823 main.go:141] libmachine: (kindnet-957670)   <os>
	I0916 11:34:03.615422   53823 main.go:141] libmachine: (kindnet-957670)     <type>hvm</type>
	I0916 11:34:03.615431   53823 main.go:141] libmachine: (kindnet-957670)     <boot dev='cdrom'/>
	I0916 11:34:03.615440   53823 main.go:141] libmachine: (kindnet-957670)     <boot dev='hd'/>
	I0916 11:34:03.615448   53823 main.go:141] libmachine: (kindnet-957670)     <bootmenu enable='no'/>
	I0916 11:34:03.615457   53823 main.go:141] libmachine: (kindnet-957670)   </os>
	I0916 11:34:03.615471   53823 main.go:141] libmachine: (kindnet-957670)   <devices>
	I0916 11:34:03.615486   53823 main.go:141] libmachine: (kindnet-957670)     <disk type='file' device='cdrom'>
	I0916 11:34:03.615502   53823 main.go:141] libmachine: (kindnet-957670)       <source file='/home/jenkins/minikube-integration/19651-3851/.minikube/machines/kindnet-957670/boot2docker.iso'/>
	I0916 11:34:03.615525   53823 main.go:141] libmachine: (kindnet-957670)       <target dev='hdc' bus='scsi'/>
	I0916 11:34:03.615541   53823 main.go:141] libmachine: (kindnet-957670)       <readonly/>
	I0916 11:34:03.615550   53823 main.go:141] libmachine: (kindnet-957670)     </disk>
	I0916 11:34:03.615559   53823 main.go:141] libmachine: (kindnet-957670)     <disk type='file' device='disk'>
	I0916 11:34:03.615568   53823 main.go:141] libmachine: (kindnet-957670)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0916 11:34:03.615581   53823 main.go:141] libmachine: (kindnet-957670)       <source file='/home/jenkins/minikube-integration/19651-3851/.minikube/machines/kindnet-957670/kindnet-957670.rawdisk'/>
	I0916 11:34:03.615598   53823 main.go:141] libmachine: (kindnet-957670)       <target dev='hda' bus='virtio'/>
	I0916 11:34:03.615618   53823 main.go:141] libmachine: (kindnet-957670)     </disk>
	I0916 11:34:03.615635   53823 main.go:141] libmachine: (kindnet-957670)     <interface type='network'>
	I0916 11:34:03.615647   53823 main.go:141] libmachine: (kindnet-957670)       <source network='mk-kindnet-957670'/>
	I0916 11:34:03.615656   53823 main.go:141] libmachine: (kindnet-957670)       <model type='virtio'/>
	I0916 11:34:03.615667   53823 main.go:141] libmachine: (kindnet-957670)     </interface>
	I0916 11:34:03.615673   53823 main.go:141] libmachine: (kindnet-957670)     <interface type='network'>
	I0916 11:34:03.615681   53823 main.go:141] libmachine: (kindnet-957670)       <source network='default'/>
	I0916 11:34:03.615691   53823 main.go:141] libmachine: (kindnet-957670)       <model type='virtio'/>
	I0916 11:34:03.615707   53823 main.go:141] libmachine: (kindnet-957670)     </interface>
	I0916 11:34:03.615721   53823 main.go:141] libmachine: (kindnet-957670)     <serial type='pty'>
	I0916 11:34:03.615730   53823 main.go:141] libmachine: (kindnet-957670)       <target port='0'/>
	I0916 11:34:03.615740   53823 main.go:141] libmachine: (kindnet-957670)     </serial>
	I0916 11:34:03.615748   53823 main.go:141] libmachine: (kindnet-957670)     <console type='pty'>
	I0916 11:34:03.615758   53823 main.go:141] libmachine: (kindnet-957670)       <target type='serial' port='0'/>
	I0916 11:34:03.615766   53823 main.go:141] libmachine: (kindnet-957670)     </console>
	I0916 11:34:03.615776   53823 main.go:141] libmachine: (kindnet-957670)     <rng model='virtio'>
	I0916 11:34:03.615786   53823 main.go:141] libmachine: (kindnet-957670)       <backend model='random'>/dev/random</backend>
	I0916 11:34:03.615794   53823 main.go:141] libmachine: (kindnet-957670)     </rng>
	I0916 11:34:03.615802   53823 main.go:141] libmachine: (kindnet-957670)     
	I0916 11:34:03.615810   53823 main.go:141] libmachine: (kindnet-957670)     
	I0916 11:34:03.615818   53823 main.go:141] libmachine: (kindnet-957670)   </devices>
	I0916 11:34:03.615826   53823 main.go:141] libmachine: (kindnet-957670) </domain>
	I0916 11:34:03.615844   53823 main.go:141] libmachine: (kindnet-957670) 
	I0916 11:34:03.767070   53823 main.go:141] libmachine: (kindnet-957670) DBG | domain kindnet-957670 has defined MAC address 52:54:00:17:29:c7 in network default
	I0916 11:34:03.767914   53823 main.go:141] libmachine: (kindnet-957670) DBG | domain kindnet-957670 has defined MAC address 52:54:00:50:76:35 in network mk-kindnet-957670
	I0916 11:34:03.767962   53823 main.go:141] libmachine: (kindnet-957670) Ensuring networks are active...
	I0916 11:34:03.768910   53823 main.go:141] libmachine: (kindnet-957670) Ensuring network default is active
	I0916 11:34:03.769298   53823 main.go:141] libmachine: (kindnet-957670) Ensuring network mk-kindnet-957670 is active
	I0916 11:34:03.769941   53823 main.go:141] libmachine: (kindnet-957670) Getting domain xml...
	I0916 11:34:03.770893   53823 main.go:141] libmachine: (kindnet-957670) Creating domain...
	I0916 11:34:05.510811   53823 main.go:141] libmachine: (kindnet-957670) Waiting to get IP...
	I0916 11:34:05.511659   53823 main.go:141] libmachine: (kindnet-957670) DBG | domain kindnet-957670 has defined MAC address 52:54:00:50:76:35 in network mk-kindnet-957670
	I0916 11:34:05.512196   53823 main.go:141] libmachine: (kindnet-957670) DBG | unable to find current IP address of domain kindnet-957670 in network mk-kindnet-957670
	I0916 11:34:05.512239   53823 main.go:141] libmachine: (kindnet-957670) DBG | I0916 11:34:05.512190   53953 retry.go:31] will retry after 310.312479ms: waiting for machine to come up
	I0916 11:34:05.823827   53823 main.go:141] libmachine: (kindnet-957670) DBG | domain kindnet-957670 has defined MAC address 52:54:00:50:76:35 in network mk-kindnet-957670
	I0916 11:34:05.824442   53823 main.go:141] libmachine: (kindnet-957670) DBG | unable to find current IP address of domain kindnet-957670 in network mk-kindnet-957670
	I0916 11:34:05.824473   53823 main.go:141] libmachine: (kindnet-957670) DBG | I0916 11:34:05.824390   53953 retry.go:31] will retry after 277.151333ms: waiting for machine to come up
	I0916 11:34:06.103003   53823 main.go:141] libmachine: (kindnet-957670) DBG | domain kindnet-957670 has defined MAC address 52:54:00:50:76:35 in network mk-kindnet-957670
	I0916 11:34:06.103518   53823 main.go:141] libmachine: (kindnet-957670) DBG | unable to find current IP address of domain kindnet-957670 in network mk-kindnet-957670
	I0916 11:34:06.103556   53823 main.go:141] libmachine: (kindnet-957670) DBG | I0916 11:34:06.103478   53953 retry.go:31] will retry after 345.937141ms: waiting for machine to come up
	I0916 11:34:06.451127   53823 main.go:141] libmachine: (kindnet-957670) DBG | domain kindnet-957670 has defined MAC address 52:54:00:50:76:35 in network mk-kindnet-957670
	I0916 11:34:06.451615   53823 main.go:141] libmachine: (kindnet-957670) DBG | unable to find current IP address of domain kindnet-957670 in network mk-kindnet-957670
	I0916 11:34:06.451644   53823 main.go:141] libmachine: (kindnet-957670) DBG | I0916 11:34:06.451563   53953 retry.go:31] will retry after 518.956782ms: waiting for machine to come up
	I0916 11:34:06.972312   53823 main.go:141] libmachine: (kindnet-957670) DBG | domain kindnet-957670 has defined MAC address 52:54:00:50:76:35 in network mk-kindnet-957670
	I0916 11:34:06.972814   53823 main.go:141] libmachine: (kindnet-957670) DBG | unable to find current IP address of domain kindnet-957670 in network mk-kindnet-957670
	I0916 11:34:06.972838   53823 main.go:141] libmachine: (kindnet-957670) DBG | I0916 11:34:06.972756   53953 retry.go:31] will retry after 507.712377ms: waiting for machine to come up
	I0916 11:34:07.482541   53823 main.go:141] libmachine: (kindnet-957670) DBG | domain kindnet-957670 has defined MAC address 52:54:00:50:76:35 in network mk-kindnet-957670
	I0916 11:34:07.483033   53823 main.go:141] libmachine: (kindnet-957670) DBG | unable to find current IP address of domain kindnet-957670 in network mk-kindnet-957670
	I0916 11:34:07.483060   53823 main.go:141] libmachine: (kindnet-957670) DBG | I0916 11:34:07.482979   53953 retry.go:31] will retry after 853.21235ms: waiting for machine to come up
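	Until the new domain obtains a DHCP lease there is no address to SSH to, so the driver keeps polling the lease table and backs off with a slightly randomized, growing delay (310ms, 277ms, 345ms, 518ms, ... above). A generic sketch of that wait loop; the jitter and growth factors are assumptions, since minikube's retry.go has its own schedule:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // waitForIP retries lookup with a randomized, growing backoff until the
    // machine reports an address or the attempt budget is exhausted.
    func waitForIP(lookup func() (string, error), attempts int) (string, error) {
        base := 300 * time.Millisecond
        for i := 0; i < attempts; i++ {
            if ip, err := lookup(); err == nil {
                return ip, nil
            }
            sleep := base + time.Duration(rand.Int63n(int64(base))) // jitter
            fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
            time.Sleep(sleep)
            base = base * 3 / 2 // widen the window each round
        }
        return "", errors.New("machine never reported an IP")
    }

    func main() {
        calls := 0
        ip, err := waitForIP(func() (string, error) {
            calls++
            if calls < 3 {
                return "", errors.New("no DHCP lease yet")
            }
            return "192.168.50.2", nil
        }, 10)
        fmt.Println(ip, err)
    }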
	I0916 11:34:03.170668   53522 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0916 11:34:03.177437   53522 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 11:34:03.177508   53522 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 11:34:03.191592   53522 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0916 11:34:03.191620   53522 start.go:495] detecting cgroup driver to use...
	I0916 11:34:03.191705   53522 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 11:34:03.217514   53522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 11:34:03.235072   53522 docker.go:217] disabling cri-docker service (if available) ...
	I0916 11:34:03.235138   53522 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 11:34:03.253911   53522 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 11:34:03.271306   53522 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 11:34:03.445287   53522 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 11:34:03.579313   53522 docker.go:233] disabling docker service ...
	I0916 11:34:03.579392   53522 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 11:34:03.599328   53522 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 11:34:03.618632   53522 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 11:34:03.758406   53522 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 11:34:03.920710   53522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 11:34:03.936256   53522 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 11:34:03.960463   53522 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 11:34:03.960531   53522 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:34:03.973218   53522 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 11:34:03.973290   53522 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:34:03.985590   53522 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:34:03.998885   53522 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:34:04.010980   53522 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 11:34:04.023875   53522 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:34:04.034809   53522 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:34:04.047102   53522 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 11:34:04.058270   53522 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 11:34:04.068845   53522 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 11:34:04.079332   53522 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:34:04.220178   53522 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 11:34:09.163291   53522 ssh_runner.go:235] Completed: sudo systemctl restart crio: (4.943074281s)
	I0916 11:34:09.163320   53522 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 11:34:09.163375   53522 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 11:34:09.170764   53522 start.go:563] Will wait 60s for crictl version
	I0916 11:34:09.170833   53522 ssh_runner.go:195] Run: which crictl
	I0916 11:34:09.176101   53522 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 11:34:09.228673   53522 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0916 11:34:09.228761   53522 ssh_runner.go:195] Run: crio --version
	I0916 11:34:09.271724   53522 ssh_runner.go:195] Run: crio --version
	I0916 11:34:09.309531   53522 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
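	Reconfiguring CRI-O above is a series of in-place edits to /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup_manager, conmon_cgroup, default_sysctls) plus writing /etc/crictl.yaml, followed by a crio restart. The cgroup-driver edit, done with sed in the log, is equivalent to this small rewrite (sketch only; it needs the same root privileges and writes 0644 permissions rather than preserving the originals):

    package main

    import (
        "os"
        "regexp"
    )

    // setCrioCgroupManager rewrites the cgroup_manager line, matching the
    // sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' call above.
    func setCrioCgroupManager(path, driver string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        re := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
        out := re.ReplaceAll(data, []byte(`cgroup_manager = "`+driver+`"`))
        return os.WriteFile(path, out, 0o644)
    }

    func main() {
        if err := setCrioCgroupManager("/etc/crio/crio.conf.d/02-crio.conf", "cgroupfs"); err != nil {
            panic(err)
        }
    }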
	I0916 11:34:08.053711   53296 api_server.go:279] https://192.168.72.174:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0916 11:34:08.053741   53296 api_server.go:103] status: https://192.168.72.174:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0916 11:34:08.053756   53296 api_server.go:253] Checking apiserver healthz at https://192.168.72.174:8443/healthz ...
	I0916 11:34:08.097605   53296 api_server.go:279] https://192.168.72.174:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0916 11:34:08.097639   53296 api_server.go:103] status: https://192.168.72.174:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0916 11:34:08.517953   53296 api_server.go:253] Checking apiserver healthz at https://192.168.72.174:8443/healthz ...
	I0916 11:34:08.522506   53296 api_server.go:279] https://192.168.72.174:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0916 11:34:08.522540   53296 api_server.go:103] status: https://192.168.72.174:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0916 11:34:09.018142   53296 api_server.go:253] Checking apiserver healthz at https://192.168.72.174:8443/healthz ...
	I0916 11:34:09.028515   53296 api_server.go:279] https://192.168.72.174:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0916 11:34:09.028546   53296 api_server.go:103] status: https://192.168.72.174:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0916 11:34:09.518124   53296 api_server.go:253] Checking apiserver healthz at https://192.168.72.174:8443/healthz ...
	I0916 11:34:09.525042   53296 api_server.go:279] https://192.168.72.174:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0916 11:34:09.525074   53296 api_server.go:103] status: https://192.168.72.174:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0916 11:34:10.018281   53296 api_server.go:253] Checking apiserver healthz at https://192.168.72.174:8443/healthz ...
	I0916 11:34:10.026215   53296 api_server.go:279] https://192.168.72.174:8443/healthz returned 200:
	ok
	I0916 11:34:10.032663   53296 api_server.go:141] control plane version: v1.31.1
	I0916 11:34:10.032696   53296 api_server.go:131] duration metric: took 4.015479338s to wait for apiserver health ...
	I0916 11:34:10.032707   53296 cni.go:84] Creating CNI manager for ""
	I0916 11:34:10.032715   53296 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0916 11:34:10.034582   53296 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
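The healthz wait above repeatedly hits https://192.168.72.174:8443/healthz, treats any non-200 response (here the 500s caused by the still-pending poststarthook/rbac/bootstrap-roles hook) as retryable, and stops as soon as a plain 200 "ok" comes back, after which the control-plane version is read. A minimal Go sketch of that wait loop, assuming the endpoint URL from the log and an insecure TLS client as stand-ins for minikube's real configuration:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
// Assumption: certificate verification is skipped only to keep the sketch
// self-contained; a real client would trust the cluster CA instead.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reported "ok"
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("apiserver never became healthy within %s", timeout)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForHealthz("https://192.168.72.174:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}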
	I0916 11:34:07.818877   53156 pod_ready.go:103] pod "coredns-7c65d6cfc9-s8lq9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:34:09.820564   53156 pod_ready.go:103] pod "coredns-7c65d6cfc9-s8lq9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:34:10.035984   53296 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0916 11:34:10.048311   53296 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0916 11:34:10.084030   53296 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 11:34:10.084127   53296 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0916 11:34:10.084151   53296 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0916 11:34:10.094522   53296 system_pods.go:59] 1 kube-system pods found
	I0916 11:34:10.094581   53296 system_pods.go:61] "etcd-kubernetes-upgrade-045794" [ebe674f3-fc49-44e8-8f3e-7787238d965f] Pending
	I0916 11:34:10.094601   53296 retry.go:31] will retry after 292.877992ms: only 1 pod(s) have shown up
	I0916 11:34:10.391670   53296 system_pods.go:59] 1 kube-system pods found
	I0916 11:34:10.391699   53296 system_pods.go:61] "etcd-kubernetes-upgrade-045794" [ebe674f3-fc49-44e8-8f3e-7787238d965f] Pending
	I0916 11:34:10.391714   53296 retry.go:31] will retry after 376.350811ms: only 1 pod(s) have shown up
	I0916 11:34:10.772640   53296 system_pods.go:59] 1 kube-system pods found
	I0916 11:34:10.772678   53296 system_pods.go:61] "etcd-kubernetes-upgrade-045794" [ebe674f3-fc49-44e8-8f3e-7787238d965f] Pending
	I0916 11:34:10.772694   53296 retry.go:31] will retry after 326.160304ms: only 1 pod(s) have shown up
	I0916 11:34:11.102833   53296 system_pods.go:59] 1 kube-system pods found
	I0916 11:34:11.102868   53296 system_pods.go:61] "etcd-kubernetes-upgrade-045794" [ebe674f3-fc49-44e8-8f3e-7787238d965f] Pending
	I0916 11:34:11.102883   53296 retry.go:31] will retry after 582.012337ms: only 1 pod(s) have shown up
	I0916 11:34:11.690116   53296 system_pods.go:59] 1 kube-system pods found
	I0916 11:34:11.690153   53296 system_pods.go:61] "etcd-kubernetes-upgrade-045794" [ebe674f3-fc49-44e8-8f3e-7787238d965f] Pending
	I0916 11:34:11.690183   53296 retry.go:31] will retry after 582.656629ms: only 1 pod(s) have shown up
	I0916 11:34:12.277161   53296 system_pods.go:59] 1 kube-system pods found
	I0916 11:34:12.277195   53296 system_pods.go:61] "etcd-kubernetes-upgrade-045794" [ebe674f3-fc49-44e8-8f3e-7787238d965f] Pending
	I0916 11:34:12.277212   53296 retry.go:31] will retry after 739.94128ms: only 1 pod(s) have shown up
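The repeated "only 1 pod(s) have shown up" lines are a poll-with-jittered-backoff loop: list the kube-system pods, and if fewer than expected exist yet, sleep a randomized interval and try again. A rough Go sketch of that shape, where listKubeSystemPods and the expected count are placeholders rather than minikube's actual retry package:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// listKubeSystemPods is a placeholder for an API call that returns the pod
// names currently present in the kube-system namespace.
func listKubeSystemPods() []string {
	return []string{"etcd-kubernetes-upgrade-045794"} // stand-in result
}

// waitForPods polls until at least want pods exist or attempts run out,
// sleeping a jittered interval between tries, like the retry lines above.
func waitForPods(want, attempts int) error {
	for i := 0; i < attempts; i++ {
		pods := listKubeSystemPods()
		if len(pods) >= want {
			return nil
		}
		delay := 300*time.Millisecond + time.Duration(rand.Intn(500))*time.Millisecond
		fmt.Printf("will retry after %v: only %d pod(s) have shown up\n", delay, len(pods))
		time.Sleep(delay)
	}
	return fmt.Errorf("expected %d kube-system pods, gave up", want)
}

func main() {
	if err := waitForPods(4, 10); err != nil {
		fmt.Println(err)
	}
}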
	I0916 11:34:08.338245   53823 main.go:141] libmachine: (kindnet-957670) DBG | domain kindnet-957670 has defined MAC address 52:54:00:50:76:35 in network mk-kindnet-957670
	I0916 11:34:08.338868   53823 main.go:141] libmachine: (kindnet-957670) DBG | unable to find current IP address of domain kindnet-957670 in network mk-kindnet-957670
	I0916 11:34:08.338915   53823 main.go:141] libmachine: (kindnet-957670) DBG | I0916 11:34:08.338744   53953 retry.go:31] will retry after 1.095311266s: waiting for machine to come up
	I0916 11:34:09.435833   53823 main.go:141] libmachine: (kindnet-957670) DBG | domain kindnet-957670 has defined MAC address 52:54:00:50:76:35 in network mk-kindnet-957670
	I0916 11:34:09.436379   53823 main.go:141] libmachine: (kindnet-957670) DBG | unable to find current IP address of domain kindnet-957670 in network mk-kindnet-957670
	I0916 11:34:09.436399   53823 main.go:141] libmachine: (kindnet-957670) DBG | I0916 11:34:09.436334   53953 retry.go:31] will retry after 1.108383264s: waiting for machine to come up
	I0916 11:34:10.546591   53823 main.go:141] libmachine: (kindnet-957670) DBG | domain kindnet-957670 has defined MAC address 52:54:00:50:76:35 in network mk-kindnet-957670
	I0916 11:34:10.547080   53823 main.go:141] libmachine: (kindnet-957670) DBG | unable to find current IP address of domain kindnet-957670 in network mk-kindnet-957670
	I0916 11:34:10.547109   53823 main.go:141] libmachine: (kindnet-957670) DBG | I0916 11:34:10.547018   53953 retry.go:31] will retry after 1.634753204s: waiting for machine to come up
	I0916 11:34:12.183178   53823 main.go:141] libmachine: (kindnet-957670) DBG | domain kindnet-957670 has defined MAC address 52:54:00:50:76:35 in network mk-kindnet-957670
	I0916 11:34:12.183646   53823 main.go:141] libmachine: (kindnet-957670) DBG | unable to find current IP address of domain kindnet-957670 in network mk-kindnet-957670
	I0916 11:34:12.183668   53823 main.go:141] libmachine: (kindnet-957670) DBG | I0916 11:34:12.183597   53953 retry.go:31] will retry after 2.080401857s: waiting for machine to come up
	I0916 11:34:09.310881   53522 main.go:141] libmachine: (pause-902210) Calling .GetIP
	I0916 11:34:09.314375   53522 main.go:141] libmachine: (pause-902210) DBG | domain pause-902210 has defined MAC address 52:54:00:a8:e0:34 in network mk-pause-902210
	I0916 11:34:09.314867   53522 main.go:141] libmachine: (pause-902210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:e0:34", ip: ""} in network mk-pause-902210: {Iface:virbr3 ExpiryTime:2024-09-16 12:32:50 +0000 UTC Type:0 Mac:52:54:00:a8:e0:34 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:pause-902210 Clientid:01:52:54:00:a8:e0:34}
	I0916 11:34:09.314888   53522 main.go:141] libmachine: (pause-902210) DBG | domain pause-902210 has defined IP address 192.168.39.244 and MAC address 52:54:00:a8:e0:34 in network mk-pause-902210
	I0916 11:34:09.315118   53522 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0916 11:34:09.321180   53522 kubeadm.go:883] updating cluster {Name:pause-902210 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1
ClusterName:pause-902210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.244 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:fals
e olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 11:34:09.321359   53522 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 11:34:09.321419   53522 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:34:09.376325   53522 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 11:34:09.376347   53522 crio.go:433] Images already preloaded, skipping extraction
	I0916 11:34:09.376401   53522 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:34:09.413037   53522 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 11:34:09.413066   53522 cache_images.go:84] Images are preloaded, skipping loading
	I0916 11:34:09.413075   53522 kubeadm.go:934] updating node { 192.168.39.244 8443 v1.31.1 crio true true} ...
	I0916 11:34:09.413234   53522 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-902210 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.244
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:pause-902210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 11:34:09.413328   53522 ssh_runner.go:195] Run: crio config
	I0916 11:34:09.488704   53522 cni.go:84] Creating CNI manager for ""
	I0916 11:34:09.488726   53522 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0916 11:34:09.488738   53522 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 11:34:09.488765   53522 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.244 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-902210 NodeName:pause-902210 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.244"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.244 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 11:34:09.488926   53522 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.244
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-902210"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.244
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.244"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
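Worth noting in the generated config: the KubeletConfiguration document above deliberately disables disk-pressure handling inside the VM (evictionHard thresholds of "0%" and imageGCHighThresholdPercent: 100), as its inline comment says. A small sketch that reads those fields back out of a multi-document kubeadm-style YAML file, assuming the gopkg.in/yaml.v3 package and a hypothetical local path:

package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

// kubeletDoc captures only the fields of interest from the
// KubeletConfiguration document; other fields and documents are ignored.
type kubeletDoc struct {
	Kind                        string            `yaml:"kind"`
	EvictionHard                map[string]string `yaml:"evictionHard"`
	ImageGCHighThresholdPercent int               `yaml:"imageGCHighThresholdPercent"`
}

func main() {
	f, err := os.Open("kubeadm.yaml") // hypothetical path to the generated config
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc kubeletDoc
		if err := dec.Decode(&doc); err != nil {
			break // io.EOF once every document has been read
		}
		if doc.Kind == "KubeletConfiguration" {
			fmt.Println("evictionHard:", doc.EvictionHard)
			fmt.Println("imageGCHighThresholdPercent:", doc.ImageGCHighThresholdPercent)
		}
	}
}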
	
	I0916 11:34:09.488995   53522 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 11:34:09.500107   53522 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 11:34:09.500181   53522 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 11:34:09.511192   53522 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0916 11:34:09.531816   53522 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 11:34:09.550711   53522 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0916 11:34:09.572905   53522 ssh_runner.go:195] Run: grep 192.168.39.244	control-plane.minikube.internal$ /etc/hosts
	I0916 11:34:09.578540   53522 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:34:09.758285   53522 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:34:09.777369   53522 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/pause-902210 for IP: 192.168.39.244
	I0916 11:34:09.777392   53522 certs.go:194] generating shared ca certs ...
	I0916 11:34:09.777410   53522 certs.go:226] acquiring lock for ca certs: {Name:mkc5eb18b90c7d501da61b378999fd65b785fd5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:34:09.777593   53522 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key
	I0916 11:34:09.777653   53522 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key
	I0916 11:34:09.777666   53522 certs.go:256] generating profile certs ...
	I0916 11:34:09.777765   53522 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/pause-902210/client.key
	I0916 11:34:09.777841   53522 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/pause-902210/apiserver.key.ad3526e3
	I0916 11:34:09.777901   53522 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/pause-902210/proxy-client.key
	I0916 11:34:09.778052   53522 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem (1338 bytes)
	W0916 11:34:09.778099   53522 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203_empty.pem, impossibly tiny 0 bytes
	I0916 11:34:09.778110   53522 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 11:34:09.778141   53522 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/ca.pem (1082 bytes)
	I0916 11:34:09.778190   53522 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/cert.pem (1123 bytes)
	I0916 11:34:09.778229   53522 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/certs/key.pem (1679 bytes)
	I0916 11:34:09.778287   53522 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem (1708 bytes)
	I0916 11:34:09.779079   53522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 11:34:09.809951   53522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 11:34:09.838293   53522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 11:34:09.866783   53522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 11:34:09.892624   53522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/pause-902210/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0916 11:34:09.919241   53522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/pause-902210/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 11:34:09.948084   53522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/pause-902210/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 11:34:09.977023   53522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/pause-902210/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 11:34:10.008307   53522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/ssl/certs/112032.pem --> /usr/share/ca-certificates/112032.pem (1708 bytes)
	I0916 11:34:10.038665   53522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 11:34:10.070465   53522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3851/.minikube/certs/11203.pem --> /usr/share/ca-certificates/11203.pem (1338 bytes)
	I0916 11:34:10.103815   53522 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 11:34:10.125527   53522 ssh_runner.go:195] Run: openssl version
	I0916 11:34:10.133283   53522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112032.pem && ln -fs /usr/share/ca-certificates/112032.pem /etc/ssl/certs/112032.pem"
	I0916 11:34:10.147153   53522 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112032.pem
	I0916 11:34:10.153047   53522 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:33 /usr/share/ca-certificates/112032.pem
	I0916 11:34:10.153111   53522 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112032.pem
	I0916 11:34:10.159805   53522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112032.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 11:34:10.170462   53522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 11:34:10.182186   53522 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:34:10.187350   53522 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:34:10.187413   53522 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:34:10.193865   53522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 11:34:10.204260   53522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11203.pem && ln -fs /usr/share/ca-certificates/11203.pem /etc/ssl/certs/11203.pem"
	I0916 11:34:10.215904   53522 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11203.pem
	I0916 11:34:10.220726   53522 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:33 /usr/share/ca-certificates/11203.pem
	I0916 11:34:10.220792   53522 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11203.pem
	I0916 11:34:10.227272   53522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11203.pem /etc/ssl/certs/51391683.0"
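The openssl/ln pairs above reproduce OpenSSL's hashed-directory lookup convention by hand: `openssl x509 -hash -noout` prints the subject-name hash of the certificate, which is then symlinked as `<hash>.0` under /etc/ssl/certs so TLS clients scanning that directory can find the CA (the same thing c_rehash automates). A local Go sketch of the two steps, assuming an openssl binary on PATH and a writable certs directory:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash mirrors the "openssl x509 -hash" + "ln -fs" pair from the
// log: ask openssl for the subject-name hash of certPath, then create the
// <hash>.0 symlink inside certsDir.
func linkCertByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("openssl: %w", err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // emulate ln -f: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}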
	I0916 11:34:10.238629   53522 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 11:34:10.243746   53522 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0916 11:34:10.249956   53522 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0916 11:34:10.255923   53522 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0916 11:34:10.261721   53522 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0916 11:34:10.268612   53522 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0916 11:34:10.274780   53522 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
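Each `openssl x509 -checkend 86400` call above asks whether the certificate will still be valid 24 hours from now (a non-zero exit means it expires within that window), which is the signal for regenerating certs on restart. The same check can be done natively with crypto/x509; a minimal sketch, with the certificate path taken from the log as an example:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the same question `openssl x509 -checkend <seconds>` answers.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}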
	I0916 11:34:10.281193   53522 kubeadm.go:392] StartCluster: {Name:pause-902210 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:pause-902210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.244 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false o
lm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:34:10.281342   53522 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 11:34:10.281414   53522 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 11:34:10.328021   53522 cri.go:89] found id: "4798a782313b3452b367d9d5b20e11c3b429d06db86ea445191382b8ec01100f"
	I0916 11:34:10.328045   53522 cri.go:89] found id: "49c77760ba1b5aee58544eeab79b6daf825b90d2a8d7dca8db6f414e450bfb70"
	I0916 11:34:10.328050   53522 cri.go:89] found id: "a56269c3c3093b87fb3b2a409d654f2d37ea2b7d1fb8567dc19c41980c986846"
	I0916 11:34:10.328060   53522 cri.go:89] found id: "0b63c14c2d42d5f5cab24ddd84c1c439c47b24c3a48e1b8c442ff40973000a31"
	I0916 11:34:10.328064   53522 cri.go:89] found id: "c7437b8737ae1ab6e000cfdac446c179030f3ce76c6fcafb3529cb01a8258657"
	I0916 11:34:10.328068   53522 cri.go:89] found id: "5f8e91d7b0eeb422ecddf7c5480d602d455a56944f48218a3bc9442ba80c0011"
	I0916 11:34:10.328072   53522 cri.go:89] found id: ""
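The container IDs above come from the `crictl ps -a --quiet` call with a label filter restricting results to pods in the kube-system namespace; the `runc list -f json` call that follows then supplies state and annotations for those same containers. A sketch of the first step, assuming crictl is installed and using the exact label filter shown in the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// kubeSystemContainerIDs shells out to crictl the same way the log does and
// returns one container ID per non-empty line of output.
func kubeSystemContainerIDs() ([]string, error) {
	out, err := exec.Command("crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if line = strings.TrimSpace(line); line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := kubeSystemContainerIDs()
	if err != nil {
		fmt.Println(err)
		return
	}
	for _, id := range ids {
		fmt.Println("found id:", id)
	}
}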
	I0916 11:34:10.328124   53522 ssh_runner.go:195] Run: sudo runc list -f json
	I0916 11:34:10.355827   53522 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"0b63c14c2d42d5f5cab24ddd84c1c439c47b24c3a48e1b8c442ff40973000a31","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/0b63c14c2d42d5f5cab24ddd84c1c439c47b24c3a48e1b8c442ff40973000a31/userdata","rootfs":"/var/lib/containers/storage/overlay/c3b5f8149633a2a8547cc3488605938db2f527d48d3b48267ee9bde9dd9d0d28/merged","created":"2024-09-16T11:33:12.989514782Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"7df2713b","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"7df2713b\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminatio
nMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"0b63c14c2d42d5f5cab24ddd84c1c439c47b24c3a48e1b8c442ff40973000a31","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-09-16T11:33:12.884955987Z","io.kubernetes.cri-o.Image":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.31.1","io.kubernetes.cri-o.ImageRef":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-902210\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"e8faf4de18fb11f28f2a94abdbae8634\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-902210_e8faf4de18fb11f28f2a94abdbae8634/kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\"}","io.kube
rnetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/c3b5f8149633a2a8547cc3488605938db2f527d48d3b48267ee9bde9dd9d0d28/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-pause-902210_kube-system_e8faf4de18fb11f28f2a94abdbae8634_0","io.kubernetes.cri-o.PlatformRuntimePath":"","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/29abde9ebdac1aacdcdebf98c948c4428971f203c577e6e423fc639b953c65c3/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"29abde9ebdac1aacdcdebf98c948c4428971f203c577e6e423fc639b953c65c3","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-pause-902210_kube-system_e8faf4de18fb11f28f2a94abdbae8634_0","io.kubernetes.cri-o.SeccompProfilePath":"RuntimeDefault","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/e8faf4de18fb11f28f2a94abdbae8634/etc-hosts\",\"readonly\":false,\"
propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/e8faf4de18fb11f28f2a94abdbae8634/containers/kube-apiserver/798e1fab\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-pause-902210","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"e8faf4de18fb11f28f2a94abdbae8634","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.244:8443","kubernetes.io/config
.hash":"e8faf4de18fb11f28f2a94abdbae8634","kubernetes.io/config.seen":"2024-09-16T11:33:12.095297780Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"29abde9ebdac1aacdcdebf98c948c4428971f203c577e6e423fc639b953c65c3","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/29abde9ebdac1aacdcdebf98c948c4428971f203c577e6e423fc639b953c65c3/userdata","rootfs":"/var/lib/containers/storage/overlay/a16524f96096665c6e145280eee59c3188a3aa1fd23cba0c2d45c5ee106bdaba/merged","created":"2024-09-16T11:33:12.72728129Z","annotations":{"component":"kube-apiserver","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.hash\":\"e8faf4de18fb11f28f2a94abdbae8634\",\"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint\":\"192.168.39.244:8443\",\"kubernetes.io/config.seen\":\"2024-09-16T11:33:12.095297780Z\",\"kubernetes.io/config.source\":\"file\"}","io.kubernetes.cri-o.CgroupParent":"/k
ubepods/burstable/pode8faf4de18fb11f28f2a94abdbae8634","io.kubernetes.cri-o.ContainerID":"29abde9ebdac1aacdcdebf98c948c4428971f203c577e6e423fc639b953c65c3","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-apiserver-pause-902210_kube-system_e8faf4de18fb11f28f2a94abdbae8634_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2024-09-16T11:33:12.575946312Z","io.kubernetes.cri-o.HostName":"pause-902210","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/29abde9ebdac1aacdcdebf98c948c4428971f203c577e6e423fc639b953c65c3/userdata/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.KubeName":"kube-apiserver-pause-902210","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-902210\",\"tier\":\"control-plane\",\"component\":\"kube-apiserver\",\"io.ku
bernetes.pod.uid\":\"e8faf4de18fb11f28f2a94abdbae8634\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-902210_e8faf4de18fb11f28f2a94abdbae8634/29abde9ebdac1aacdcdebf98c948c4428971f203c577e6e423fc639b953c65c3.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver-pause-902210\",\"uid\":\"e8faf4de18fb11f28f2a94abdbae8634\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/a16524f96096665c6e145280eee59c3188a3aa1fd23cba0c2d45c5ee106bdaba/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver-pause-902210_kube-system_e8faf4de18fb11f28f2a94abdbae8634_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_period\":100000,\"cpu_shares\":256,\"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernet
es.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/29abde9ebdac1aacdcdebf98c948c4428971f203c577e6e423fc639b953c65c3/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"29abde9ebdac1aacdcdebf98c948c4428971f203c577e6e423fc639b953c65c3","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-pause-902210_kube-system_e8faf4de18fb11f28f2a94abdbae8634_0","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/29abde9ebdac1aacdcdebf98c948c4428971f203c577e6e423fc639b953c65c3/userdata/shm","io.kubernetes.pod.name":"kube-apiserver-pause-902210","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"e8faf4de18fb11f28f2a94abdbae8634","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.244:8443","kubernetes.io/config.hash":"e8faf4de18fb11f28f2a94abdbae8634","kubernetes.io/config.seen":"2024-09-16T11:33:1
2.095297780Z","kubernetes.io/config.source":"file","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"43c7fc47494a7e3ce46b06c81587765792a6b0ab831101a69abc5f965055194b","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/43c7fc47494a7e3ce46b06c81587765792a6b0ab831101a69abc5f965055194b/userdata","rootfs":"/var/lib/containers/storage/overlay/525c2b9674d2273594455bd2486f4255fe27db3a6bd61421abf11242a931d733/merged","created":"2024-09-16T11:33:12.685583873Z","annotations":{"component":"kube-scheduler","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2024-09-16T11:33:12.095301197Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"0592e3be48f4757891c8c929898ec58a\"}","io.kubernetes.cri-o.CgroupParent":"/kubepods/burstable/pod0592e3be48f4757891c8c929898ec58a","io.kubernetes.cri-o.ContainerID":"43c7fc47494a7e3ce46b06c81587765792a6b0ab831101a69abc5f965055194b","
io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-scheduler-pause-902210_kube-system_0592e3be48f4757891c8c929898ec58a_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2024-09-16T11:33:12.573590973Z","io.kubernetes.cri-o.HostName":"pause-902210","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/43c7fc47494a7e3ce46b06c81587765792a6b0ab831101a69abc5f965055194b/userdata/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.KubeName":"kube-scheduler-pause-902210","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.name\":\"kube-scheduler-pause-902210\",\"component\":\"kube-scheduler\",\"tier\":\"control-plane\",\"io.kubernetes.pod.uid\":\"0592e3be48f4757891c8c929898ec58a\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_k
ube-scheduler-pause-902210_0592e3be48f4757891c8c929898ec58a/43c7fc47494a7e3ce46b06c81587765792a6b0ab831101a69abc5f965055194b.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler-pause-902210\",\"uid\":\"0592e3be48f4757891c8c929898ec58a\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/525c2b9674d2273594455bd2486f4255fe27db3a6bd61421abf11242a931d733/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler-pause-902210_kube-system_0592e3be48f4757891c8c929898ec58a_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_period\":100000,\"cpu_shares\":102,\"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/43c7fc47494a7e3ce46b06c81587765792a6b0ab8
31101a69abc5f965055194b/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"43c7fc47494a7e3ce46b06c81587765792a6b0ab831101a69abc5f965055194b","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-pause-902210_kube-system_0592e3be48f4757891c8c929898ec58a_0","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/43c7fc47494a7e3ce46b06c81587765792a6b0ab831101a69abc5f965055194b/userdata/shm","io.kubernetes.pod.name":"kube-scheduler-pause-902210","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"0592e3be48f4757891c8c929898ec58a","kubernetes.io/config.hash":"0592e3be48f4757891c8c929898ec58a","kubernetes.io/config.seen":"2024-09-16T11:33:12.095301197Z","kubernetes.io/config.source":"file","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4798a782313b3452b367d9d5b20e11c3b429d06db86ea445191382b8ec01100f","pid":0,"status":"stopped","bundle":"/run/containers/sto
rage/overlay-containers/4798a782313b3452b367d9d5b20e11c3b429d06db86ea445191382b8ec01100f/userdata","rootfs":"/var/lib/containers/storage/overlay/51c774f6d67ce47410b2a449ca7fa8bd1dd9dea4dbd9f267c2c731fa701c759e/merged","created":"2024-09-16T11:33:26.03506905Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"2a3a204d","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"2a3a204d\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\
"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"4798a782313b3452b367d9d5b20e11c3b429d06db86ea445191382b8ec01100f","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-09-16T11:33:25.982322723Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","io.kubernetes.cri-o.ImageName":"registry.k8s.io/coredns/coredns:v1.11.3","io.kubernetes.cri-o.ImageRef":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-7c65d6cfc9-
kfklg\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"3ae628ea-d24e-4adf-92a6-6cc965d6ba0c\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-7c65d6cfc9-kfklg_3ae628ea-d24e-4adf-92a6-6cc965d6ba0c/coredns/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/51c774f6d67ce47410b2a449ca7fa8bd1dd9dea4dbd9f267c2c731fa701c759e/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-7c65d6cfc9-kfklg_kube-system_3ae628ea-d24e-4adf-92a6-6cc965d6ba0c_0","io.kubernetes.cri-o.PlatformRuntimePath":"","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/e428b19997ebf355a383157dfa311ad7477ccf66c6feeced920ec710a58e2845/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"e428b19997ebf355a383157dfa311ad7477ccf66c6feeced920ec710a58e2845","io.kubernetes.cri-o.SandboxName":"k8s_coredns-7c65d6cfc9-kfklg_kube-system_3ae628ea-d24e-4adf-92a6-6cc965d6ba0c_0","io.kubernetes.cri-o.
SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/3ae628ea-d24e-4adf-92a6-6cc965d6ba0c/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/3ae628ea-d24e-4adf-92a6-6cc965d6ba0c/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/3ae628ea-d24e-4adf-92a6-6cc965d6ba0c/containers/coredns/50ab9251\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/3ae628ea-d24e-4adf-92a6-6cc965d6ba0c/volumes/kubernetes.io~projected/kube-api-access-gfcnc\",\"readonly\":true,\"propa
gation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"coredns-7c65d6cfc9-kfklg","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"3ae628ea-d24e-4adf-92a6-6cc965d6ba0c","kubernetes.io/config.seen":"2024-09-16T11:33:23.955364855Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"49c77760ba1b5aee58544eeab79b6daf825b90d2a8d7dca8db6f414e450bfb70","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/49c77760ba1b5aee58544eeab79b6daf825b90d2a8d7dca8db6f414e450bfb70/userdata","rootfs":"/var/lib/containers/storage/overlay/ef2368810b3d3bcfd46a5a42164899e27f56252b11d0f9bbb6a8ed1a1222407e/merged","created":"2024-09-16T11:33:24.395872877Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"159dcc59","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","
io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"159dcc59\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"49c77760ba1b5aee58544eeab79b6daf825b90d2a8d7dca8db6f414e450bfb70","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-09-16T11:33:24.310710567Z","io.kubernetes.cri-o.Image":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-proxy:v1.31.1","io.kubernetes.cri-o.ImageRef":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-j7c5t\",\"io.kubernetes.pod.namespace\":\"kube-system
\",\"io.kubernetes.pod.uid\":\"313acb7c-c9f0-4886-a173-b05701589b46\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-j7c5t_313acb7c-c9f0-4886-a173-b05701589b46/kube-proxy/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/ef2368810b3d3bcfd46a5a42164899e27f56252b11d0f9bbb6a8ed1a1222407e/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-j7c5t_kube-system_313acb7c-c9f0-4886-a173-b05701589b46_0","io.kubernetes.cri-o.PlatformRuntimePath":"","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/63d0171b5901e4f61bbaf89a371028220168a1360ee0033b1ad6b2fef42a2bba/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"63d0171b5901e4f61bbaf89a371028220168a1360ee0033b1ad6b2fef42a2bba","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-j7c5t_kube-system_313acb7c-c9f0-4886-a173-b05701589b46_0","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.Stdin":"false"
,"io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/313acb7c-c9f0-4886-a173-b05701589b46/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/313acb7c-c9f0-4886-a173-b05701589b46/containers/kube-proxy/e5715201\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/313acb7c-c9f0-4886-a173-b05701589b46/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_p
ath\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/313acb7c-c9f0-4886-a173-b05701589b46/volumes/kubernetes.io~projected/kube-api-access-8pjkl\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-proxy-j7c5t","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"313acb7c-c9f0-4886-a173-b05701589b46","kubernetes.io/config.seen":"2024-09-16T11:33:23.247704480Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"50d7912d6c96280fb3936535adde32caebc380f3668e6a8938a330a765af9d34","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/50d7912d6c96280fb3936535adde32caebc380f3668e6a8938a330a765af9d34/userdata","rootfs":"/var/lib/containers/storage/overlay/da731edde200e300afc877817a1de4bb3ca0e87e43cfcb4789f21dc719209047/merged","created":"2024-09-16T11:33:12.699993929Z","annotations":{"component":"etcd","io.contai
ner.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"aa4b808106c85e3bfcc0d9238dd2b13e\",\"kubeadm.kubernetes.io/etcd.advertise-client-urls\":\"https://192.168.39.244:2379\",\"kubernetes.io/config.seen\":\"2024-09-16T11:33:12.095292610Z\"}","io.kubernetes.cri-o.CgroupParent":"/kubepods/burstable/podaa4b808106c85e3bfcc0d9238dd2b13e","io.kubernetes.cri-o.ContainerID":"50d7912d6c96280fb3936535adde32caebc380f3668e6a8938a330a765af9d34","io.kubernetes.cri-o.ContainerName":"k8s_POD_etcd-pause-902210_kube-system_aa4b808106c85e3bfcc0d9238dd2b13e_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2024-09-16T11:33:12.579283729Z","io.kubernetes.cri-o.HostName":"pause-902210","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/50d7912d6c96280fb3936535adde32caebc380f3668e6a8938a330a765af9d34/userdata/hostname","i
o.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.KubeName":"etcd-pause-902210","io.kubernetes.cri-o.Labels":"{\"tier\":\"control-plane\",\"component\":\"etcd\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"aa4b808106c85e3bfcc0d9238dd2b13e\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"etcd-pause-902210\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-pause-902210_aa4b808106c85e3bfcc0d9238dd2b13e/50d7912d6c96280fb3936535adde32caebc380f3668e6a8938a330a765af9d34.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd-pause-902210\",\"uid\":\"aa4b808106c85e3bfcc0d9238dd2b13e\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/da731edde200e300afc877817a1de4bb3ca0e87e43cfcb4789f21dc719209047/merged","io.kubernetes.cri-o.Name":"k8s_etcd-pause-902210_kube-system_aa4b808106c85e3bfcc0d9238dd2b13e_0","io.kubernete
s.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_period\":100000,\"cpu_shares\":102,\"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/50d7912d6c96280fb3936535adde32caebc380f3668e6a8938a330a765af9d34/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"50d7912d6c96280fb3936535adde32caebc380f3668e6a8938a330a765af9d34","io.kubernetes.cri-o.SandboxName":"k8s_etcd-pause-902210_kube-system_aa4b808106c85e3bfcc0d9238dd2b13e_0","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/50d7912d6c96280fb3936535adde32caebc380f3668e6a8938a330a765af9d34/userdata/shm","io.kubernetes.pod.name":"etcd-pause-902210"
,"io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"aa4b808106c85e3bfcc0d9238dd2b13e","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.244:2379","kubernetes.io/config.hash":"aa4b808106c85e3bfcc0d9238dd2b13e","kubernetes.io/config.seen":"2024-09-16T11:33:12.095292610Z","kubernetes.io/config.source":"file","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5f8e91d7b0eeb422ecddf7c5480d602d455a56944f48218a3bc9442ba80c0011","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/5f8e91d7b0eeb422ecddf7c5480d602d455a56944f48218a3bc9442ba80c0011/userdata","rootfs":"/var/lib/containers/storage/overlay/3de21babe68043cd455e479a1c86461c4812b9ac1839a10aaa59b7634f48880c/merged","created":"2024-09-16T11:33:12.929548214Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"cdf7d3fa","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev
/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"cdf7d3fa\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"5f8e91d7b0eeb422ecddf7c5480d602d455a56944f48218a3bc9442ba80c0011","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-09-16T11:33:12.806912847Z","io.kubernetes.cri-o.Image":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","io.kubernetes.cri-o.ImageName":"registry.k8s.io/etcd:3.5.15-0","io.kubernetes.cri-o.ImageRef":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-pause-902210\",\"io.kubernetes.pod.namespace\":\"ku
be-system\",\"io.kubernetes.pod.uid\":\"aa4b808106c85e3bfcc0d9238dd2b13e\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-pause-902210_aa4b808106c85e3bfcc0d9238dd2b13e/etcd/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/3de21babe68043cd455e479a1c86461c4812b9ac1839a10aaa59b7634f48880c/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-pause-902210_kube-system_aa4b808106c85e3bfcc0d9238dd2b13e_0","io.kubernetes.cri-o.PlatformRuntimePath":"","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/50d7912d6c96280fb3936535adde32caebc380f3668e6a8938a330a765af9d34/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"50d7912d6c96280fb3936535adde32caebc380f3668e6a8938a330a765af9d34","io.kubernetes.cri-o.SandboxName":"k8s_etcd-pause-902210_kube-system_aa4b808106c85e3bfcc0d9238dd2b13e_0","io.kubernetes.cri-o.SeccompProfilePath":"RuntimeDefault","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cr
i-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/aa4b808106c85e3bfcc0d9238dd2b13e/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/aa4b808106c85e3bfcc0d9238dd2b13e/containers/etcd/1565fc62\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-pause-902210","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"aa4b808106c85e3bfcc0d9238dd2b13e","kubeadm.kubernetes.io/etcd.adve
rtise-client-urls":"https://192.168.39.244:2379","kubernetes.io/config.hash":"aa4b808106c85e3bfcc0d9238dd2b13e","kubernetes.io/config.seen":"2024-09-16T11:33:12.095292610Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"63d0171b5901e4f61bbaf89a371028220168a1360ee0033b1ad6b2fef42a2bba","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/63d0171b5901e4f61bbaf89a371028220168a1360ee0033b1ad6b2fef42a2bba/userdata","rootfs":"/var/lib/containers/storage/overlay/1723ae290b05a002c63fb493b71e76af815325d7e1cf9ae9809da62472761ea8/merged","created":"2024-09-16T11:33:24.219544394Z","annotations":{"controller-revision-hash":"648b489c5b","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2024-09-16T11:33:23.247704480Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CgroupParent":"/kubepods/besteffort/pod313acb7c-c9f0-4886-a173-b05701589b46","io.kubernetes.cri
-o.ContainerID":"63d0171b5901e4f61bbaf89a371028220168a1360ee0033b1ad6b2fef42a2bba","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-proxy-j7c5t_kube-system_313acb7c-c9f0-4886-a173-b05701589b46_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2024-09-16T11:33:24.158860039Z","io.kubernetes.cri-o.HostName":"pause-902210","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/63d0171b5901e4f61bbaf89a371028220168a1360ee0033b1ad6b2fef42a2bba/userdata/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.KubeName":"kube-proxy-j7c5t","io.kubernetes.cri-o.Labels":"{\"controller-revision-hash\":\"648b489c5b\",\"io.kubernetes.container.name\":\"POD\",\"pod-template-generation\":\"1\",\"k8s-app\":\"kube-proxy\",\"io.kubernetes.pod.uid\":\"313acb7c-c9f0-4886-a173-b05701589b46\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io
.kubernetes.pod.name\":\"kube-proxy-j7c5t\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-j7c5t_313acb7c-c9f0-4886-a173-b05701589b46/63d0171b5901e4f61bbaf89a371028220168a1360ee0033b1ad6b2fef42a2bba.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy-j7c5t\",\"uid\":\"313acb7c-c9f0-4886-a173-b05701589b46\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/1723ae290b05a002c63fb493b71e76af815325d7e1cf9ae9809da62472761ea8/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy-j7c5t_kube-system_313acb7c-c9f0-4886-a173-b05701589b46_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_period\":100000,\"cpu_shares\":2,\"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/ru
n/containers/storage/overlay-containers/63d0171b5901e4f61bbaf89a371028220168a1360ee0033b1ad6b2fef42a2bba/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"63d0171b5901e4f61bbaf89a371028220168a1360ee0033b1ad6b2fef42a2bba","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-j7c5t_kube-system_313acb7c-c9f0-4886-a173-b05701589b46_0","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/63d0171b5901e4f61bbaf89a371028220168a1360ee0033b1ad6b2fef42a2bba/userdata/shm","io.kubernetes.pod.name":"kube-proxy-j7c5t","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"313acb7c-c9f0-4886-a173-b05701589b46","k8s-app":"kube-proxy","kubernetes.io/config.seen":"2024-09-16T11:33:23.247704480Z","kubernetes.io/config.source":"api","pod-template-generation":"1"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a56269c3c3093b87fb3b2a409d654f2d37ea2b7d1fb8567dc19c41980c986846","pid":0,"status":"stopp
ed","bundle":"/run/containers/storage/overlay-containers/a56269c3c3093b87fb3b2a409d654f2d37ea2b7d1fb8567dc19c41980c986846/userdata","rootfs":"/var/lib/containers/storage/overlay/dcd05bc380cd0e43d78e2eb3173320030a12106463d53b8c805a8c1cd0b87779/merged","created":"2024-09-16T11:33:13.056154436Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"d1900d79","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"d1900d79\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"a56269c3c3093b87fb3b2a409d654f2d37ea2b7d1fb8567dc19c4198
0c986846","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-09-16T11:33:12.933630497Z","io.kubernetes.cri-o.Image":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.31.1","io.kubernetes.cri-o.ImageRef":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-902210\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"51eece17c3a985b628db0a0b01c853d7\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-902210_51eece17c3a985b628db0a0b01c853d7/kube-controller-manager/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/dcd05bc380cd0e43d78e2eb3173320030a12106463d53b8c805a8c1
cd0b87779/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-pause-902210_kube-system_51eece17c3a985b628db0a0b01c853d7_0","io.kubernetes.cri-o.PlatformRuntimePath":"","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/b182dc24c34926e75a9f85377f1ea3b9b572a1519ffea77b9a1a8d4d7672f8ac/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"b182dc24c34926e75a9f85377f1ea3b9b572a1519ffea77b9a1a8d4d7672f8ac","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-pause-902210_kube-system_51eece17c3a985b628db0a0b01c853d7_0","io.kubernetes.cri-o.SeccompProfilePath":"RuntimeDefault","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/51eece17c3a985b628db0a0b01c853d7/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"
host_path\":\"/var/lib/kubelet/pods/51eece17c3a985b628db0a0b01c853d7/containers/kube-controller-manager/4c8f6594\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","
io.kubernetes.pod.name":"kube-controller-manager-pause-902210","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"51eece17c3a985b628db0a0b01c853d7","kubernetes.io/config.hash":"51eece17c3a985b628db0a0b01c853d7","kubernetes.io/config.seen":"2024-09-16T11:33:12.095299655Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b182dc24c34926e75a9f85377f1ea3b9b572a1519ffea77b9a1a8d4d7672f8ac","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/b182dc24c34926e75a9f85377f1ea3b9b572a1519ffea77b9a1a8d4d7672f8ac/userdata","rootfs":"/var/lib/containers/storage/overlay/d5fe44da19937d6abb8e6c85ce20a6bf7eba52a1e30a510bb6fabd1222e62538/merged","created":"2024-09-16T11:33:12.725057835Z","annotations":{"component":"kube-controller-manager","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2024-09-16T11:33:12.09529965
5Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"51eece17c3a985b628db0a0b01c853d7\"}","io.kubernetes.cri-o.CgroupParent":"/kubepods/burstable/pod51eece17c3a985b628db0a0b01c853d7","io.kubernetes.cri-o.ContainerID":"b182dc24c34926e75a9f85377f1ea3b9b572a1519ffea77b9a1a8d4d7672f8ac","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-controller-manager-pause-902210_kube-system_51eece17c3a985b628db0a0b01c853d7_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2024-09-16T11:33:12.596106457Z","io.kubernetes.cri-o.HostName":"pause-902210","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/b182dc24c34926e75a9f85377f1ea3b9b572a1519ffea77b9a1a8d4d7672f8ac/userdata/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.KubeName":"kube-controller-manager-pause-902210","io.kubernetes.cri-o.Labels":"{\"i
o.kubernetes.pod.uid\":\"51eece17c3a985b628db0a0b01c853d7\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-902210\",\"tier\":\"control-plane\",\"component\":\"kube-controller-manager\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-902210_51eece17c3a985b628db0a0b01c853d7/b182dc24c34926e75a9f85377f1ea3b9b572a1519ffea77b9a1a8d4d7672f8ac.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager-pause-902210\",\"uid\":\"51eece17c3a985b628db0a0b01c853d7\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/d5fe44da19937d6abb8e6c85ce20a6bf7eba52a1e30a510bb6fabd1222e62538/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager-pause-902210_kube-system_51eece17c3a985b628db0a0b01c853d7_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.ku
bernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_period\":100000,\"cpu_shares\":204,\"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/b182dc24c34926e75a9f85377f1ea3b9b572a1519ffea77b9a1a8d4d7672f8ac/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"b182dc24c34926e75a9f85377f1ea3b9b572a1519ffea77b9a1a8d4d7672f8ac","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-pause-902210_kube-system_51eece17c3a985b628db0a0b01c853d7_0","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/b182dc24c34926e75a9f85377f1ea3b9b572a1519ffea77b9a1a8d4d7672f8ac/userdata/shm","io.kubernetes.pod.name":"kube-controller-manager-pause-902210","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid"
:"51eece17c3a985b628db0a0b01c853d7","kubernetes.io/config.hash":"51eece17c3a985b628db0a0b01c853d7","kubernetes.io/config.seen":"2024-09-16T11:33:12.095299655Z","kubernetes.io/config.source":"file","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c7437b8737ae1ab6e000cfdac446c179030f3ce76c6fcafb3529cb01a8258657","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/c7437b8737ae1ab6e000cfdac446c179030f3ce76c6fcafb3529cb01a8258657/userdata","rootfs":"/var/lib/containers/storage/overlay/c3d303c47a774fedf95c1e60393963e889e62b8efd2c3cd834103e2939fdbe32/merged","created":"2024-09-16T11:33:13.01270898Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"12faacf7","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.ha
sh\":\"12faacf7\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"c7437b8737ae1ab6e000cfdac446c179030f3ce76c6fcafb3529cb01a8258657","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-09-16T11:33:12.850648006Z","io.kubernetes.cri-o.Image":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.31.1","io.kubernetes.cri-o.ImageRef":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-pause-902210\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"0592e3be48f4757891c8c929898ec58a\"}","io.kubernetes.cri-o.LogPath":"/va
r/log/pods/kube-system_kube-scheduler-pause-902210_0592e3be48f4757891c8c929898ec58a/kube-scheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/c3d303c47a774fedf95c1e60393963e889e62b8efd2c3cd834103e2939fdbe32/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-pause-902210_kube-system_0592e3be48f4757891c8c929898ec58a_0","io.kubernetes.cri-o.PlatformRuntimePath":"","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/43c7fc47494a7e3ce46b06c81587765792a6b0ab831101a69abc5f965055194b/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"43c7fc47494a7e3ce46b06c81587765792a6b0ab831101a69abc5f965055194b","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-pause-902210_kube-system_0592e3be48f4757891c8c929898ec58a_0","io.kubernetes.cri-o.SeccompProfilePath":"RuntimeDefault","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"f
alse","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/0592e3be48f4757891c8c929898ec58a/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/0592e3be48f4757891c8c929898ec58a/containers/kube-scheduler/e46cdf9d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-pause-902210","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"0592e3be48f4757891c8c929898ec58a","kubernetes.io/config.hash":"0592e3be48f4757891c8c929898ec58a","kubernetes.io/config.seen":"2024-09-16T11:33:12.095301197Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e4
28b19997ebf355a383157dfa311ad7477ccf66c6feeced920ec710a58e2845","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/e428b19997ebf355a383157dfa311ad7477ccf66c6feeced920ec710a58e2845/userdata","rootfs":"/var/lib/containers/storage/overlay/62d031f87d90dc9c5c3e0da30f77170b5d9188b27c04bbf9bd9692198dff76b1/merged","created":"2024-09-16T11:33:25.913051458Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2024-09-16T11:33:23.955364855Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CNIResult":"{\"cniVersion\":\"1.0.0\",\"interfaces\":[{\"name\":\"bridge\",\"mac\":\"36:8b:03:1b:c9:e6\"},{\"name\":\"vetha93979f6\",\"mac\":\"7e:31:5b:67:42:96\"},{\"name\":\"eth0\",\"mac\":\"a6:46:64:34:c2:0c\",\"sandbox\":\"/var/run/netns/5c1c6c3a-f18f-4cdf-9de3-835e9fa5e0eb\"}],\"ips\":[{\"interface\":2,\"address\":\"10.244.0.2/16\",\"gateway\":\"10.244.0.1\"}],\"routes\":[{\"dst\":\"0.0.0
.0/0\",\"gw\":\"10.244.0.1\"}],\"dns\":{}}","io.kubernetes.cri-o.CgroupParent":"/kubepods/burstable/pod3ae628ea-d24e-4adf-92a6-6cc965d6ba0c","io.kubernetes.cri-o.ContainerID":"e428b19997ebf355a383157dfa311ad7477ccf66c6feeced920ec710a58e2845","io.kubernetes.cri-o.ContainerName":"k8s_POD_coredns-7c65d6cfc9-kfklg_kube-system_3ae628ea-d24e-4adf-92a6-6cc965d6ba0c_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2024-09-16T11:33:25.767995099Z","io.kubernetes.cri-o.HostName":"coredns-7c65d6cfc9-kfklg","io.kubernetes.cri-o.HostNetwork":"false","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/e428b19997ebf355a383157dfa311ad7477ccf66c6feeced920ec710a58e2845/userdata/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.KubeName":"coredns-7c65d6cfc9-kfklg","io.kubernetes.cri-o.Labels":"{\"k8s-app\":\"kube-dns\",\"io.kubernetes.container.name\":\"POD\",\"io.kubern
etes.pod.uid\":\"3ae628ea-d24e-4adf-92a6-6cc965d6ba0c\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"coredns-7c65d6cfc9-kfklg\",\"pod-template-hash\":\"7c65d6cfc9\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-7c65d6cfc9-kfklg_3ae628ea-d24e-4adf-92a6-6cc965d6ba0c/e428b19997ebf355a383157dfa311ad7477ccf66c6feeced920ec710a58e2845.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns-7c65d6cfc9-kfklg\",\"uid\":\"3ae628ea-d24e-4adf-92a6-6cc965d6ba0c\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/62d031f87d90dc9c5c3e0da30f77170b5d9188b27c04bbf9bd9692198dff76b1/merged","io.kubernetes.cri-o.Name":"k8s_coredns-7c65d6cfc9-kfklg_kube-system_3ae628ea-d24e-4adf-92a6-6cc965d6ba0c_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"pid\":1}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_period\":100000,\"cpu_shares\":102,\"memo
ry_limit_in_bytes\":178257920,\"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"false","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/e428b19997ebf355a383157dfa311ad7477ccf66c6feeced920ec710a58e2845/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"e428b19997ebf355a383157dfa311ad7477ccf66c6feeced920ec710a58e2845","io.kubernetes.cri-o.SandboxName":"k8s_coredns-7c65d6cfc9-kfklg_kube-system_3ae628ea-d24e-4adf-92a6-6cc965d6ba0c_0","io.kubernetes.cri-o.SeccompProfilePath":"RuntimeDefault","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/e428b19997ebf355a383157dfa311ad7477ccf66c6feeced920ec710a58e2845/userdata/shm","io.kubernetes.pod.name":"coredns-7c65d6cfc9-kfklg","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"3ae628ea-d24e-4adf-92a6-6cc965d6ba0c","k8s-app":"kube-dns","kubernetes.io/config.seen":"2024-09-16T11:33
:23.955364855Z","kubernetes.io/config.source":"api","pod-template-hash":"7c65d6cfc9"},"owner":"root"}]
	I0916 11:34:10.356409   53522 cri.go:126] list returned 12 containers
	I0916 11:34:10.356425   53522 cri.go:129] container: {ID:0b63c14c2d42d5f5cab24ddd84c1c439c47b24c3a48e1b8c442ff40973000a31 Status:stopped}
	I0916 11:34:10.356439   53522 cri.go:135] skipping {0b63c14c2d42d5f5cab24ddd84c1c439c47b24c3a48e1b8c442ff40973000a31 stopped}: state = "stopped", want "paused"
	I0916 11:34:10.356447   53522 cri.go:129] container: {ID:29abde9ebdac1aacdcdebf98c948c4428971f203c577e6e423fc639b953c65c3 Status:stopped}
	I0916 11:34:10.356458   53522 cri.go:131] skipping 29abde9ebdac1aacdcdebf98c948c4428971f203c577e6e423fc639b953c65c3 - not in ps
	I0916 11:34:10.356464   53522 cri.go:129] container: {ID:43c7fc47494a7e3ce46b06c81587765792a6b0ab831101a69abc5f965055194b Status:stopped}
	I0916 11:34:10.356474   53522 cri.go:131] skipping 43c7fc47494a7e3ce46b06c81587765792a6b0ab831101a69abc5f965055194b - not in ps
	I0916 11:34:10.356478   53522 cri.go:129] container: {ID:4798a782313b3452b367d9d5b20e11c3b429d06db86ea445191382b8ec01100f Status:stopped}
	I0916 11:34:10.356484   53522 cri.go:135] skipping {4798a782313b3452b367d9d5b20e11c3b429d06db86ea445191382b8ec01100f stopped}: state = "stopped", want "paused"
	I0916 11:34:10.356488   53522 cri.go:129] container: {ID:49c77760ba1b5aee58544eeab79b6daf825b90d2a8d7dca8db6f414e450bfb70 Status:stopped}
	I0916 11:34:10.356493   53522 cri.go:135] skipping {49c77760ba1b5aee58544eeab79b6daf825b90d2a8d7dca8db6f414e450bfb70 stopped}: state = "stopped", want "paused"
	I0916 11:34:10.356497   53522 cri.go:129] container: {ID:50d7912d6c96280fb3936535adde32caebc380f3668e6a8938a330a765af9d34 Status:stopped}
	I0916 11:34:10.356502   53522 cri.go:131] skipping 50d7912d6c96280fb3936535adde32caebc380f3668e6a8938a330a765af9d34 - not in ps
	I0916 11:34:10.356508   53522 cri.go:129] container: {ID:5f8e91d7b0eeb422ecddf7c5480d602d455a56944f48218a3bc9442ba80c0011 Status:stopped}
	I0916 11:34:10.356512   53522 cri.go:135] skipping {5f8e91d7b0eeb422ecddf7c5480d602d455a56944f48218a3bc9442ba80c0011 stopped}: state = "stopped", want "paused"
	I0916 11:34:10.356516   53522 cri.go:129] container: {ID:63d0171b5901e4f61bbaf89a371028220168a1360ee0033b1ad6b2fef42a2bba Status:stopped}
	I0916 11:34:10.356522   53522 cri.go:131] skipping 63d0171b5901e4f61bbaf89a371028220168a1360ee0033b1ad6b2fef42a2bba - not in ps
	I0916 11:34:10.356525   53522 cri.go:129] container: {ID:a56269c3c3093b87fb3b2a409d654f2d37ea2b7d1fb8567dc19c41980c986846 Status:stopped}
	I0916 11:34:10.356537   53522 cri.go:135] skipping {a56269c3c3093b87fb3b2a409d654f2d37ea2b7d1fb8567dc19c41980c986846 stopped}: state = "stopped", want "paused"
	I0916 11:34:10.356547   53522 cri.go:129] container: {ID:b182dc24c34926e75a9f85377f1ea3b9b572a1519ffea77b9a1a8d4d7672f8ac Status:stopped}
	I0916 11:34:10.356555   53522 cri.go:131] skipping b182dc24c34926e75a9f85377f1ea3b9b572a1519ffea77b9a1a8d4d7672f8ac - not in ps
	I0916 11:34:10.356560   53522 cri.go:129] container: {ID:c7437b8737ae1ab6e000cfdac446c179030f3ce76c6fcafb3529cb01a8258657 Status:stopped}
	I0916 11:34:10.356568   53522 cri.go:135] skipping {c7437b8737ae1ab6e000cfdac446c179030f3ce76c6fcafb3529cb01a8258657 stopped}: state = "stopped", want "paused"
	I0916 11:34:10.356575   53522 cri.go:129] container: {ID:e428b19997ebf355a383157dfa311ad7477ccf66c6feeced920ec710a58e2845 Status:stopped}
	I0916 11:34:10.356583   53522 cri.go:131] skipping e428b19997ebf355a383157dfa311ad7477ccf66c6feeced920ec710a58e2845 - not in ps
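The cri.go lines above apply a simple selection rule to the twelve listed containers: anything missing from the `crictl ps` output is skipped ("not in ps"), and anything whose state is not "paused" is skipped as well, so nothing is selected for unpausing. A minimal Go sketch of that rule, using illustrative types rather than minikube's actual cri package:

    package main

    // container mirrors the {ID Status} pairs printed by cri.go:129.
    type container struct {
        ID     string
        Status string
    }

    // selectPaused keeps only containers that both appear in the `crictl ps`
    // listing and are currently paused; everything else is skipped, exactly
    // as the "skipping ..." lines above record.
    func selectPaused(all []container, inPs map[string]bool) []string {
        var ids []string
        for _, c := range all {
            if !inPs[c.ID] {
                continue // "skipping <id> - not in ps"
            }
            if c.Status != "paused" {
                continue // state = "stopped", want "paused"
            }
            ids = append(ids, c.ID)
        }
        return ids
    }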
	I0916 11:34:10.356631   53522 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 11:34:10.369552   53522 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0916 11:34:10.369575   53522 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0916 11:34:10.369634   53522 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0916 11:34:10.381166   53522 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0916 11:34:10.382018   53522 kubeconfig.go:125] found "pause-902210" server: "https://192.168.39.244:8443"
	I0916 11:34:10.383038   53522 kapi.go:59] client config for pause-902210: &rest.Config{Host:"https://192.168.39.244:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3851/.minikube/profiles/pause-902210/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3851/.minikube/profiles/pause-902210/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]strin
g(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 11:34:10.383587   53522 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0916 11:34:10.394656   53522 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.244
	I0916 11:34:10.394696   53522 kubeadm.go:1160] stopping kube-system containers ...
	I0916 11:34:10.394709   53522 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0916 11:34:10.394767   53522 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 11:34:10.440154   53522 cri.go:89] found id: "4798a782313b3452b367d9d5b20e11c3b429d06db86ea445191382b8ec01100f"
	I0916 11:34:10.440185   53522 cri.go:89] found id: "49c77760ba1b5aee58544eeab79b6daf825b90d2a8d7dca8db6f414e450bfb70"
	I0916 11:34:10.440192   53522 cri.go:89] found id: "a56269c3c3093b87fb3b2a409d654f2d37ea2b7d1fb8567dc19c41980c986846"
	I0916 11:34:10.440206   53522 cri.go:89] found id: "0b63c14c2d42d5f5cab24ddd84c1c439c47b24c3a48e1b8c442ff40973000a31"
	I0916 11:34:10.440210   53522 cri.go:89] found id: "c7437b8737ae1ab6e000cfdac446c179030f3ce76c6fcafb3529cb01a8258657"
	I0916 11:34:10.440215   53522 cri.go:89] found id: "5f8e91d7b0eeb422ecddf7c5480d602d455a56944f48218a3bc9442ba80c0011"
	I0916 11:34:10.440218   53522 cri.go:89] found id: ""
	I0916 11:34:10.440224   53522 cri.go:252] Stopping containers: [4798a782313b3452b367d9d5b20e11c3b429d06db86ea445191382b8ec01100f 49c77760ba1b5aee58544eeab79b6daf825b90d2a8d7dca8db6f414e450bfb70 a56269c3c3093b87fb3b2a409d654f2d37ea2b7d1fb8567dc19c41980c986846 0b63c14c2d42d5f5cab24ddd84c1c439c47b24c3a48e1b8c442ff40973000a31 c7437b8737ae1ab6e000cfdac446c179030f3ce76c6fcafb3529cb01a8258657 5f8e91d7b0eeb422ecddf7c5480d602d455a56944f48218a3bc9442ba80c0011]
	I0916 11:34:10.440286   53522 ssh_runner.go:195] Run: which crictl
	I0916 11:34:10.444426   53522 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 4798a782313b3452b367d9d5b20e11c3b429d06db86ea445191382b8ec01100f 49c77760ba1b5aee58544eeab79b6daf825b90d2a8d7dca8db6f414e450bfb70 a56269c3c3093b87fb3b2a409d654f2d37ea2b7d1fb8567dc19c41980c986846 0b63c14c2d42d5f5cab24ddd84c1c439c47b24c3a48e1b8c442ff40973000a31 c7437b8737ae1ab6e000cfdac446c179030f3ce76c6fcafb3529cb01a8258657 5f8e91d7b0eeb422ecddf7c5480d602d455a56944f48218a3bc9442ba80c0011
	I0916 11:34:10.521942   53522 ssh_runner.go:195] Run: sudo systemctl stop kubelet
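Those two ssh_runner calls shut down the existing control plane before reconfiguration: every kube-system container found above is handed to `crictl stop --timeout=10`, and then the kubelet service itself is stopped. A rough local equivalent of the first step (minikube actually runs it through its SSH runner on the guest, so this is only an approximation):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // stopContainers shells out to crictl, giving each container ten seconds
    // to exit cleanly before it is killed, mirroring the command logged above.
    func stopContainers(ids []string) error {
        args := append([]string{"crictl", "stop", "--timeout=10"}, ids...)
        out, err := exec.Command("sudo", args...).CombinedOutput()
        if err != nil {
            return fmt.Errorf("crictl stop: %v: %s", err, out)
        }
        return nil
    }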
	I0916 11:34:10.560308   53522 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 11:34:10.574967   53522 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5651 Sep 16 11:33 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5654 Sep 16 11:33 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Sep 16 11:33 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5602 Sep 16 11:33 /etc/kubernetes/scheduler.conf
	
	I0916 11:34:10.575040   53522 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 11:34:10.588489   53522 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 11:34:10.601876   53522 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 11:34:10.614737   53522 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0916 11:34:10.614796   53522 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 11:34:10.625665   53522 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 11:34:10.636155   53522 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0916 11:34:10.636226   53522 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 11:34:10.648146   53522 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
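The sequence from admin.conf down to the final cp is the stale-kubeconfig cleanup: each file under /etc/kubernetes is grepped for the https://control-plane.minikube.internal:8443 endpoint, and any file that does not reference it (grep exits 1) is deleted so the kubeadm init phases that follow regenerate it. A condensed sketch of that check-and-remove loop, with paths as in the log and command execution simplified to a local exec:

    package main

    import "os/exec"

    const endpoint = "https://control-plane.minikube.internal:8443"

    // pruneStaleConfigs removes any kubeconfig that no longer points at the
    // expected control-plane endpoint; kubeadm recreates it afterwards.
    func pruneStaleConfigs() error {
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            if exec.Command("sudo", "grep", endpoint, f).Run() != nil {
                // grep exited non-zero: the endpoint is absent, drop the file.
                if err := exec.Command("sudo", "rm", "-f", f).Run(); err != nil {
                    return err
                }
            }
        }
        return nil
    }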
	I0916 11:34:10.660044   53522 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0916 11:34:10.725749   53522 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0916 11:34:11.538136   53522 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0916 11:34:11.795149   53522 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0916 11:34:11.873678   53522 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0916 11:34:11.962477   53522 api_server.go:52] waiting for apiserver process to appear ...
	I0916 11:34:11.962602   53522 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 11:34:12.463684   53522 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 11:34:12.963386   53522 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 11:34:13.012912   53522 api_server.go:72] duration metric: took 1.050433843s to wait for apiserver process to appear ...
	I0916 11:34:13.012955   53522 api_server.go:88] waiting for apiserver healthz status ...
	I0916 11:34:13.012979   53522 api_server.go:253] Checking apiserver healthz at https://192.168.39.244:8443/healthz ...
	I0916 11:34:13.013524   53522 api_server.go:269] stopped: https://192.168.39.244:8443/healthz: Get "https://192.168.39.244:8443/healthz": dial tcp 192.168.39.244:8443: connect: connection refused
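At this point the apiserver process exists but its socket is not yet accepting connections, so the healthz probe fails with "connection refused" and is retried on the next tick. A stripped-down version of such a poll loop (the real minikube client verifies the server against the cluster CA; this sketch skips TLS verification purely to stay short):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it answers
    // 200 "ok" or the deadline passes.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 2 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
            },
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK && string(body) == "ok" {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver did not become healthy within %s", timeout)
    }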
	I0916 11:34:13.022303   53296 system_pods.go:59] 1 kube-system pods found
	I0916 11:34:13.022334   53296 system_pods.go:61] "etcd-kubernetes-upgrade-045794" [ebe674f3-fc49-44e8-8f3e-7787238d965f] Pending
	I0916 11:34:13.022345   53296 retry.go:31] will retry after 811.510302ms: only 1 pod(s) have shown up
	I0916 11:34:13.849766   53296 system_pods.go:59] 1 kube-system pods found
	I0916 11:34:13.849865   53296 system_pods.go:61] "etcd-kubernetes-upgrade-045794" [ebe674f3-fc49-44e8-8f3e-7787238d965f] Pending
	I0916 11:34:13.849894   53296 retry.go:31] will retry after 934.184088ms: only 1 pod(s) have shown up
	I0916 11:34:14.789573   53296 system_pods.go:59] 3 kube-system pods found
	I0916 11:34:14.789614   53296 system_pods.go:61] "etcd-kubernetes-upgrade-045794" [ebe674f3-fc49-44e8-8f3e-7787238d965f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0916 11:34:14.789627   53296 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-045794" [fd8f7306-a4db-4468-8e22-2be83634a8c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0916 11:34:14.789636   53296 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-045794" [3111576c-af64-4201-8b8c-2742e43fb8b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0916 11:34:14.789646   53296 system_pods.go:74] duration metric: took 4.705580511s to wait for pod list to return data ...
	I0916 11:34:14.789656   53296 node_conditions.go:102] verifying NodePressure condition ...
	I0916 11:34:14.793823   53296 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0916 11:34:14.793864   53296 node_conditions.go:123] node cpu capacity is 2
	I0916 11:34:14.793877   53296 node_conditions.go:105] duration metric: took 4.212152ms to run NodePressure ...
	I0916 11:34:14.793900   53296 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0916 11:34:15.070997   53296 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 11:34:15.083048   53296 ops.go:34] apiserver oom_adj: -16
	I0916 11:34:15.083078   53296 kubeadm.go:597] duration metric: took 12.586964483s to restartPrimaryControlPlane
	I0916 11:34:15.083091   53296 kubeadm.go:394] duration metric: took 12.645178511s to StartCluster
	I0916 11:34:15.083113   53296 settings.go:142] acquiring lock: {Name:mka3093e7066e81538e0a2685a28fdafde8d951b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:34:15.083212   53296 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3851/kubeconfig
	I0916 11:34:15.084341   53296 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3851/kubeconfig: {Name:mkd6460249badead51f07eabf604da45528f6a24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:34:15.084660   53296 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.174 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 11:34:15.084722   53296 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 11:34:15.084806   53296 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-045794"
	I0916 11:34:15.084823   53296 addons.go:234] Setting addon storage-provisioner=true in "kubernetes-upgrade-045794"
	I0916 11:34:15.084845   53296 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-045794"
	I0916 11:34:15.084864   53296 host.go:66] Checking if "kubernetes-upgrade-045794" exists ...
	I0916 11:34:15.084881   53296 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-045794"
	I0916 11:34:15.084900   53296 config.go:182] Loaded profile config "kubernetes-upgrade-045794": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:34:15.085302   53296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 11:34:15.085349   53296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 11:34:15.085356   53296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 11:34:15.085380   53296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 11:34:15.090622   53296 out.go:177] * Verifying Kubernetes components...
	I0916 11:34:15.091987   53296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:34:15.104799   53296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46091
	I0916 11:34:15.105483   53296 main.go:141] libmachine: () Calling .GetVersion
	I0916 11:34:15.106100   53296 main.go:141] libmachine: Using API Version  1
	I0916 11:34:15.106127   53296 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 11:34:15.106200   53296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45933
	I0916 11:34:15.106688   53296 main.go:141] libmachine: () Calling .GetMachineName
	I0916 11:34:15.106760   53296 main.go:141] libmachine: () Calling .GetVersion
	I0916 11:34:15.107407   53296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 11:34:15.107453   53296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 11:34:15.107783   53296 main.go:141] libmachine: Using API Version  1
	I0916 11:34:15.107800   53296 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 11:34:15.108178   53296 main.go:141] libmachine: () Calling .GetMachineName
	I0916 11:34:15.108337   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetState
	I0916 11:34:15.111251   53296 kapi.go:59] client config for kubernetes-upgrade-045794: &rest.Config{Host:"https://192.168.72.174:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3851/.minikube/profiles/kubernetes-upgrade-045794/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3851/.minikube/profiles/kubernetes-upgrade-045794/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3851/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil),
CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 11:34:15.111581   53296 addons.go:234] Setting addon default-storageclass=true in "kubernetes-upgrade-045794"
	I0916 11:34:15.111618   53296 host.go:66] Checking if "kubernetes-upgrade-045794" exists ...
	I0916 11:34:15.111986   53296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 11:34:15.112023   53296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 11:34:15.130042   53296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34897
	I0916 11:34:15.130511   53296 main.go:141] libmachine: () Calling .GetVersion
	I0916 11:34:15.131017   53296 main.go:141] libmachine: Using API Version  1
	I0916 11:34:15.131037   53296 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 11:34:15.131369   53296 main.go:141] libmachine: () Calling .GetMachineName
	I0916 11:34:15.132054   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetState
	I0916 11:34:15.133926   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .DriverName
	I0916 11:34:15.135860   53296 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:34:15.137213   53296 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:34:15.137238   53296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 11:34:15.137257   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHHostname
	I0916 11:34:15.140463   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:34:15.140483   53296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42671
	I0916 11:34:15.141229   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c2:93", ip: ""} in network mk-kubernetes-upgrade-045794: {Iface:virbr4 ExpiryTime:2024-09-16 12:33:49 +0000 UTC Type:0 Mac:52:54:00:45:c2:93 Iaid: IPaddr:192.168.72.174 Prefix:24 Hostname:kubernetes-upgrade-045794 Clientid:01:52:54:00:45:c2:93}
	I0916 11:34:15.141239   53296 main.go:141] libmachine: () Calling .GetVersion
	I0916 11:34:15.141257   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined IP address 192.168.72.174 and MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:34:15.141301   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHPort
	I0916 11:34:15.141689   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHKeyPath
	I0916 11:34:15.141785   53296 main.go:141] libmachine: Using API Version  1
	I0916 11:34:15.141802   53296 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 11:34:15.141805   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHUsername
	I0916 11:34:15.141908   53296 sshutil.go:53] new ssh client: &{IP:192.168.72.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/kubernetes-upgrade-045794/id_rsa Username:docker}
	I0916 11:34:15.142171   53296 main.go:141] libmachine: () Calling .GetMachineName
	I0916 11:34:15.142669   53296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 11:34:15.142698   53296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 11:34:15.164142   53296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33203
	I0916 11:34:15.164682   53296 main.go:141] libmachine: () Calling .GetVersion
	I0916 11:34:15.165472   53296 main.go:141] libmachine: Using API Version  1
	I0916 11:34:15.165496   53296 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 11:34:15.165915   53296 main.go:141] libmachine: () Calling .GetMachineName
	I0916 11:34:15.166136   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetState
	I0916 11:34:15.168173   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .DriverName
	I0916 11:34:15.168440   53296 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 11:34:15.168459   53296 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 11:34:15.168479   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHHostname
	I0916 11:34:15.172119   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:34:15.172559   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c2:93", ip: ""} in network mk-kubernetes-upgrade-045794: {Iface:virbr4 ExpiryTime:2024-09-16 12:33:49 +0000 UTC Type:0 Mac:52:54:00:45:c2:93 Iaid: IPaddr:192.168.72.174 Prefix:24 Hostname:kubernetes-upgrade-045794 Clientid:01:52:54:00:45:c2:93}
	I0916 11:34:15.172580   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | domain kubernetes-upgrade-045794 has defined IP address 192.168.72.174 and MAC address 52:54:00:45:c2:93 in network mk-kubernetes-upgrade-045794
	I0916 11:34:15.172957   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHPort
	I0916 11:34:15.173403   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHKeyPath
	I0916 11:34:15.173785   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .GetSSHUsername
	I0916 11:34:15.174151   53296 sshutil.go:53] new ssh client: &{IP:192.168.72.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/kubernetes-upgrade-045794/id_rsa Username:docker}
	I0916 11:34:15.296188   53296 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:34:15.323259   53296 api_server.go:52] waiting for apiserver process to appear ...
	I0916 11:34:15.323354   53296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 11:34:15.347163   53296 api_server.go:72] duration metric: took 262.466661ms to wait for apiserver process to appear ...
	I0916 11:34:15.347192   53296 api_server.go:88] waiting for apiserver healthz status ...
	I0916 11:34:15.347215   53296 api_server.go:253] Checking apiserver healthz at https://192.168.72.174:8443/healthz ...
	I0916 11:34:15.358748   53296 api_server.go:279] https://192.168.72.174:8443/healthz returned 200:
	ok
	I0916 11:34:15.360320   53296 api_server.go:141] control plane version: v1.31.1
	I0916 11:34:15.360347   53296 api_server.go:131] duration metric: took 13.147188ms to wait for apiserver health ...
	I0916 11:34:15.360370   53296 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 11:34:15.372285   53296 system_pods.go:59] 3 kube-system pods found
	I0916 11:34:15.372329   53296 system_pods.go:61] "etcd-kubernetes-upgrade-045794" [ebe674f3-fc49-44e8-8f3e-7787238d965f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0916 11:34:15.372347   53296 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-045794" [fd8f7306-a4db-4468-8e22-2be83634a8c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0916 11:34:15.372357   53296 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-045794" [3111576c-af64-4201-8b8c-2742e43fb8b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0916 11:34:15.372365   53296 system_pods.go:74] duration metric: took 11.987835ms to wait for pod list to return data ...
	I0916 11:34:15.372378   53296 kubeadm.go:582] duration metric: took 287.684346ms to wait for: map[apiserver:true system_pods:true]
	I0916 11:34:15.372399   53296 node_conditions.go:102] verifying NodePressure condition ...
	I0916 11:34:15.377030   53296 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0916 11:34:15.377059   53296 node_conditions.go:123] node cpu capacity is 2
	I0916 11:34:15.377072   53296 node_conditions.go:105] duration metric: took 4.666864ms to run NodePressure ...
	I0916 11:34:15.377085   53296 start.go:241] waiting for startup goroutines ...
	I0916 11:34:15.381645   53296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 11:34:15.472796   53296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:34:15.889355   53296 main.go:141] libmachine: Making call to close driver server
	I0916 11:34:15.889381   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .Close
	I0916 11:34:15.889749   53296 main.go:141] libmachine: Successfully made call to close driver server
	I0916 11:34:15.889769   53296 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 11:34:15.889778   53296 main.go:141] libmachine: Making call to close driver server
	I0916 11:34:15.889785   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .Close
	I0916 11:34:15.890125   53296 main.go:141] libmachine: Successfully made call to close driver server
	I0916 11:34:15.890142   53296 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 11:34:15.904404   53296 main.go:141] libmachine: Making call to close driver server
	I0916 11:34:15.904432   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .Close
	I0916 11:34:15.904751   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | Closing plugin on server side
	I0916 11:34:15.904762   53296 main.go:141] libmachine: Successfully made call to close driver server
	I0916 11:34:15.904775   53296 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 11:34:16.158659   53296 main.go:141] libmachine: Making call to close driver server
	I0916 11:34:16.158690   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .Close
	I0916 11:34:16.159085   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) DBG | Closing plugin on server side
	I0916 11:34:16.160322   53296 main.go:141] libmachine: Successfully made call to close driver server
	I0916 11:34:16.160338   53296 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 11:34:16.160344   53296 main.go:141] libmachine: Making call to close driver server
	I0916 11:34:16.160352   53296 main.go:141] libmachine: (kubernetes-upgrade-045794) Calling .Close
	I0916 11:34:16.160574   53296 main.go:141] libmachine: Successfully made call to close driver server
	I0916 11:34:16.160589   53296 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 11:34:16.163379   53296 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0916 11:34:16.165073   53296 addons.go:510] duration metric: took 1.080347227s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0916 11:34:16.165118   53296 start.go:246] waiting for cluster config update ...
	I0916 11:34:16.165153   53296 start.go:255] writing updated cluster config ...
	I0916 11:34:16.165455   53296 ssh_runner.go:195] Run: rm -f paused
	I0916 11:34:16.177335   53296 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-045794" cluster and "default" namespace by default
	E0916 11:34:16.178833   53296 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
	I0916 11:34:11.820015   53156 pod_ready.go:98] pod "coredns-7c65d6cfc9-s8lq9" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 11:34:11 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 11:33:59 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 11:33:59 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 11:33:59 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 11:33:59 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.61.144 HostIPs:[{IP:192.168.61
.144}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-16 11:33:59 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-16 11:34:01 +0000 UTC,FinishedAt:2024-09-16 11:34:11 +0000 UTC,ContainerID:cri-o://566c6734061007d6e128c453ab147fea8a0429900f63988cc3782228623ac28b,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://566c6734061007d6e128c453ab147fea8a0429900f63988cc3782228623ac28b Started:0xc0020bfb70 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0020af710} {Name:kube-api-access-zg9rh MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0020af720}] User:ni
l AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0916 11:34:11.820052   53156 pod_ready.go:82] duration metric: took 11.007943439s for pod "coredns-7c65d6cfc9-s8lq9" in "kube-system" namespace to be "Ready" ...
	E0916 11:34:11.820067   53156 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-s8lq9" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 11:34:11 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 11:33:59 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 11:33:59 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 11:33:59 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 11:33:59 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.6
1.144 HostIPs:[{IP:192.168.61.144}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-16 11:33:59 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-16 11:34:01 +0000 UTC,FinishedAt:2024-09-16 11:34:11 +0000 UTC,ContainerID:cri-o://566c6734061007d6e128c453ab147fea8a0429900f63988cc3782228623ac28b,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://566c6734061007d6e128c453ab147fea8a0429900f63988cc3782228623ac28b Started:0xc0020bfb70 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0020af710} {Name:kube-api-access-zg9rh MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveRe
adOnly:0xc0020af720}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0916 11:34:11.820082   53156 pod_ready.go:79] waiting up to 15m0s for pod "coredns-7c65d6cfc9-w2l57" in "kube-system" namespace to be "Ready" ...
	I0916 11:34:13.852579   53156 pod_ready.go:103] pod "coredns-7c65d6cfc9-w2l57" in "kube-system" namespace has status "Ready":"False"
	I0916 11:34:16.328253   53156 pod_ready.go:103] pod "coredns-7c65d6cfc9-w2l57" in "kube-system" namespace has status "Ready":"False"
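The pod_ready lines track the replacement coredns pod: the old coredns-7c65d6cfc9-s8lq9 finished with phase Succeeded, which can never become Ready (hence the warning at 11:34:11), and the new coredns-7c65d6cfc9-w2l57 is polled until its Ready condition turns True. A small client-go sketch of that readiness check (error wording is illustrative, not minikube's):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // podReady reports whether the named pod has condition Ready=True.
    // Pods in a terminal phase are rejected outright, matching the
    // "status phase Succeeded (skipping!)" handling in the log.
    func podReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        if pod.Status.Phase == corev1.PodSucceeded || pod.Status.Phase == corev1.PodFailed {
            return false, fmt.Errorf("pod %s/%s is in terminal phase %s", ns, name, pod.Status.Phase)
        }
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }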
	
	
	==> CRI-O <==
	Sep 16 11:34:17 kubernetes-upgrade-045794 crio[632]: time="2024-09-16 11:34:17.078743230Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726486457078709097,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:209293,},InodesUsed:&UInt64Value{Value:105,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=54ebf3e6-bc8a-4a2a-a279-a4c46a3f71e9 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 11:34:17 kubernetes-upgrade-045794 crio[632]: time="2024-09-16 11:34:17.079654504Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aee509f0-18c9-4bc9-b041-89d1bb040091 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:34:17 kubernetes-upgrade-045794 crio[632]: time="2024-09-16 11:34:17.079760308Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aee509f0-18c9-4bc9-b041-89d1bb040091 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:34:17 kubernetes-upgrade-045794 crio[632]: time="2024-09-16 11:34:17.079926774Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6ebc6e43d89d1ca2a5eef2587968282ad7c06ad385aa07017556fe2c68a64a5b,PodSandboxId:832f87631edd010e4939360113febb797f8b2c915eac759aba821b0546121834,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726486445463348270,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-045794,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7fe2c83933bece863f2535348c8dd1a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.co
ntainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e81953601a1706fb6d1cf690c9efea999703d69eed48ef11ed27757e62d5208e,PodSandboxId:ee3418c97b8820ec529f7c69175dcf85642af6eaa82a409df6ae6f8e6f873efd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726486445394697088,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-045794,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 683148bda2bae2ad9dbc71e0e098ba91,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.contain
er.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35f9e1aa5973677d692e76b91fb7f8ca02884f21532f05bea205d2f2f9f52c4a,PodSandboxId:637810c063477794b5b884a1fdef3d8bac694ab264da5b39d629d7525869c4cf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726486445357864708,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-045794,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6ed5e432ad25e35d50ba3f14f19f84f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.co
ntainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34628545a259a45227f5368b6d1eea021e0f3c04f2f63c7593b30cfbbdc191a0,PodSandboxId:a47db167173894f3124e687603ddf836bfc51231cf4f01c6939af0c6707351d1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726486445371882450,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-045794,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a5ca17a5fc7533eae7d52084e2da7aa,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=aee509f0-18c9-4bc9-b041-89d1bb040091 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:34:17 kubernetes-upgrade-045794 crio[632]: time="2024-09-16 11:34:17.124444424Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e8eeb903-473c-4984-9b81-29577f6bc2b8 name=/runtime.v1.RuntimeService/Version
	Sep 16 11:34:17 kubernetes-upgrade-045794 crio[632]: time="2024-09-16 11:34:17.124541150Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e8eeb903-473c-4984-9b81-29577f6bc2b8 name=/runtime.v1.RuntimeService/Version
	Sep 16 11:34:17 kubernetes-upgrade-045794 crio[632]: time="2024-09-16 11:34:17.125876972Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=63503ab9-8196-44a6-9f3c-5f9a0332b7de name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 11:34:17 kubernetes-upgrade-045794 crio[632]: time="2024-09-16 11:34:17.126733837Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726486457126704147,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:209293,},InodesUsed:&UInt64Value{Value:105,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=63503ab9-8196-44a6-9f3c-5f9a0332b7de name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 11:34:17 kubernetes-upgrade-045794 crio[632]: time="2024-09-16 11:34:17.127624152Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=28a1d1bd-35dd-49ce-be0a-6cc0c8faa5af name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:34:17 kubernetes-upgrade-045794 crio[632]: time="2024-09-16 11:34:17.127679142Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=28a1d1bd-35dd-49ce-be0a-6cc0c8faa5af name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:34:17 kubernetes-upgrade-045794 crio[632]: time="2024-09-16 11:34:17.127800928Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6ebc6e43d89d1ca2a5eef2587968282ad7c06ad385aa07017556fe2c68a64a5b,PodSandboxId:832f87631edd010e4939360113febb797f8b2c915eac759aba821b0546121834,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726486445463348270,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-045794,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7fe2c83933bece863f2535348c8dd1a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.co
ntainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e81953601a1706fb6d1cf690c9efea999703d69eed48ef11ed27757e62d5208e,PodSandboxId:ee3418c97b8820ec529f7c69175dcf85642af6eaa82a409df6ae6f8e6f873efd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726486445394697088,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-045794,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 683148bda2bae2ad9dbc71e0e098ba91,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.contain
er.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35f9e1aa5973677d692e76b91fb7f8ca02884f21532f05bea205d2f2f9f52c4a,PodSandboxId:637810c063477794b5b884a1fdef3d8bac694ab264da5b39d629d7525869c4cf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726486445357864708,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-045794,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6ed5e432ad25e35d50ba3f14f19f84f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.co
ntainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34628545a259a45227f5368b6d1eea021e0f3c04f2f63c7593b30cfbbdc191a0,PodSandboxId:a47db167173894f3124e687603ddf836bfc51231cf4f01c6939af0c6707351d1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726486445371882450,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-045794,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a5ca17a5fc7533eae7d52084e2da7aa,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=28a1d1bd-35dd-49ce-be0a-6cc0c8faa5af name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:34:17 kubernetes-upgrade-045794 crio[632]: time="2024-09-16 11:34:17.173307540Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d0fe7269-bbd5-4e98-a5a4-82fa1f37b109 name=/runtime.v1.RuntimeService/Version
	Sep 16 11:34:17 kubernetes-upgrade-045794 crio[632]: time="2024-09-16 11:34:17.173377430Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d0fe7269-bbd5-4e98-a5a4-82fa1f37b109 name=/runtime.v1.RuntimeService/Version
	Sep 16 11:34:17 kubernetes-upgrade-045794 crio[632]: time="2024-09-16 11:34:17.175156492Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5193e6fb-dbb9-41a2-a327-b1443e1b03ae name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 11:34:17 kubernetes-upgrade-045794 crio[632]: time="2024-09-16 11:34:17.175739307Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726486457175716712,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:209293,},InodesUsed:&UInt64Value{Value:105,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5193e6fb-dbb9-41a2-a327-b1443e1b03ae name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 11:34:17 kubernetes-upgrade-045794 crio[632]: time="2024-09-16 11:34:17.176402952Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c5fd6ce8-fad1-4ac8-968c-04b69669d37d name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:34:17 kubernetes-upgrade-045794 crio[632]: time="2024-09-16 11:34:17.176562726Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c5fd6ce8-fad1-4ac8-968c-04b69669d37d name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:34:17 kubernetes-upgrade-045794 crio[632]: time="2024-09-16 11:34:17.176687186Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6ebc6e43d89d1ca2a5eef2587968282ad7c06ad385aa07017556fe2c68a64a5b,PodSandboxId:832f87631edd010e4939360113febb797f8b2c915eac759aba821b0546121834,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726486445463348270,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-045794,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7fe2c83933bece863f2535348c8dd1a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.co
ntainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e81953601a1706fb6d1cf690c9efea999703d69eed48ef11ed27757e62d5208e,PodSandboxId:ee3418c97b8820ec529f7c69175dcf85642af6eaa82a409df6ae6f8e6f873efd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726486445394697088,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-045794,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 683148bda2bae2ad9dbc71e0e098ba91,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.contain
er.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35f9e1aa5973677d692e76b91fb7f8ca02884f21532f05bea205d2f2f9f52c4a,PodSandboxId:637810c063477794b5b884a1fdef3d8bac694ab264da5b39d629d7525869c4cf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726486445357864708,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-045794,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6ed5e432ad25e35d50ba3f14f19f84f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.co
ntainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34628545a259a45227f5368b6d1eea021e0f3c04f2f63c7593b30cfbbdc191a0,PodSandboxId:a47db167173894f3124e687603ddf836bfc51231cf4f01c6939af0c6707351d1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726486445371882450,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-045794,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a5ca17a5fc7533eae7d52084e2da7aa,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c5fd6ce8-fad1-4ac8-968c-04b69669d37d name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:34:17 kubernetes-upgrade-045794 crio[632]: time="2024-09-16 11:34:17.216919011Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0568afe2-925a-44e9-a6e2-224236fca647 name=/runtime.v1.RuntimeService/Version
	Sep 16 11:34:17 kubernetes-upgrade-045794 crio[632]: time="2024-09-16 11:34:17.217018796Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0568afe2-925a-44e9-a6e2-224236fca647 name=/runtime.v1.RuntimeService/Version
	Sep 16 11:34:17 kubernetes-upgrade-045794 crio[632]: time="2024-09-16 11:34:17.218179104Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6ef8422a-6d4b-4c8c-bbe8-e9ae42a55f55 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 11:34:17 kubernetes-upgrade-045794 crio[632]: time="2024-09-16 11:34:17.218821865Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726486457218798148,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:209293,},InodesUsed:&UInt64Value{Value:105,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6ef8422a-6d4b-4c8c-bbe8-e9ae42a55f55 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 11:34:17 kubernetes-upgrade-045794 crio[632]: time="2024-09-16 11:34:17.219455972Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fd27b3d7-c0a6-4128-b046-8b00d1b8c69d name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:34:17 kubernetes-upgrade-045794 crio[632]: time="2024-09-16 11:34:17.219555894Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fd27b3d7-c0a6-4128-b046-8b00d1b8c69d name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 11:34:17 kubernetes-upgrade-045794 crio[632]: time="2024-09-16 11:34:17.219656868Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6ebc6e43d89d1ca2a5eef2587968282ad7c06ad385aa07017556fe2c68a64a5b,PodSandboxId:832f87631edd010e4939360113febb797f8b2c915eac759aba821b0546121834,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726486445463348270,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-045794,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7fe2c83933bece863f2535348c8dd1a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.co
ntainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e81953601a1706fb6d1cf690c9efea999703d69eed48ef11ed27757e62d5208e,PodSandboxId:ee3418c97b8820ec529f7c69175dcf85642af6eaa82a409df6ae6f8e6f873efd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726486445394697088,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-045794,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 683148bda2bae2ad9dbc71e0e098ba91,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.contain
er.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35f9e1aa5973677d692e76b91fb7f8ca02884f21532f05bea205d2f2f9f52c4a,PodSandboxId:637810c063477794b5b884a1fdef3d8bac694ab264da5b39d629d7525869c4cf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726486445357864708,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-045794,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6ed5e432ad25e35d50ba3f14f19f84f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.co
ntainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34628545a259a45227f5368b6d1eea021e0f3c04f2f63c7593b30cfbbdc191a0,PodSandboxId:a47db167173894f3124e687603ddf836bfc51231cf4f01c6939af0c6707351d1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726486445371882450,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-045794,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a5ca17a5fc7533eae7d52084e2da7aa,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fd27b3d7-c0a6-4128-b046-8b00d1b8c69d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6ebc6e43d89d1       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   11 seconds ago      Running             kube-controller-manager   0                   832f87631edd0       kube-controller-manager-kubernetes-upgrade-045794
	e81953601a170       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   11 seconds ago      Running             kube-apiserver            0                   ee3418c97b882       kube-apiserver-kubernetes-upgrade-045794
	34628545a259a       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   11 seconds ago      Running             kube-scheduler            0                   a47db16717389       kube-scheduler-kubernetes-upgrade-045794
	35f9e1aa59736       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   11 seconds ago      Running             etcd                      0                   637810c063477       etcd-kubernetes-upgrade-045794
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-045794
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-045794
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 11:34:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:              Failed to get lease: leases.coordination.k8s.io "kubernetes-upgrade-045794" not found
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 11:34:10 +0000   Mon, 16 Sep 2024 11:34:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 11:34:10 +0000   Mon, 16 Sep 2024 11:34:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 11:34:10 +0000   Mon, 16 Sep 2024 11:34:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 11:34:10 +0000   Mon, 16 Sep 2024 11:34:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.174
	  Hostname:    kubernetes-upgrade-045794
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e9f3db7655254e689344916b0df96bb0
	  System UUID:                e9f3db76-5525-4e68-9344-916b0df96bb0
	  Boot ID:                    ed5da403-1d74-48fd-ba7e-5d93fb723204
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-5x5pf                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     2s
	  kube-system                 coredns-7c65d6cfc9-qscdh                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     2s
	  kube-system                 etcd-kubernetes-upgrade-045794                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8s
	  kube-system                 kube-apiserver-kubernetes-upgrade-045794             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-045794    200m (10%)    0 (0%)      0 (0%)           0 (0%)         1s
	  kube-system                 kube-proxy-9h744                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kube-system                 kube-scheduler-kubernetes-upgrade-045794             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  NodeHasSufficientMemory  13s (x8 over 13s)  kubelet          Node kubernetes-upgrade-045794 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13s (x8 over 13s)  kubelet          Node kubernetes-upgrade-045794 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13s (x7 over 13s)  kubelet          Node kubernetes-upgrade-045794 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3s                 node-controller  Node kubernetes-upgrade-045794 event: Registered Node kubernetes-upgrade-045794 in Controller
	
	
	==> dmesg <==
	[Sep16 11:33] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.055001] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.050251] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.165413] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.679606] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.636175] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.405565] systemd-fstab-generator[554]: Ignoring "noauto" option for root device
	[  +0.063913] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.068651] systemd-fstab-generator[566]: Ignoring "noauto" option for root device
	[  +0.203647] systemd-fstab-generator[580]: Ignoring "noauto" option for root device
	[  +0.165900] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.295525] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[Sep16 11:34] systemd-fstab-generator[715]: Ignoring "noauto" option for root device
	[  +0.071684] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.365220] systemd-fstab-generator[835]: Ignoring "noauto" option for root device
	[ +10.947631] systemd-fstab-generator[1226]: Ignoring "noauto" option for root device
	[  +0.102368] kauditd_printk_skb: 97 callbacks suppressed
	
	
	==> etcd [35f9e1aa5973677d692e76b91fb7f8ca02884f21532f05bea205d2f2f9f52c4a] <==
	{"level":"info","ts":"2024-09-16T11:34:05.784549Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T11:34:05.785686Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"6939dc92cf6d5539","initial-advertise-peer-urls":["https://192.168.72.174:2380"],"listen-peer-urls":["https://192.168.72.174:2380"],"advertise-client-urls":["https://192.168.72.174:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.174:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T11:34:05.787316Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T11:34:05.788309Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.72.174:2380"}
	{"level":"info","ts":"2024-09-16T11:34:05.788326Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.72.174:2380"}
	{"level":"info","ts":"2024-09-16T11:34:06.039262Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6939dc92cf6d5539 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-16T11:34:06.039331Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6939dc92cf6d5539 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-16T11:34:06.039351Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6939dc92cf6d5539 received MsgPreVoteResp from 6939dc92cf6d5539 at term 1"}
	{"level":"info","ts":"2024-09-16T11:34:06.039365Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6939dc92cf6d5539 became candidate at term 2"}
	{"level":"info","ts":"2024-09-16T11:34:06.039374Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6939dc92cf6d5539 received MsgVoteResp from 6939dc92cf6d5539 at term 2"}
	{"level":"info","ts":"2024-09-16T11:34:06.039386Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6939dc92cf6d5539 became leader at term 2"}
	{"level":"info","ts":"2024-09-16T11:34:06.039396Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6939dc92cf6d5539 elected leader 6939dc92cf6d5539 at term 2"}
	{"level":"info","ts":"2024-09-16T11:34:06.042383Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:34:06.044444Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"6939dc92cf6d5539","local-member-attributes":"{Name:kubernetes-upgrade-045794 ClientURLs:[https://192.168.72.174:2379]}","request-path":"/0/members/6939dc92cf6d5539/attributes","cluster-id":"62a57965adf09bec","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T11:34:06.044489Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T11:34:06.044790Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T11:34:06.045655Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:34:06.053883Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.174:2379"}
	{"level":"info","ts":"2024-09-16T11:34:06.060635Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:34:06.065894Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T11:34:06.066020Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"62a57965adf09bec","local-member-id":"6939dc92cf6d5539","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:34:06.077217Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:34:06.077273Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:34:06.076126Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T11:34:06.077290Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 11:34:17 up 0 min,  0 users,  load average: 1.90, 0.44, 0.14
	Linux kubernetes-upgrade-045794 5.10.207 #1 SMP Sun Sep 15 20:39:46 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [e81953601a1706fb6d1cf690c9efea999703d69eed48ef11ed27757e62d5208e] <==
	I0916 11:34:08.184533       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0916 11:34:08.187536       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0916 11:34:08.188133       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0916 11:34:08.188190       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0916 11:34:08.188226       1 aggregator.go:171] initial CRD sync complete...
	I0916 11:34:08.188231       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 11:34:08.188235       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 11:34:08.188240       1 cache.go:39] Caches are synced for autoregister controller
	I0916 11:34:08.208013       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0916 11:34:08.379850       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 11:34:08.989210       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0916 11:34:08.995584       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0916 11:34:08.995632       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0916 11:34:09.685791       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 11:34:09.739218       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 11:34:09.900046       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0916 11:34:09.911026       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.72.174]
	I0916 11:34:09.914577       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 11:34:09.923459       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 11:34:10.062747       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 11:34:14.965690       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 11:34:15.003452       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0916 11:34:15.022919       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 11:34:15.472951       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0916 11:34:15.621333       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [6ebc6e43d89d1ca2a5eef2587968282ad7c06ad385aa07017556fe2c68a64a5b] <==
	I0916 11:34:14.818154       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0916 11:34:14.832514       1 shared_informer.go:320] Caches are synced for HPA
	I0916 11:34:14.836650       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="kubernetes-upgrade-045794" podCIDRs=["10.244.0.0/24"]
	I0916 11:34:14.836763       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-045794"
	I0916 11:34:14.836801       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-045794"
	I0916 11:34:14.856431       1 shared_informer.go:320] Caches are synced for PV protection
	I0916 11:34:14.856494       1 shared_informer.go:320] Caches are synced for persistent volume
	I0916 11:34:14.943590       1 shared_informer.go:320] Caches are synced for disruption
	I0916 11:34:14.969312       1 shared_informer.go:320] Caches are synced for taint
	I0916 11:34:14.969482       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0916 11:34:14.969624       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="kubernetes-upgrade-045794"
	I0916 11:34:14.969710       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0916 11:34:14.991260       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-045794"
	I0916 11:34:15.007379       1 shared_informer.go:320] Caches are synced for attach detach
	I0916 11:34:15.013675       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 11:34:15.015425       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 11:34:15.218393       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-045794"
	I0916 11:34:15.439529       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 11:34:15.455541       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 11:34:15.455610       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0916 11:34:15.789301       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="298.768465ms"
	I0916 11:34:15.832781       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="43.263874ms"
	I0916 11:34:15.837478       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="104.339µs"
	I0916 11:34:15.840246       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="195.043µs"
	I0916 11:34:15.884221       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="57.384µs"
	
	
	==> kube-scheduler [34628545a259a45227f5368b6d1eea021e0f3c04f2f63c7593b30cfbbdc191a0] <==
	W0916 11:34:08.149798       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 11:34:08.149829       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:34:08.149887       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 11:34:08.149917       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:34:08.149977       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 11:34:08.150008       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 11:34:08.154242       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 11:34:08.154292       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:34:08.982139       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 11:34:08.982204       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:34:08.993510       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 11:34:08.993584       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:34:09.008039       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 11:34:09.008156       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:34:09.221713       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 11:34:09.221763       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:34:09.263917       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 11:34:09.263990       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:34:09.308334       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 11:34:09.308393       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 11:34:09.383752       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 11:34:09.383788       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0916 11:34:09.393019       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 11:34:09.393287       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0916 11:34:11.841013       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 11:34:14 kubernetes-upgrade-045794 kubelet[842]: E0916 11:34:14.695850     842 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726486454695211889,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:209293,},InodesUsed:&UInt64Value{Value:105,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:34:14 kubernetes-upgrade-045794 kubelet[842]: E0916 11:34:14.699264     842 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726486454695211889,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:209293,},InodesUsed:&UInt64Value{Value:105,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 11:34:15 kubernetes-upgrade-045794 kubelet[842]: I0916 11:34:15.677905     842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-kubernetes-upgrade-045794" podStartSLOduration=6.677879677 podStartE2EDuration="6.677879677s" podCreationTimestamp="2024-09-16 11:34:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:34:14.542637007 +0000 UTC m=+10.247900254" watchObservedRunningTime="2024-09-16 11:34:15.677879677 +0000 UTC m=+11.383142895"
	Sep 16 11:34:15 kubernetes-upgrade-045794 kubelet[842]: W0916 11:34:15.681427     842 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:kubernetes-upgrade-045794" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'kubernetes-upgrade-045794' and this object
	Sep 16 11:34:15 kubernetes-upgrade-045794 kubelet[842]: E0916 11:34:15.681506     842 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:kubernetes-upgrade-045794\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'kubernetes-upgrade-045794' and this object" logger="UnhandledError"
	Sep 16 11:34:15 kubernetes-upgrade-045794 kubelet[842]: I0916 11:34:15.724530     842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f8969623-918d-49c5-8620-61f58d9d4e05-kube-proxy\") pod \"kube-proxy-9h744\" (UID: \"f8969623-918d-49c5-8620-61f58d9d4e05\") " pod="kube-system/kube-proxy-9h744"
	Sep 16 11:34:15 kubernetes-upgrade-045794 kubelet[842]: I0916 11:34:15.724565     842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f8969623-918d-49c5-8620-61f58d9d4e05-xtables-lock\") pod \"kube-proxy-9h744\" (UID: \"f8969623-918d-49c5-8620-61f58d9d4e05\") " pod="kube-system/kube-proxy-9h744"
	Sep 16 11:34:15 kubernetes-upgrade-045794 kubelet[842]: I0916 11:34:15.724583     842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f8969623-918d-49c5-8620-61f58d9d4e05-lib-modules\") pod \"kube-proxy-9h744\" (UID: \"f8969623-918d-49c5-8620-61f58d9d4e05\") " pod="kube-system/kube-proxy-9h744"
	Sep 16 11:34:15 kubernetes-upgrade-045794 kubelet[842]: I0916 11:34:15.724601     842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g748l\" (UniqueName: \"kubernetes.io/projected/f8969623-918d-49c5-8620-61f58d9d4e05-kube-api-access-g748l\") pod \"kube-proxy-9h744\" (UID: \"f8969623-918d-49c5-8620-61f58d9d4e05\") " pod="kube-system/kube-proxy-9h744"
	Sep 16 11:34:15 kubernetes-upgrade-045794 kubelet[842]: I0916 11:34:15.825443     842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/610c9766-4dbb-47ae-a4a6-b5f720fbf47b-config-volume\") pod \"coredns-7c65d6cfc9-5x5pf\" (UID: \"610c9766-4dbb-47ae-a4a6-b5f720fbf47b\") " pod="kube-system/coredns-7c65d6cfc9-5x5pf"
	Sep 16 11:34:15 kubernetes-upgrade-045794 kubelet[842]: I0916 11:34:15.825499     842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mj5h8\" (UniqueName: \"kubernetes.io/projected/610c9766-4dbb-47ae-a4a6-b5f720fbf47b-kube-api-access-mj5h8\") pod \"coredns-7c65d6cfc9-5x5pf\" (UID: \"610c9766-4dbb-47ae-a4a6-b5f720fbf47b\") " pod="kube-system/coredns-7c65d6cfc9-5x5pf"
	Sep 16 11:34:15 kubernetes-upgrade-045794 kubelet[842]: I0916 11:34:15.825545     842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c8c0d31c-aa7a-49f6-9d36-9c629a7fa36f-config-volume\") pod \"coredns-7c65d6cfc9-qscdh\" (UID: \"c8c0d31c-aa7a-49f6-9d36-9c629a7fa36f\") " pod="kube-system/coredns-7c65d6cfc9-qscdh"
	Sep 16 11:34:15 kubernetes-upgrade-045794 kubelet[842]: I0916 11:34:15.825569     842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2ww9\" (UniqueName: \"kubernetes.io/projected/c8c0d31c-aa7a-49f6-9d36-9c629a7fa36f-kube-api-access-v2ww9\") pod \"coredns-7c65d6cfc9-qscdh\" (UID: \"c8c0d31c-aa7a-49f6-9d36-9c629a7fa36f\") " pod="kube-system/coredns-7c65d6cfc9-qscdh"
	Sep 16 11:34:16 kubernetes-upgrade-045794 kubelet[842]: I0916 11:34:16.227304     842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqbdz\" (UniqueName: \"kubernetes.io/projected/ea2a9969-7596-4641-8645-97cc73845102-kube-api-access-dqbdz\") pod \"storage-provisioner\" (UID: \"ea2a9969-7596-4641-8645-97cc73845102\") " pod="kube-system/storage-provisioner"
	Sep 16 11:34:16 kubernetes-upgrade-045794 kubelet[842]: I0916 11:34:16.227452     842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ea2a9969-7596-4641-8645-97cc73845102-tmp\") pod \"storage-provisioner\" (UID: \"ea2a9969-7596-4641-8645-97cc73845102\") " pod="kube-system/storage-provisioner"
	Sep 16 11:34:16 kubernetes-upgrade-045794 kubelet[842]: E0916 11:34:16.857205     842 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Sep 16 11:34:16 kubernetes-upgrade-045794 kubelet[842]: E0916 11:34:16.857257     842 projected.go:194] Error preparing data for projected volume kube-api-access-g748l for pod kube-system/kube-proxy-9h744: failed to sync configmap cache: timed out waiting for the condition
	Sep 16 11:34:16 kubernetes-upgrade-045794 kubelet[842]: E0916 11:34:16.857369     842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f8969623-918d-49c5-8620-61f58d9d4e05-kube-api-access-g748l podName:f8969623-918d-49c5-8620-61f58d9d4e05 nodeName:}" failed. No retries permitted until 2024-09-16 11:34:17.35733956 +0000 UTC m=+13.062602791 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-g748l" (UniqueName: "kubernetes.io/projected/f8969623-918d-49c5-8620-61f58d9d4e05-kube-api-access-g748l") pod "kube-proxy-9h744" (UID: "f8969623-918d-49c5-8620-61f58d9d4e05") : failed to sync configmap cache: timed out waiting for the condition
	Sep 16 11:34:16 kubernetes-upgrade-045794 kubelet[842]: E0916 11:34:16.939380     842 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Sep 16 11:34:16 kubernetes-upgrade-045794 kubelet[842]: E0916 11:34:16.939421     842 projected.go:194] Error preparing data for projected volume kube-api-access-mj5h8 for pod kube-system/coredns-7c65d6cfc9-5x5pf: failed to sync configmap cache: timed out waiting for the condition
	Sep 16 11:34:16 kubernetes-upgrade-045794 kubelet[842]: E0916 11:34:16.939480     842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/610c9766-4dbb-47ae-a4a6-b5f720fbf47b-kube-api-access-mj5h8 podName:610c9766-4dbb-47ae-a4a6-b5f720fbf47b nodeName:}" failed. No retries permitted until 2024-09-16 11:34:17.439463345 +0000 UTC m=+13.144726563 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-mj5h8" (UniqueName: "kubernetes.io/projected/610c9766-4dbb-47ae-a4a6-b5f720fbf47b-kube-api-access-mj5h8") pod "coredns-7c65d6cfc9-5x5pf" (UID: "610c9766-4dbb-47ae-a4a6-b5f720fbf47b") : failed to sync configmap cache: timed out waiting for the condition
	Sep 16 11:34:16 kubernetes-upgrade-045794 kubelet[842]: E0916 11:34:16.949330     842 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Sep 16 11:34:16 kubernetes-upgrade-045794 kubelet[842]: E0916 11:34:16.949375     842 projected.go:194] Error preparing data for projected volume kube-api-access-v2ww9 for pod kube-system/coredns-7c65d6cfc9-qscdh: failed to sync configmap cache: timed out waiting for the condition
	Sep 16 11:34:16 kubernetes-upgrade-045794 kubelet[842]: E0916 11:34:16.949430     842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c8c0d31c-aa7a-49f6-9d36-9c629a7fa36f-kube-api-access-v2ww9 podName:c8c0d31c-aa7a-49f6-9d36-9c629a7fa36f nodeName:}" failed. No retries permitted until 2024-09-16 11:34:17.449414714 +0000 UTC m=+13.154677932 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-v2ww9" (UniqueName: "kubernetes.io/projected/c8c0d31c-aa7a-49f6-9d36-9c629a7fa36f-kube-api-access-v2ww9") pod "coredns-7c65d6cfc9-qscdh" (UID: "c8c0d31c-aa7a-49f6-9d36-9c629a7fa36f") : failed to sync configmap cache: timed out waiting for the condition
	Sep 16 11:34:17 kubernetes-upgrade-045794 kubelet[842]: I0916 11:34:17.134418     842 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-045794 -n kubernetes-upgrade-045794
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-045794 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-045794 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (561.909µs)
helpers_test.go:263: kubectl --context kubernetes-upgrade-045794 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:175: Cleaning up "kubernetes-upgrade-045794" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-045794
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-045794: (1.110611339s)
--- FAIL: TestKubernetesUpgrade (393.19s)
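Note: the recurring "fork/exec /usr/local/bin/kubectl: exec format error" failures above indicate the kubectl binary on the test host is not executable on the host architecture (for example a binary built for a different GOARCH, or a truncated download). A minimal, hypothetical Go sketch, independent of the test suite, prints the binary's ELF machine type next to the host architecture so the mismatch can be confirmed; the path is taken from the failure output above.

	// checkarch.go - hypothetical diagnostic, not part of minikube's tests.
	// Prints the ELF machine type of a binary alongside the host architecture;
	// a mismatch is the usual cause of "fork/exec ...: exec format error".
	package main

	import (
		"debug/elf"
		"fmt"
		"os"
		"runtime"
	)

	func main() {
		path := "/usr/local/bin/kubectl" // path taken from the failure output above
		f, err := elf.Open(path)
		if err != nil {
			fmt.Fprintf(os.Stderr, "cannot parse %s as ELF: %v\n", path, err)
			os.Exit(1)
		}
		defer f.Close()
		fmt.Printf("%s is built for %s; this host is %s\n", path, f.Machine, runtime.GOARCH)
	}

Running it (or simply running "file /usr/local/bin/kubectl") on the runner should show whether the binary matches the host's GOARCH.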

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (7200.059s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.45:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.45:8443: connect: connection refused
panic: test timed out after 2h0m0s
	running tests:
		TestNetworkPlugins (53m47s)
		TestNetworkPlugins/group/bridge (15m21s)
		TestNetworkPlugins/group/bridge/NetCatPod (13m59s)
		TestNetworkPlugins/group/enable-default-cni (16m34s)
		TestNetworkPlugins/group/enable-default-cni/NetCatPod (15m8s)
		TestNetworkPlugins/group/flannel (16m1s)
		TestNetworkPlugins/group/flannel/NetCatPod (14m42s)
		TestStartStop (53m39s)
		TestStartStop/group/old-k8s-version (15m8s)
		TestStartStop/group/old-k8s-version/serial (15m8s)
		TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (1s)

                                                
                                                
goroutine 6965 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2373 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:215 +0x2d

                                                
                                                
goroutine 1 [chan receive, 48 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1651 +0x49b
testing.tRunner(0xc000493860, 0xc000ae1bc8)
	/usr/local/go/src/testing/testing.go:1696 +0x12d
testing.runTests(0xc0005544f8, {0x4cf86a0, 0x2b, 0x2b}, {0xffffffffffffffff?, 0x411b30?, 0x4db6de0?})
	/usr/local/go/src/testing/testing.go:2166 +0x43d
testing.(*M).Run(0xc0006ef0e0)
	/usr/local/go/src/testing/testing.go:2034 +0x64a
k8s.io/minikube/test/integration.TestMain(0xc0006ef0e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0xa8

                                                
                                                
goroutine 10 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc0001c7900)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

                                                
                                                
goroutine 194 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc0004c6850, 0x2d)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc001414d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x37aaac0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0004c6880)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0006acb00, {0x3767e60, 0xc000afd1a0}, 0x1, 0xc00082c0e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0006acb00, 0x3b9aca00, 0x0, 0x1, 0xc00082c0e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 183
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 2600 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc001882050, 0x19)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc001a83d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x37aaac0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001882080)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0019f4800, {0x3767e60, 0xc0015267b0}, 0x1, 0xc00082c0e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0019f4800, 0x3b9aca00, 0x0, 0x1, 0xc00082c0e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2577
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 6694 [sync.Cond.Wait, 6 minutes]:
sync.runtime_notifyListWait(0xc0019a1a50, 0x2)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc000ae3d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x37aaac0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0019a1a80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00056dc60, {0x3767e60, 0xc0015263c0}, 0x1, 0xc00082c0e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00056dc60, 0x3b9aca00, 0x0, 0x1, 0xc00082c0e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 6679
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 99 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1141 +0xff
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 98
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1137 +0x167

                                                
                                                
goroutine 196 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 195
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 6722 [select, 15 minutes]:
k8s.io/client-go/tools/watch.UntilWithoutRetry({0x378f170, 0xc00062bb90}, {0x3776d00, 0xc0018f60c0}, {0xc000addda8, 0x1, 0xc001da45a0?})
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/tools/watch/until.go:73 +0x2df
k8s.io/minikube/pkg/kapi.WaitForDeploymentToStabilize({0x37c4048, 0xc0014d5dc0}, {0x2929a58, 0x7}, {0x29270e8, 0x6}, 0xd18c2e2800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/pkg/kapi/kapi.go:125 +0x589
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.4(0xc000826b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:159 +0x30a
testing.tRunner(0xc000826b60, 0xc001527680)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2059
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 195 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x378f380, 0xc00082c0e0}, 0xc00139ff50, 0xc0000b1f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x378f380, 0xc00082c0e0}, 0x0?, 0xc00139ff50, 0xc00139ff98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x378f380?, 0xc00082c0e0?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 183
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 585 [IO wait, 108 minutes]:
internal/poll.runtime_pollWait(0x7f83cd2ce6d8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc001924000?, 0x2c?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc001924000)
	/usr/local/go/src/internal/poll/fd_unix.go:620 +0x295
net.(*netFD).accept(0xc001924000)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc001a16500)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc001a16500)
	/usr/local/go/src/net/tcpsock.go:372 +0x30
net/http.(*Server).Serve(0xc001242b40, {0x3781d70, 0xc001a16500})
	/usr/local/go/src/net/http/server.go:3330 +0x30c
net/http.(*Server).ListenAndServe(0xc001242b40)
	/usr/local/go/src/net/http/server.go:3259 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xd?, 0xc00126a000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2213 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 582
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2212 +0x129

                                                
                                                
goroutine 2601 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x378f380, 0xc00082c0e0}, 0xc001480f50, 0xc0000adf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x378f380, 0xc00082c0e0}, 0xa0?, 0xc001480f50, 0xc001480f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x378f380?, 0xc00082c0e0?}, 0xc0019f6340?, 0x55bf60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x5a1a45?, 0xc000209680?, 0xc0012630a0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2577
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 2560 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x378f380, 0xc00082c0e0}, 0xc000098750, 0xc002041f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x378f380, 0xc00082c0e0}, 0xc0?, 0xc000098750, 0xc000098798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x378f380?, 0xc00082c0e0?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0000987d0?, 0xa194e5?, 0xc0019a0f40?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2585
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 6959 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 6958
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 836 [select, 6 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x378f380, 0xc00082c0e0}, 0xc001483750, 0xc0014ecf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x378f380, 0xc00082c0e0}, 0x30?, 0xc001483750, 0xc001483798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x378f380?, 0xc00082c0e0?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0014837d0?, 0x5a1aa4?, 0xc00082c930?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 820
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 2321 [chan receive, 53 minutes]:
testing.(*testContext).waitParallel(0xc00069c870)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc00126b380)
	/usr/local/go/src/testing/testing.go:1485 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00126b380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00126b380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc00126b380, 0xc000218c40)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2318
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 6720 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 6719
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2057 [chan receive, 15 minutes]:
testing.(*T).Run(0xc001252680, {0x292ea7a?, 0x375e220?}, 0xc00199db00)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc001252680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:148 +0x86b
testing.tRunner(0xc001252680, 0xc00078e780)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1998
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 1929 [chan receive, 53 minutes]:
testing.(*T).Run(0xc000826ea0, {0x2925a4b?, 0x55b653?}, 0x3410c78)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc000826ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc000826ea0, 0x3410a80)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 2319 [chan receive, 15 minutes]:
testing.(*T).Run(0xc00126b040, {0x2927010?, 0x0?}, 0xc00078e000)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00126b040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc00126b040, 0xc000218740)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2318
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 6661 [chan receive]:
testing.(*T).Run(0xc00126a340, {0x2951f4f?, 0xc000097570?}, 0xc0004e2000)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc00126a340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc00126a340, 0xc00078e000)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2319
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 820 [chan receive, 105 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000219e00, 0xc00082c0e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 717
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 2513 [chan receive, 46 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001a16740, 0xc00082c0e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2511
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 182 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x37859e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 122
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 183 [chan receive, 116 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0004c6880, 0xc00082c0e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 122
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 2338 [chan receive, 53 minutes]:
testing.(*testContext).waitParallel(0xc00069c870)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc00126b520)
	/usr/local/go/src/testing/testing.go:1485 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00126b520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00126b520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc00126b520, 0xc000218c80)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2318
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 819 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x37859e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 717
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 2577 [chan receive, 45 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001882080, 0xc00082c0e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2575
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 6606 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x378f170, 0xc00044c1c0}, {0x3782400, 0xc000415d00}, 0x1, 0x0, 0xc00006fbe0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/loop.go:66 +0x1d0
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x378f170?, 0xc000453b90?}, 0x3b9aca00, 0xc00006fdd8?, 0x1, 0xc00006fbe0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x378f170, 0xc000453b90}, 0xc001252000, {0xc00189f3c0, 0x19}, {0x2929a58, 0x7}, {0x2930d01, 0xa}, 0xd18c2e2800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.4(0xc001252000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:163 +0x3c5
testing.tRunner(0xc001252000, 0xc0016161e0)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2058
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 2539 [chan receive, 46 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0018f6580, 0xc00082c0e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2537
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 837 [select, 6 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 836
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2058 [chan receive, 15 minutes]:
testing.(*T).Run(0xc001252820, {0x292ea7a?, 0x375e220?}, 0xc0016161e0)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc001252820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:148 +0x86b
testing.tRunner(0xc001252820, 0xc00078e800)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1998
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 763 [chan send, 105 minutes]:
os/exec.(*Cmd).watchCtx(0xc00123cc00, 0xc000065340)
	/usr/local/go/src/os/exec/exec.go:798 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 762
	/usr/local/go/src/os/exec/exec.go:759 +0x953

                                                
                                                
goroutine 6696 [select, 6 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 6695
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2584 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x37859e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2583
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 6724 [sync.Cond.Wait, 15 minutes]:
sync.runtime_notifyListWait(0xc0018150c8, 0x0)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc00127db70?)
	/usr/local/go/src/sync/cond.go:71 +0x85
golang.org/x/net/http2.(*pipe).Read(0xc0018150b0, {0xc0014de800, 0x200, 0x200})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/pipe.go:76 +0xd6
golang.org/x/net/http2.transportResponseBody.Read({0x471fbd?}, {0xc0014de800?, 0xc0014854e0?, 0x533113?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:2637 +0x65
encoding/json.(*Decoder).refill(0xc000664b40)
	/usr/local/go/src/encoding/json/stream.go:165 +0x188
encoding/json.(*Decoder).readValue(0xc000664b40)
	/usr/local/go/src/encoding/json/stream.go:140 +0x85
encoding/json.(*Decoder).Decode(0xc000664b40, {0x2519b00, 0xc001d1f728})
	/usr/local/go/src/encoding/json/stream.go:63 +0x75
k8s.io/apimachinery/pkg/util/framer.(*jsonFrameReader).Read(0xc001df2840, {0xc001610800, 0x400, 0x400})
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/framer/framer.go:151 +0x15c
k8s.io/apimachinery/pkg/runtime/serializer/streaming.(*decoder).Decode(0xc00023b310, 0x0, {0x37755f8, 0xc0018f6100})
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/runtime/serializer/streaming/streaming.go:77 +0xa3
k8s.io/client-go/rest/watch.(*Decoder).Decode(0xc001a13880)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/rest/watch/decoder.go:49 +0x4b
k8s.io/apimachinery/pkg/watch.(*StreamWatcher).receive(0xc0018f60c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/watch/streamwatcher.go:105 +0xc7
created by k8s.io/apimachinery/pkg/watch.NewStreamWatcher in goroutine 6722
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/watch/streamwatcher.go:76 +0x105

                                                
                                                
goroutine 1998 [chan receive, 16 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1651 +0x49b
testing.tRunner(0xc000493d40, 0xc0019eb008)
	/usr/local/go/src/testing/testing.go:1696 +0x12d
created by testing.(*T).Run in goroutine 1872
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 2318 [chan receive, 53 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1651 +0x49b
testing.tRunner(0xc00126ad00, 0x3410c78)
	/usr/local/go/src/testing/testing.go:1696 +0x12d
created by testing.(*T).Run in goroutine 1929
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 1872 [chan receive, 54 minutes]:
testing.(*T).Run(0xc000826680, {0x2925a4b?, 0x55b79c?}, 0xc0019eb008)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc000826680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd3
testing.tRunner(0xc000826680, 0x3410a38)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 2059 [chan receive, 15 minutes]:
testing.(*T).Run(0xc0012529c0, {0x292ea7a?, 0x375e220?}, 0xc001527680)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0012529c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:148 +0x86b
testing.tRunner(0xc0012529c0, 0xc00078e880)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1998
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 2538 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x37859e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2537
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 835 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc000219950, 0x2a)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc000aefd80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x37aaac0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000219e00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001ca8ff0, {0x3767e60, 0xc0004e5650}, 0x1, 0xc00082c0e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001ca8ff0, 0x3b9aca00, 0x0, 0x1, 0xc00082c0e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 820
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 2559 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc001883250, 0x19)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc0012e5d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x37aaac0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001883280)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000503a30, {0x3767e60, 0xc001d61cb0}, 0x1, 0xc00082c0e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000503a30, 0x3b9aca00, 0x0, 0x1, 0xc00082c0e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2585
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 2585 [chan receive, 46 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001883280, 0xc00082c0e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2583
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 2339 [chan receive, 53 minutes]:
testing.(*testContext).waitParallel(0xc00069c870)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc00126b860)
	/usr/local/go/src/testing/testing.go:1485 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00126b860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00126b860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc00126b860, 0xc000218cc0)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2318
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 975 [chan send, 104 minutes]:
os/exec.(*Cmd).watchCtx(0xc0017a0a80, 0xc000065500)
	/usr/local/go/src/os/exec/exec.go:798 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 708
	/usr/local/go/src/os/exec/exec.go:759 +0x953

                                                
                                                
goroutine 6903 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x378f170, 0xc0004535e0}, {0x3782400, 0xc0006b0fa0}, 0x1, 0x0, 0xc0012d5c18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/loop.go:66 +0x1d0
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x378f170?, 0xc0006b4230?}, 0x3b9aca00, 0xc0012d5e10?, 0x1, 0xc0012d5c18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x378f170, 0xc0006b4230}, 0xc00126a1a0, {0xc001f82108, 0x16}, {0x294c130, 0x14}, {0x29640b7, 0x1c}, 0x7dba821800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAppExistsAfterStop({0x378f170, 0xc0006b4230}, 0xc00126a1a0, {0xc001f82108, 0x16}, {0x293d16c?, 0xc0013a1760?}, {0x55b653?, 0x4b1aaf?}, {0xc00055e300, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:274 +0x139
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc00126a1a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc00126a1a0, 0xc0004e2000)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 6661
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 2320 [chan receive, 53 minutes]:
testing.(*testContext).waitParallel(0xc00069c870)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc00126b1e0)
	/usr/local/go/src/testing/testing.go:1485 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00126b1e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00126b1e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc00126b1e0, 0xc000218c00)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2318
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 6687 [select, 15 minutes]:
k8s.io/client-go/tools/watch.UntilWithoutRetry({0x378f170, 0xc000193ce0}, {0x3776d00, 0xc001882980}, {0xc00156dda8, 0x1, 0xc00187f890?})
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/tools/watch/until.go:73 +0x2df
k8s.io/minikube/pkg/kapi.WaitForDeploymentToStabilize({0x37c4048, 0xc001ab7dc0}, {0x2929a58, 0x7}, {0x29270e8, 0x6}, 0xd18c2e2800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/pkg/kapi/kapi.go:125 +0x589
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.4(0xc001252ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:159 +0x30a
testing.tRunner(0xc001252ea0, 0xc00199db00)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2057
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 2602 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2601
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 6705 [select, 15 minutes]:
golang.org/x/net/http2.(*clientStream).writeRequest(0xc00055ef00, 0xc0000dfcc0, 0x0)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:1532 +0xa65
golang.org/x/net/http2.(*clientStream).doRequest(0xc00055ef00, 0x13?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:1410 +0x56
created by golang.org/x/net/http2.(*ClientConn).roundTrip in goroutine 6687
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:1315 +0x3d8

                                                
                                                
goroutine 883 [chan send, 104 minutes]:
os/exec.(*Cmd).watchCtx(0xc0017a1800, 0xc0017f2460)
	/usr/local/go/src/os/exec/exec.go:798 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 882
	/usr/local/go/src/os/exec/exec.go:759 +0x953

                                                
                                                
goroutine 2340 [chan receive, 53 minutes]:
testing.(*testContext).waitParallel(0xc00069c870)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc00126ba00)
	/usr/local/go/src/testing/testing.go:1485 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00126ba00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00126ba00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc00126ba00, 0xc000218d40)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2318
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 2506 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x378f380, 0xc00082c0e0}, 0xc001729f50, 0xc001729f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x378f380, 0xc00082c0e0}, 0x80?, 0xc001729f50, 0xc001729f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x378f380?, 0xc00082c0e0?}, 0xc00126b1e0?, 0x55bf60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x5a1a45?, 0xc00055ed80?, 0xc001497f80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2539
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 6678 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x37859e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 6677
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 2512 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x37859e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2511
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 2505 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc0018f6550, 0x19)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc001411d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x37aaac0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0018f6580)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0019f4210, {0x3767e60, 0xc001272120}, 0x1, 0xc00082c0e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0019f4210, 0x3b9aca00, 0x0, 0x1, 0xc00082c0e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2539
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 2507 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2506
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2561 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2560
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2548 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc001a16710, 0x19)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc000aead80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x37aaac0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001a16740)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0012c3840, {0x3767e60, 0xc001617890}, 0x1, 0xc00082c0e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0012c3840, 0x3b9aca00, 0x0, 0x1, 0xc00082c0e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2513
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 2549 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x378f380, 0xc00082c0e0}, 0xc0013a3f50, 0xc002042f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x378f380, 0xc00082c0e0}, 0x60?, 0xc0013a3f50, 0xc0013a3f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x378f380?, 0xc00082c0e0?}, 0xa08ff6?, 0xc001a7f500?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0013a3fd0?, 0x5a1aa4?, 0xc00082c460?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2513
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 2550 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2549
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2576 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x37859e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2575
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 6708 [IO wait]:
internal/poll.runtime_pollWait(0x7f83cd2ce0a8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc001925a00?, 0xc001493000?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001925a00, {0xc001493000, 0x3000, 0x3000})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
net.(*netFD).Read(0xc001925a00, {0xc001493000?, 0x10?, 0xc0000b08a0?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc0006200c8, {0xc001493000?, 0xc001493005?, 0x22?})
	/usr/local/go/src/net/net.go:189 +0x45
crypto/tls.(*atLeastReader).Read(0xc00170a090, {0xc001493000?, 0x0?, 0xc00170a090?})
	/usr/local/go/src/crypto/tls/conn.go:809 +0x3b
bytes.(*Buffer).ReadFrom(0xc00020deb8, {0x3768660, 0xc00170a090})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc00020dc08, {0x7f83bc5a5650, 0xc00199e030}, 0xc0000b0a10?)
	/usr/local/go/src/crypto/tls/conn.go:831 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc00020dc08, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:629 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:591
crypto/tls.(*Conn).Read(0xc00020dc08, {0xc00141e000, 0x1000, 0xa?})
	/usr/local/go/src/crypto/tls/conn.go:1385 +0x150
bufio.(*Reader).Read(0xc00191b140, {0xc0018882e0, 0x9, 0xc001507340?})
	/usr/local/go/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x3766900, 0xc00191b140}, {0xc0018882e0, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x90
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc0018882e0, 0x9, 0xa126fe?}, {0x3766900?, 0xc00191b140?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc0018882a0)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/frame.go:501 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc0000b0fa8)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:2354 +0xda
golang.org/x/net/http2.(*ClientConn).readLoop(0xc00123c000)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:2250 +0x7c
created by golang.org/x/net/http2.(*Transport).newClientConn in goroutine 6707
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:865 +0xcfb

goroutine 6718 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc0004c6990, 0x2)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc001573d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x37aaac0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0004c69c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0007372d0, {0x3767e60, 0xc000844d20}, 0x1, 0xc00082c0e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0007372d0, 0x3b9aca00, 0x0, 0x1, 0xc00082c0e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 6689
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

goroutine 6689 [chan receive, 15 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0004c69c0, 0xc00082c0e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 6687
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

goroutine 6905 [chan receive]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0018f68c0, 0xc00082c0e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 6903
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

goroutine 6754 [sync.Cond.Wait, 15 minutes]:
sync.runtime_notifyListWait(0xc00055ef48, 0x0)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc0012e3b70?)
	/usr/local/go/src/sync/cond.go:71 +0x85
golang.org/x/net/http2.(*pipe).Read(0xc00055ef30, {0xc000929800, 0x200, 0x200})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/pipe.go:76 +0xd6
golang.org/x/net/http2.transportResponseBody.Read({0x471fbd?}, {0xc000929800?, 0xc001486c68?, 0x54aa24?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:2637 +0x65
encoding/json.(*Decoder).refill(0xc001cbba40)
	/usr/local/go/src/encoding/json/stream.go:165 +0x188
encoding/json.(*Decoder).readValue(0xc001cbba40)
	/usr/local/go/src/encoding/json/stream.go:140 +0x85
encoding/json.(*Decoder).Decode(0xc001cbba40, {0x2519b00, 0xc00170a8e8})
	/usr/local/go/src/encoding/json/stream.go:63 +0x75
k8s.io/apimachinery/pkg/util/framer.(*jsonFrameReader).Read(0xc000845590, {0xc001396800, 0x400, 0x400})
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/framer/framer.go:151 +0x15c
k8s.io/apimachinery/pkg/runtime/serializer/streaming.(*decoder).Decode(0xc00043b3b0, 0x0, {0x37755f8, 0xc0018829c0})
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/runtime/serializer/streaming/streaming.go:77 +0xa3
k8s.io/client-go/rest/watch.(*Decoder).Decode(0xc00078b300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/rest/watch/decoder.go:49 +0x4b
k8s.io/apimachinery/pkg/watch.(*StreamWatcher).receive(0xc001882980)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/watch/streamwatcher.go:105 +0xc7
created by k8s.io/apimachinery/pkg/watch.NewStreamWatcher in goroutine 6687
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/watch/streamwatcher.go:76 +0x105

goroutine 6639 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x378f380, 0xc00082c0e0}, 0xc000aedf50, 0xc000aedf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x378f380, 0xc00082c0e0}, 0x0?, 0xc000aedf50, 0xc000aedf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x378f380?, 0xc00082c0e0?}, 0xa08ff6?, 0xc000003500?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000095fd0?, 0x5a1aa4?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 6608
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

goroutine 6679 [chan receive, 15 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0019a1a80, 0xc00082c0e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 6677
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

goroutine 6638 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc0019a0810, 0x3)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc001573d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x37aaac0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0019a0840)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0012a2080, {0x3767e60, 0xc001526390}, 0x1, 0xc00082c0e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0012a2080, 0x3b9aca00, 0x0, 0x1, 0xc00082c0e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 6608
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

goroutine 6695 [select, 6 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x378f380, 0xc00082c0e0}, 0xc001451750, 0xc001451798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x378f380, 0xc00082c0e0}, 0x3a?, 0xc001451750, 0xc001451798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x378f380?, 0xc00082c0e0?}, 0x6b2e6f69222c2265?, 0x6574656e72656275?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x5c3a225c68746170?, 0x6f632f6374652f22?, 0x2c225c736e646572?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 6679
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

goroutine 6607 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x37859e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 6606
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

goroutine 6608 [chan receive, 15 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0019a0840, 0xc00082c0e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 6606
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

goroutine 6640 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 6639
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

goroutine 6669 [IO wait]:
internal/poll.runtime_pollWait(0x7f83bc4d7be0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc001924e00?, 0xc001386000?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001924e00, {0xc001386000, 0x800, 0x800})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
net.(*netFD).Read(0xc001924e00, {0xc001386000?, 0x10?, 0xc001a828a0?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc0012c0a50, {0xc001386000?, 0xc00138605f?, 0x70?})
	/usr/local/go/src/net/net.go:189 +0x45
crypto/tls.(*atLeastReader).Read(0xc00199e120, {0xc001386000?, 0x0?, 0xc00199e120?})
	/usr/local/go/src/crypto/tls/conn.go:809 +0x3b
bytes.(*Buffer).ReadFrom(0xc0006be638, {0x3768660, 0xc00199e120})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc0006be388, {0x7f83bc5a5650, 0xc00170a258}, 0xc001a82a10?)
	/usr/local/go/src/crypto/tls/conn.go:831 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc0006be388, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:629 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:591
crypto/tls.(*Conn).Read(0xc0006be388, {0xc00138e000, 0x1000, 0xc001934fc0?})
	/usr/local/go/src/crypto/tls/conn.go:1385 +0x150
bufio.(*Reader).Read(0xc00137d920, {0xc0006fc3c0, 0x9, 0x4cb2c70?})
	/usr/local/go/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x3766900, 0xc00137d920}, {0xc0006fc3c0, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x90
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc0006fc3c0, 0x9, 0x47b965?}, {0x3766900?, 0xc00137d920?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc0006fc380)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/frame.go:501 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc001a82fa8)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:2354 +0xda
golang.org/x/net/http2.(*ClientConn).readLoop(0xc00055e900)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:2250 +0x7c
created by golang.org/x/net/http2.(*Transport).newClientConn in goroutine 6668
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:865 +0xcfb

goroutine 6723 [select, 15 minutes]:
golang.org/x/net/http2.(*clientStream).writeRequest(0xc001815080, 0xc000664640, 0x0)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:1532 +0xa65
golang.org/x/net/http2.(*clientStream).doRequest(0xc001815080, 0xc000826b60?, 0xc001527680?)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:1410 +0x56
created by golang.org/x/net/http2.(*ClientConn).roundTrip in goroutine 6722
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:1315 +0x3d8

goroutine 6704 [IO wait]:
internal/poll.runtime_pollWait(0x7f83cd2cdb80, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc000820200?, 0xc001386800?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc000820200, {0xc001386800, 0x800, 0x800})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
net.(*netFD).Read(0xc000820200, {0xc001386800?, 0x26012c0?, 0xc002043888?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc0012c0ab8, {0xc001386800?, 0xc0020438a0?, 0x41d416?})
	/usr/local/go/src/net/net.go:189 +0x45
crypto/tls.(*atLeastReader).Read(0xc00199e000, {0xc001386800?, 0x0?, 0xc00199e000?})
	/usr/local/go/src/crypto/tls/conn.go:809 +0x3b
bytes.(*Buffer).ReadFrom(0xc0006be9b8, {0x3768660, 0xc00199e000})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc0006be708, {0x7f83bc5a5650, 0xc00170a618}, 0xc002043a10?)
	/usr/local/go/src/crypto/tls/conn.go:831 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc0006be708, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:629 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:591
crypto/tls.(*Conn).Read(0xc0006be708, {0xc0013d8000, 0x1000, 0x13?})
	/usr/local/go/src/crypto/tls/conn.go:1385 +0x150
bufio.(*Reader).Read(0xc0018d2b40, {0xc0006fc660, 0x9, 0xc001935180?})
	/usr/local/go/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x3766900, 0xc0018d2b40}, {0xc0006fc660, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x90
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc0006fc660, 0x9, 0xa126fe?}, {0x3766900?, 0xc0018d2b40?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc0006fc620)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/frame.go:501 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc002043fa8)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:2354 +0xda
golang.org/x/net/http2.(*ClientConn).readLoop(0xc00055ec00)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:2250 +0x7c
created by golang.org/x/net/http2.(*Transport).newClientConn in goroutine 6703
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:865 +0xcfb

goroutine 6688 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x37859e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 6687
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

goroutine 6719 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x378f380, 0xc00082c0e0}, 0xc001485750, 0xc001485798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x378f380, 0xc00082c0e0}, 0x0?, 0xc001485750, 0xc001485798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x378f380?, 0xc00082c0e0?}, 0xa08ff6?, 0xc000ad1b00?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0014857d0?, 0x5a1aa4?, 0xc00055e900?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 6689
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

goroutine 6958 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x378f380, 0xc00082c0e0}, 0xc001481f50, 0xc001481f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x378f380, 0xc00082c0e0}, 0x0?, 0xc001481f50, 0xc001481f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x378f380?, 0xc00082c0e0?}, 0xa08ff6?, 0xc00055e000?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00055e000?, 0x5a1aa4?, 0xc001481fa8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 6905
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

goroutine 6904 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x37859e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 6903
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

goroutine 6957 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc0018f6890, 0x0)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc001480580?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x37aaac0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0018f68c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00006b220, {0x3767e60, 0xc0018e49c0}, 0x1, 0xc00082c0e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00006b220, 0x3b9aca00, 0x0, 0x1, 0xc00082c0e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 6905
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

Test pass (153/228)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 11.51
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.1/json-events 4.14
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.13
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.59
22 TestOffline 118.22
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 203.75
35 TestAddons/parallel/InspektorGadget 10.85
40 TestAddons/parallel/Headlamp 46.67
41 TestAddons/parallel/CloudSpanner 6.56
43 TestAddons/parallel/NvidiaDevicePlugin 6.51
45 TestAddons/StoppedEnableDisable 93.68
47 TestCertExpiration 297.4
49 TestForceSystemdFlag 48.63
50 TestForceSystemdEnv 46.34
52 TestKVMDriverInstallOrUpdate 3.21
56 TestErrorSpam/setup 42.17
57 TestErrorSpam/start 0.33
58 TestErrorSpam/status 0.71
59 TestErrorSpam/pause 1.58
60 TestErrorSpam/unpause 1.81
61 TestErrorSpam/stop 4.78
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 54.5
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 53.48
72 TestFunctional/serial/CacheCmd/cache/add_remote 3.34
73 TestFunctional/serial/CacheCmd/cache/add_local 1.44
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
75 TestFunctional/serial/CacheCmd/cache/list 0.04
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.21
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.72
78 TestFunctional/serial/CacheCmd/cache/delete 0.08
79 TestFunctional/serial/MinikubeKubectlCmd 0.12
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.09
81 TestFunctional/serial/ExtraConfig 39.64
83 TestFunctional/serial/LogsCmd 1.42
84 TestFunctional/serial/LogsFileCmd 1.41
87 TestFunctional/parallel/ConfigCmd 0.3
89 TestFunctional/parallel/DryRun 0.28
90 TestFunctional/parallel/InternationalLanguage 0.15
91 TestFunctional/parallel/StatusCmd 0.91
96 TestFunctional/parallel/AddonsCmd 0.13
99 TestFunctional/parallel/SSHCmd 0.39
100 TestFunctional/parallel/CpCmd 1.35
102 TestFunctional/parallel/FileSync 0.22
103 TestFunctional/parallel/CertSync 1.3
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.41
111 TestFunctional/parallel/License 0.16
115 TestFunctional/parallel/ProfileCmd/profile_not_create 0.34
117 TestFunctional/parallel/ProfileCmd/profile_list 0.33
120 TestFunctional/parallel/ProfileCmd/profile_json_output 0.32
131 TestFunctional/parallel/MountCmd/specific-port 1.64
132 TestFunctional/parallel/Version/short 0.04
133 TestFunctional/parallel/Version/components 0.85
134 TestFunctional/parallel/ImageCommands/ImageListShort 0.4
135 TestFunctional/parallel/ImageCommands/ImageListTable 0.31
136 TestFunctional/parallel/ImageCommands/ImageListJson 0.36
137 TestFunctional/parallel/ImageCommands/ImageListYaml 0.34
138 TestFunctional/parallel/ImageCommands/ImageBuild 4.09
139 TestFunctional/parallel/ImageCommands/Setup 0.97
140 TestFunctional/parallel/MountCmd/VerifyCleanup 1.54
141 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.67
142 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
143 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
144 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
145 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1
146 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.41
147 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.84
148 TestFunctional/parallel/ImageCommands/ImageRemove 0.76
149 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.52
150 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.71
151 TestFunctional/delete_echo-server_images 0.03
152 TestFunctional/delete_my-image_image 0.02
153 TestFunctional/delete_minikube_cached_images 0.02
157 TestMultiControlPlane/serial/StartCluster 195.07
158 TestMultiControlPlane/serial/DeployApp 4.38
159 TestMultiControlPlane/serial/PingHostFromPods 1.29
160 TestMultiControlPlane/serial/AddWorkerNode 53.01
162 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.55
163 TestMultiControlPlane/serial/CopyFile 12.68
165 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.48
167 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.38
170 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.37
173 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.37
174 TestMultiControlPlane/serial/AddSecondaryNode 75.33
175 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.54
179 TestJSONOutput/start/Command 55.89
180 TestJSONOutput/start/Audit 0
182 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
185 TestJSONOutput/pause/Command 0.7
186 TestJSONOutput/pause/Audit 0
188 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/unpause/Command 0.62
192 TestJSONOutput/unpause/Audit 0
194 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/stop/Command 7.35
198 TestJSONOutput/stop/Audit 0
200 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
202 TestErrorJSONOutput 0.19
207 TestMainNoArgs 0.04
208 TestMinikubeProfile 84.43
211 TestMountStart/serial/StartWithMountFirst 29.77
212 TestMountStart/serial/VerifyMountFirst 0.36
213 TestMountStart/serial/StartWithMountSecond 28.13
214 TestMountStart/serial/VerifyMountSecond 0.37
215 TestMountStart/serial/DeleteFirst 0.69
216 TestMountStart/serial/VerifyMountPostDelete 0.37
217 TestMountStart/serial/Stop 1.27
218 TestMountStart/serial/RestartStopped 22.32
219 TestMountStart/serial/VerifyMountPostStop 0.37
222 TestMultiNode/serial/FreshStart2Nodes 110.02
223 TestMultiNode/serial/DeployApp2Nodes 4.19
224 TestMultiNode/serial/PingHostFrom2Pods 0.79
225 TestMultiNode/serial/AddNode 49.27
227 TestMultiNode/serial/ProfileList 0.22
228 TestMultiNode/serial/CopyFile 7.11
229 TestMultiNode/serial/StopNode 2.29
235 TestMultiNode/serial/ValidateNameConflict 44.59
242 TestScheduledStopUnix 111.38
246 TestRunningBinaryUpgrade 235.97
251 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
252 TestNoKubernetes/serial/StartWithK8s 98.7
271 TestNoKubernetes/serial/StartWithStopK8s 42.93
272 TestNoKubernetes/serial/Start 50.22
273 TestNoKubernetes/serial/VerifyK8sNotRunning 0.19
274 TestNoKubernetes/serial/ProfileList 28
275 TestNoKubernetes/serial/Stop 1.29
276 TestNoKubernetes/serial/StartNoArgs 21.95
277 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
278 TestStoppedBinaryUpgrade/Setup 0.4
279 TestStoppedBinaryUpgrade/Upgrade 122.81
281 TestPause/serial/Start 68.38
282 TestStoppedBinaryUpgrade/MinikubeLogs 0.82
284 TestPause/serial/SecondStartNoReconfiguration 62.12
287 TestPause/serial/Pause 1.33
288 TestPause/serial/VerifyStatus 0.27
289 TestPause/serial/Unpause 1.18
290 TestPause/serial/PauseAgain 0.79
291 TestPause/serial/DeletePaused 1.03
292 TestPause/serial/VerifyDeletedResources 4.83

TestDownloadOnly/v1.20.0/json-events (11.51s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-931581 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-931581 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (11.508660333s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (11.51s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-931581
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-931581: exit status 85 (56.282171ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-931581 | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC |          |
	|         | -p download-only-931581        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:21:25
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:21:25.849633   11215 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:21:25.849733   11215 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:21:25.849744   11215 out.go:358] Setting ErrFile to fd 2...
	I0916 10:21:25.849749   11215 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:21:25.849954   11215 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3851/.minikube/bin
	W0916 10:21:25.850092   11215 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19651-3851/.minikube/config/config.json: open /home/jenkins/minikube-integration/19651-3851/.minikube/config/config.json: no such file or directory
	I0916 10:21:25.850713   11215 out.go:352] Setting JSON to true
	I0916 10:21:25.851687   11215 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":236,"bootTime":1726481850,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:21:25.851785   11215 start.go:139] virtualization: kvm guest
	I0916 10:21:25.854236   11215 out.go:97] [download-only-931581] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0916 10:21:25.854353   11215 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball: no such file or directory
	I0916 10:21:25.854423   11215 notify.go:220] Checking for updates...
	I0916 10:21:25.855734   11215 out.go:169] MINIKUBE_LOCATION=19651
	I0916 10:21:25.857090   11215 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:21:25.858413   11215 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19651-3851/kubeconfig
	I0916 10:21:25.859740   11215 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 10:21:25.860846   11215 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0916 10:21:25.863168   11215 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0916 10:21:25.863403   11215 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:21:25.968028   11215 out.go:97] Using the kvm2 driver based on user configuration
	I0916 10:21:25.968053   11215 start.go:297] selected driver: kvm2
	I0916 10:21:25.968060   11215 start.go:901] validating driver "kvm2" against <nil>
	I0916 10:21:25.968389   11215 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:21:25.968507   11215 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19651-3851/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0916 10:21:25.983731   11215 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0916 10:21:25.983787   11215 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 10:21:25.984348   11215 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0916 10:21:25.984541   11215 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0916 10:21:25.984577   11215 cni.go:84] Creating CNI manager for ""
	I0916 10:21:25.984636   11215 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0916 10:21:25.984647   11215 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0916 10:21:25.984712   11215 start.go:340] cluster config:
	{Name:download-only-931581 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-931581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:21:25.984957   11215 iso.go:125] acquiring lock: {Name:mk8165b793e44378487d96b0de120a258f46e187 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:21:25.986983   11215 out.go:97] Downloading VM boot image ...
	I0916 10:21:25.987019   11215 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19651-3851/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso
	I0916 10:21:29.601357   11215 out.go:97] Starting "download-only-931581" primary control-plane node in "download-only-931581" cluster
	I0916 10:21:29.601382   11215 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0916 10:21:29.629041   11215 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0916 10:21:29.629070   11215 cache.go:56] Caching tarball of preloaded images
	I0916 10:21:29.629272   11215 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0916 10:21:29.631141   11215 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0916 10:21:29.631162   11215 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0916 10:21:29.663252   11215 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0916 10:21:35.776156   11215 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0916 10:21:35.776244   11215 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19651-3851/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-931581 host does not exist
	  To start a cluster, run: "minikube start -p download-only-931581"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-931581
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnly/v1.31.1/json-events (4.14s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-573915 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-573915 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (4.139956708s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (4.14s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-573915
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-573915: exit status 85 (58.670804ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-931581 | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC |                     |
	|         | -p download-only-931581        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC | 16 Sep 24 10:21 UTC |
	| delete  | -p download-only-931581        | download-only-931581 | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC | 16 Sep 24 10:21 UTC |
	| start   | -o=json --download-only        | download-only-573915 | jenkins | v1.34.0 | 16 Sep 24 10:21 UTC |                     |
	|         | -p download-only-573915        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:21:37
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:21:37.669092   11853 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:21:37.669240   11853 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:21:37.669250   11853 out.go:358] Setting ErrFile to fd 2...
	I0916 10:21:37.669254   11853 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:21:37.669430   11853 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3851/.minikube/bin
	I0916 10:21:37.669962   11853 out.go:352] Setting JSON to true
	I0916 10:21:37.670775   11853 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":248,"bootTime":1726481850,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:21:37.670878   11853 start.go:139] virtualization: kvm guest
	I0916 10:21:37.673112   11853 out.go:97] [download-only-573915] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:21:37.673253   11853 notify.go:220] Checking for updates...
	I0916 10:21:37.674647   11853 out.go:169] MINIKUBE_LOCATION=19651
	I0916 10:21:37.676133   11853 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:21:37.677454   11853 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19651-3851/kubeconfig
	I0916 10:21:37.678780   11853 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 10:21:37.679972   11853 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-573915 host does not exist
	  To start a cluster, run: "minikube start -p download-only-573915"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

TestDownloadOnly/v1.31.1/DeleteAll (0.13s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.13s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-573915
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

TestBinaryMirror (0.59s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-928489 --alsologtostderr --binary-mirror http://127.0.0.1:42715 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-928489" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-928489
--- PASS: TestBinaryMirror (0.59s)

TestOffline (118.22s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-650886 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-650886 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m57.203148307s)
helpers_test.go:175: Cleaning up "offline-crio-650886" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-650886
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-650886: (1.01778437s)
--- PASS: TestOffline (118.22s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-001438
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-001438: exit status 85 (47.112026ms)

-- stdout --
	* Profile "addons-001438" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-001438"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-001438
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-001438: exit status 85 (47.361541ms)

-- stdout --
	* Profile "addons-001438" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-001438"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (203.75s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-001438 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-001438 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m23.754546824s)
--- PASS: TestAddons/Setup (203.75s)

TestAddons/parallel/InspektorGadget (10.85s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-k7c7v" [fa4d0e65-c9fb-4a8c-b461-4246b56d8b4a] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.00517873s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-001438
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-001438: (5.842365246s)
--- PASS: TestAddons/parallel/InspektorGadget (10.85s)

                                                
                                    
TestAddons/parallel/Headlamp (46.67s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-001438 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-cqlgq" [26984c4f-5013-4d18-8a26-82d240918898] Pending
helpers_test.go:344: "headlamp-57fb76fcdb-cqlgq" [26984c4f-5013-4d18-8a26-82d240918898] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-cqlgq" [26984c4f-5013-4d18-8a26-82d240918898] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 40.003676637s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-001438 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p addons-001438 addons disable headlamp --alsologtostderr -v=1: (5.744064617s)
--- PASS: TestAddons/parallel/Headlamp (46.67s)
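
The enable/wait/disable flow above can be repeated by hand against any running profile; a minimal sketch (the profile name is a placeholder):
	minikube addons enable headlamp -p <profile>
	kubectl --context <profile> -n headlamp get pods -w   # wait for the headlamp pod to reach Running
	minikube -p <profile> addons disable headlamp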

                                                
                                    
TestAddons/parallel/CloudSpanner (6.56s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-58ll2" [505d8619-5fc1-4247-af75-f797558c3d45] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004022849s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-001438
--- PASS: TestAddons/parallel/CloudSpanner (6.56s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.51s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-j6n9b" [83260537-f74d-40a8-bcbc-db785a97aac8] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003630698s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-001438
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.51s)

                                                
                                    
TestAddons/StoppedEnableDisable (93.68s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-001438
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-001438: (1m33.416656297s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-001438
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-001438
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-001438
--- PASS: TestAddons/StoppedEnableDisable (93.68s)

                                                
                                    
TestCertExpiration (297.4s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-849615 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-849615 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m0.474757109s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-849615 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-849615 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (55.857326222s)
helpers_test.go:175: Cleaning up "cert-expiration-849615" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-849615
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-849615: (1.06183713s)
--- PASS: TestCertExpiration (297.40s)
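
The two starts above only exercise the --cert-expiration flag; to see its effect on the apiserver certificate itself, a rough follow-up check (run before the cleanup step; the certificate path is an assumption, this log does not show it) would be:
	minikube -p cert-expiration-849615 ssh "sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"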

                                                
                                    
TestForceSystemdFlag (48.63s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-716028 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-716028 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (47.632827011s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-716028 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-716028" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-716028
--- PASS: TestForceSystemdFlag (48.63s)
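
The `cat` of 02-crio.conf above is what the test asserts on: with --force-systemd, CRI-O should be switched to the systemd cgroup manager. A hand check might look like this (the expected value is an assumption, the log does not print the file contents):
	minikube -p force-systemd-flag-716028 ssh "sudo grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"
	# expected (assumed): cgroup_manager = "systemd"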

                                                
                                    
TestForceSystemdEnv (46.34s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-791222 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-791222 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (45.34653405s)
helpers_test.go:175: Cleaning up "force-systemd-env-791222" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-791222
--- PASS: TestForceSystemdEnv (46.34s)

                                                
                                    
TestKVMDriverInstallOrUpdate (3.21s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.21s)

                                                
                                    
TestErrorSpam/setup (42.17s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-263701 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-263701 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-263701 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-263701 --driver=kvm2  --container-runtime=crio: (42.170485366s)
error_spam_test.go:91: acceptable stderr: "E0916 10:33:33.929378   16389 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error"
--- PASS: TestErrorSpam/setup (42.17s)
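
The acceptable stderr above ("exec format error") means the kubectl at /usr/local/bin/kubectl cannot be executed on this host, which usually indicates a binary built for a different architecture. If reproducing locally, two quick checks:
	file /usr/local/bin/kubectl   # reports the binary's target architecture
	uname -m                      # reports the host architecture for comparison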

                                                
                                    
TestErrorSpam/start (0.33s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-263701 --log_dir /tmp/nospam-263701 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-263701 --log_dir /tmp/nospam-263701 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-263701 --log_dir /tmp/nospam-263701 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

                                                
                                    
TestErrorSpam/status (0.71s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-263701 --log_dir /tmp/nospam-263701 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-263701 --log_dir /tmp/nospam-263701 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-263701 --log_dir /tmp/nospam-263701 status
--- PASS: TestErrorSpam/status (0.71s)

                                                
                                    
TestErrorSpam/pause (1.58s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-263701 --log_dir /tmp/nospam-263701 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-263701 --log_dir /tmp/nospam-263701 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-263701 --log_dir /tmp/nospam-263701 pause
--- PASS: TestErrorSpam/pause (1.58s)

                                                
                                    
TestErrorSpam/unpause (1.81s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-263701 --log_dir /tmp/nospam-263701 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-263701 --log_dir /tmp/nospam-263701 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-263701 --log_dir /tmp/nospam-263701 unpause
--- PASS: TestErrorSpam/unpause (1.81s)

                                                
                                    
TestErrorSpam/stop (4.78s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-263701 --log_dir /tmp/nospam-263701 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-263701 --log_dir /tmp/nospam-263701 stop: (2.295197714s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-263701 --log_dir /tmp/nospam-263701 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-263701 --log_dir /tmp/nospam-263701 stop: (1.358356128s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-263701 --log_dir /tmp/nospam-263701 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-263701 --log_dir /tmp/nospam-263701 stop: (1.121126833s)
--- PASS: TestErrorSpam/stop (4.78s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19651-3851/.minikube/files/etc/test/nested/copy/11203/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (54.5s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-553844 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-553844 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (54.496836839s)
--- PASS: TestFunctional/serial/StartWithProxy (54.50s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (53.48s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-553844 --alsologtostderr -v=8
E0916 10:35:08.820717   11203 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:35:08.827817   11203 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:35:08.839212   11203 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:35:08.860547   11203 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:35:08.901961   11203 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:35:08.983493   11203 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:35:09.145078   11203 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:35:09.466772   11203 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:35:10.108853   11203 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:35:11.391154   11203 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:35:13.952751   11203 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:35:19.074322   11203 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:35:29.316692   11203 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-553844 --alsologtostderr -v=8: (53.474734205s)
functional_test.go:663: soft start took 53.475534703s for "functional-553844" cluster.
--- PASS: TestFunctional/serial/SoftStart (53.48s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.34s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-553844 cache add registry.k8s.io/pause:3.1: (1.082469564s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-553844 cache add registry.k8s.io/pause:3.3: (1.161189734s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-553844 cache add registry.k8s.io/pause:latest: (1.099535931s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.34s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.44s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-553844 /tmp/TestFunctionalserialCacheCmdcacheadd_local3394387850/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 cache add minikube-local-cache-test:functional-553844
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-553844 cache add minikube-local-cache-test:functional-553844: (1.122333888s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 cache delete minikube-local-cache-test:functional-553844
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-553844
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.44s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.72s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-553844 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (218.377809ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-amd64 -p functional-553844 cache reload: (1.035368821s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.72s)
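
Condensed, the round trip exercised above is: remove the image from the node, confirm it is gone, then let `cache reload` push the cached copy back (profile name is a placeholder):
	minikube -p <profile> ssh sudo crictl rmi registry.k8s.io/pause:latest
	minikube -p <profile> ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image no longer present
	minikube -p <profile> cache reload
	minikube -p <profile> ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again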

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.08s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 kubectl -- --context functional-553844 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-553844 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

                                                
                                    
TestFunctional/serial/ExtraConfig (39.64s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-553844 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0916 10:35:49.798455   11203 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-553844 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.636694582s)
functional_test.go:761: restart took 39.636803304s for "functional-553844" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (39.64s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.42s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-553844 logs: (1.424054282s)
--- PASS: TestFunctional/serial/LogsCmd (1.42s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.41s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 logs --file /tmp/TestFunctionalserialLogsFileCmd1143440556/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-553844 logs --file /tmp/TestFunctionalserialLogsFileCmd1143440556/001/logs.txt: (1.406761704s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.41s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-553844 config get cpus: exit status 14 (51.221894ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-553844 config get cpus: exit status 14 (48.594418ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.30s)
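
Exit status 14 is what `config get` returns here when the key has no value set; the set/get/unset round trip above, condensed:
	minikube -p <profile> config get cpus     # exit 14: "specified key could not be found in config"
	minikube -p <profile> config set cpus 2
	minikube -p <profile> config get cpus     # prints 2
	minikube -p <profile> config unset cpus
	minikube -p <profile> config get cpus     # exit 14 again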

                                                
                                    
TestFunctional/parallel/DryRun (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-553844 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-553844 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (138.914558ms)

                                                
                                                
-- stdout --
	* [functional-553844] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19651
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19651-3851/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3851/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 10:36:33.854930   20625 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:36:33.855270   20625 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:36:33.855285   20625 out.go:358] Setting ErrFile to fd 2...
	I0916 10:36:33.855291   20625 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:36:33.855497   20625 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3851/.minikube/bin
	I0916 10:36:33.856024   20625 out.go:352] Setting JSON to false
	I0916 10:36:33.857150   20625 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1144,"bootTime":1726481850,"procs":328,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:36:33.857269   20625 start.go:139] virtualization: kvm guest
	I0916 10:36:33.859299   20625 out.go:177] * [functional-553844] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:36:33.860849   20625 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:36:33.860846   20625 notify.go:220] Checking for updates...
	I0916 10:36:33.862362   20625 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:36:33.863613   20625 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3851/kubeconfig
	I0916 10:36:33.864922   20625 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 10:36:33.866067   20625 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:36:33.867237   20625 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:36:33.868928   20625 config.go:182] Loaded profile config "functional-553844": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:36:33.869529   20625 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:36:33.869614   20625 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:36:33.888469   20625 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34027
	I0916 10:36:33.889108   20625 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:36:33.889648   20625 main.go:141] libmachine: Using API Version  1
	I0916 10:36:33.889668   20625 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:36:33.889987   20625 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:36:33.890192   20625 main.go:141] libmachine: (functional-553844) Calling .DriverName
	I0916 10:36:33.890389   20625 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:36:33.890675   20625 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:36:33.890707   20625 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:36:33.908095   20625 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33515
	I0916 10:36:33.908524   20625 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:36:33.909008   20625 main.go:141] libmachine: Using API Version  1
	I0916 10:36:33.909036   20625 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:36:33.909380   20625 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:36:33.909572   20625 main.go:141] libmachine: (functional-553844) Calling .DriverName
	I0916 10:36:33.942137   20625 out.go:177] * Using the kvm2 driver based on existing profile
	I0916 10:36:33.943609   20625 start.go:297] selected driver: kvm2
	I0916 10:36:33.943628   20625 start.go:901] validating driver "kvm2" against &{Name:functional-553844 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-553844 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:36:33.943789   20625 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:36:33.946057   20625 out.go:201] 
	W0916 10:36:33.947241   20625 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0916 10:36:33.948462   20625 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-553844 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.28s)
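
The failing half of the dry run is the memory validation: a request below the usable minimum (1800MB here) aborts with RSRC_INSUFFICIENT_REQ_MEMORY and exit status 23 before any VM work is attempted. Reproducing it needs only the flag:
	minikube start -p <profile> --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio
	echo $?   # 23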

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-553844 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-553844 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (145.584779ms)

                                                
                                                
-- stdout --
	* [functional-553844] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19651
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19651-3851/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3851/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 10:36:34.139611   20738 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:36:34.139721   20738 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:36:34.139734   20738 out.go:358] Setting ErrFile to fd 2...
	I0916 10:36:34.139739   20738 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:36:34.140025   20738 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3851/.minikube/bin
	I0916 10:36:34.140533   20738 out.go:352] Setting JSON to false
	I0916 10:36:34.141585   20738 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1144,"bootTime":1726481850,"procs":331,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:36:34.141651   20738 start.go:139] virtualization: kvm guest
	I0916 10:36:34.143781   20738 out.go:177] * [functional-553844] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I0916 10:36:34.145184   20738 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:36:34.145216   20738 notify.go:220] Checking for updates...
	I0916 10:36:34.147692   20738 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:36:34.148817   20738 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3851/kubeconfig
	I0916 10:36:34.150295   20738 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3851/.minikube
	I0916 10:36:34.151528   20738 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:36:34.152665   20738 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:36:34.154384   20738 config.go:182] Loaded profile config "functional-553844": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 10:36:34.155020   20738 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:36:34.155078   20738 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:36:34.171342   20738 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33011
	I0916 10:36:34.172227   20738 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:36:34.172727   20738 main.go:141] libmachine: Using API Version  1
	I0916 10:36:34.172781   20738 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:36:34.173244   20738 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:36:34.173423   20738 main.go:141] libmachine: (functional-553844) Calling .DriverName
	I0916 10:36:34.173652   20738 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:36:34.173932   20738 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 10:36:34.173966   20738 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:36:34.190306   20738 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42651
	I0916 10:36:34.190589   20738 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:36:34.191111   20738 main.go:141] libmachine: Using API Version  1
	I0916 10:36:34.191136   20738 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:36:34.191610   20738 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:36:34.191807   20738 main.go:141] libmachine: (functional-553844) Calling .DriverName
	I0916 10:36:34.226275   20738 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0916 10:36:34.227529   20738 start.go:297] selected driver: kvm2
	I0916 10:36:34.227545   20738 start.go:901] validating driver "kvm2" against &{Name:functional-553844 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-553844 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:36:34.227685   20738 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:36:34.229775   20738 out.go:201] 
	W0916 10:36:34.231185   20738 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0916 10:36:34.232371   20738 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.91s)
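
The -f argument is a Go template over the status struct; the field names ({{.Host}}, {{.Kubelet}}, {{.APIServer}}, {{.Kubeconfig}}) are what is evaluated, while the labels in front of them (including the "kublet" spelling copied verbatim from the test command) are free text. For example:
	minikube -p <profile> status -f 'host:{{.Host}},kubelet:{{.Kubelet}}'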

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.39s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 ssh -n functional-553844 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 cp functional-553844:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1484377852/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 ssh -n functional-553844 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 ssh -n functional-553844 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.35s)

                                                
                                    
TestFunctional/parallel/FileSync (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/11203/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 ssh "sudo cat /etc/test/nested/copy/11203/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.22s)

                                                
                                    
TestFunctional/parallel/CertSync (1.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/11203.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 ssh "sudo cat /etc/ssl/certs/11203.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/11203.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 ssh "sudo cat /usr/share/ca-certificates/11203.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/112032.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 ssh "sudo cat /etc/ssl/certs/112032.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/112032.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 ssh "sudo cat /usr/share/ca-certificates/112032.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.30s)
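
The .0 files checked above are OpenSSL subject-hash links, the scheme the system trust directory uses to index CA certificates; the grouping in the test (51391683.0 alongside 11203.pem, 3ec20f2e.0 alongside 112032.pem) suggests those hashes belong to those certificates. To see the correspondence by hand, assuming openssl is available in the VM:
	minikube -p <profile> ssh "sudo openssl x509 -noout -subject_hash -in /usr/share/ca-certificates/11203.pem"
	# prints the hash that names the matching /etc/ssl/certs/<hash>.0 link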

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-553844 ssh "sudo systemctl is-active docker": exit status 1 (212.466499ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-553844 ssh "sudo systemctl is-active containerd": exit status 1 (201.801529ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.41s)
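
`systemctl is-active` exits 0 only for an active unit; the "inactive" output above comes with exit status 3, which ssh reports as "Process exited with status 3" and the wrapping minikube command as exit status 1. The complementary positive check for the configured runtime:
	minikube -p <profile> ssh "sudo systemctl is-active crio"   # expected: active, exit 0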

                                                
                                    
TestFunctional/parallel/License (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.16s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "263.752825ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "62.855067ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "281.292701ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "42.429505ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.32s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-553844 /tmp/TestFunctionalparallelMountCmdspecific-port2665835366/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-553844 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (249.028273ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 ssh "findmnt -T /mount-9p | grep 9p"
E0916 10:36:30.759894   11203 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-553844 /tmp/TestFunctionalparallelMountCmdspecific-port2665835366/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-553844 ssh "sudo umount -f /mount-9p": exit status 1 (196.665073ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-553844 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-553844 /tmp/TestFunctionalparallelMountCmdspecific-port2665835366/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.64s)
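The two failures inside this block are benign: the first findmnt runs before the 9p server is ready, and the final umount runs after the mount has already been stopped, so the test still passes on the retried findmnt. The same flow can be reproduced by hand; a sketch, assuming a host directory ./data to export:

out/minikube-linux-amd64 mount -p functional-553844 ./data:/mount-9p --port 46464 &    # start the 9p server on a fixed port
out/minikube-linux-amd64 -p functional-553844 ssh "findmnt -T /mount-9p | grep 9p"     # confirm the guest sees a 9p filesystem
out/minikube-linux-amd64 -p functional-553844 ssh "sudo umount -f /mount-9p"           # optional manual cleanup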

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.85s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-553844 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-553844
localhost/kicbase/echo-server:functional-553844
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/kindest/kindnetd:v20240813-c6f155d6
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-553844 image ls --format short --alsologtostderr:
I0916 10:36:40.302587   21441 out.go:345] Setting OutFile to fd 1 ...
I0916 10:36:40.302802   21441 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 10:36:40.302817   21441 out.go:358] Setting ErrFile to fd 2...
I0916 10:36:40.302826   21441 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 10:36:40.303200   21441 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3851/.minikube/bin
I0916 10:36:40.307076   21441 config.go:182] Loaded profile config "functional-553844": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0916 10:36:40.307222   21441 config.go:182] Loaded profile config "functional-553844": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0916 10:36:40.307806   21441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0916 10:36:40.307854   21441 main.go:141] libmachine: Launching plugin server for driver kvm2
I0916 10:36:40.324497   21441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44665
I0916 10:36:40.325084   21441 main.go:141] libmachine: () Calling .GetVersion
I0916 10:36:40.325879   21441 main.go:141] libmachine: Using API Version  1
I0916 10:36:40.325907   21441 main.go:141] libmachine: () Calling .SetConfigRaw
I0916 10:36:40.326351   21441 main.go:141] libmachine: () Calling .GetMachineName
I0916 10:36:40.326559   21441 main.go:141] libmachine: (functional-553844) Calling .GetState
I0916 10:36:40.328845   21441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0916 10:36:40.328882   21441 main.go:141] libmachine: Launching plugin server for driver kvm2
I0916 10:36:40.347164   21441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34975
I0916 10:36:40.347615   21441 main.go:141] libmachine: () Calling .GetVersion
I0916 10:36:40.348106   21441 main.go:141] libmachine: Using API Version  1
I0916 10:36:40.348123   21441 main.go:141] libmachine: () Calling .SetConfigRaw
I0916 10:36:40.348557   21441 main.go:141] libmachine: () Calling .GetMachineName
I0916 10:36:40.348730   21441 main.go:141] libmachine: (functional-553844) Calling .DriverName
I0916 10:36:40.348977   21441 ssh_runner.go:195] Run: systemctl --version
I0916 10:36:40.349006   21441 main.go:141] libmachine: (functional-553844) Calling .GetSSHHostname
I0916 10:36:40.352613   21441 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
I0916 10:36:40.353264   21441 main.go:141] libmachine: (functional-553844) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3a:6f", ip: ""} in network mk-functional-553844: {Iface:virbr1 ExpiryTime:2024-09-16 11:33:58 +0000 UTC Type:0 Mac:52:54:00:9d:3a:6f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-553844 Clientid:01:52:54:00:9d:3a:6f}
I0916 10:36:40.353335   21441 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined IP address 192.168.39.230 and MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
I0916 10:36:40.353642   21441 main.go:141] libmachine: (functional-553844) Calling .GetSSHPort
I0916 10:36:40.353843   21441 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
I0916 10:36:40.354011   21441 main.go:141] libmachine: (functional-553844) Calling .GetSSHUsername
I0916 10:36:40.354137   21441 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/functional-553844/id_rsa Username:docker}
I0916 10:36:40.485156   21441 ssh_runner.go:195] Run: sudo crictl images --output json
I0916 10:36:40.635879   21441 main.go:141] libmachine: Making call to close driver server
I0916 10:36:40.635900   21441 main.go:141] libmachine: (functional-553844) Calling .Close
I0916 10:36:40.636218   21441 main.go:141] libmachine: Successfully made call to close driver server
I0916 10:36:40.636235   21441 main.go:141] libmachine: Making call to close connection to plugin binary
I0916 10:36:40.636249   21441 main.go:141] libmachine: Making call to close driver server
I0916 10:36:40.636256   21441 main.go:141] libmachine: (functional-553844) Calling .Close
I0916 10:36:40.636534   21441 main.go:141] libmachine: Successfully made call to close driver server
I0916 10:36:40.636549   21441 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.40s)
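As the stderr above shows, image ls is backed by "sudo crictl images --output json" inside the guest, so the same inventory can be inspected directly when debugging image issues:

out/minikube-linux-amd64 -p functional-553844 ssh "sudo crictl images"                  # human-readable listing straight from CRI-O
out/minikube-linux-amd64 -p functional-553844 ssh "sudo crictl images --output json"    # the raw data that image ls reformats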

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-553844 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/kicbase/echo-server           | functional-553844  | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/kindest/kindnetd              | v20240813-c6f155d6 | 12968670680f4 | 87.2MB |
| registry.k8s.io/kube-apiserver          | v1.31.1            | 6bab7719df100 | 95.2MB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/kube-controller-manager | v1.31.1            | 175ffd71cce3d | 89.4MB |
| registry.k8s.io/kube-proxy              | v1.31.1            | 60c005f310ff3 | 92.7MB |
| registry.k8s.io/kube-scheduler          | v1.31.1            | 9aa1fad941575 | 68.4MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| localhost/minikube-local-cache-test     | functional-553844  | fdddd0a5c43f6 | 3.33kB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-553844 image ls --format table --alsologtostderr:
I0916 10:36:40.663632   21515 out.go:345] Setting OutFile to fd 1 ...
I0916 10:36:40.663732   21515 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 10:36:40.663742   21515 out.go:358] Setting ErrFile to fd 2...
I0916 10:36:40.663749   21515 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 10:36:40.664029   21515 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3851/.minikube/bin
I0916 10:36:40.664823   21515 config.go:182] Loaded profile config "functional-553844": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0916 10:36:40.664983   21515 config.go:182] Loaded profile config "functional-553844": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0916 10:36:40.665546   21515 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0916 10:36:40.665601   21515 main.go:141] libmachine: Launching plugin server for driver kvm2
I0916 10:36:40.680381   21515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38985
I0916 10:36:40.680855   21515 main.go:141] libmachine: () Calling .GetVersion
I0916 10:36:40.681414   21515 main.go:141] libmachine: Using API Version  1
I0916 10:36:40.681433   21515 main.go:141] libmachine: () Calling .SetConfigRaw
I0916 10:36:40.681751   21515 main.go:141] libmachine: () Calling .GetMachineName
I0916 10:36:40.681881   21515 main.go:141] libmachine: (functional-553844) Calling .GetState
I0916 10:36:40.683493   21515 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0916 10:36:40.683529   21515 main.go:141] libmachine: Launching plugin server for driver kvm2
I0916 10:36:40.698462   21515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44819
I0916 10:36:40.698882   21515 main.go:141] libmachine: () Calling .GetVersion
I0916 10:36:40.699509   21515 main.go:141] libmachine: Using API Version  1
I0916 10:36:40.699535   21515 main.go:141] libmachine: () Calling .SetConfigRaw
I0916 10:36:40.700048   21515 main.go:141] libmachine: () Calling .GetMachineName
I0916 10:36:40.700229   21515 main.go:141] libmachine: (functional-553844) Calling .DriverName
I0916 10:36:40.700499   21515 ssh_runner.go:195] Run: systemctl --version
I0916 10:36:40.700533   21515 main.go:141] libmachine: (functional-553844) Calling .GetSSHHostname
I0916 10:36:40.703274   21515 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
I0916 10:36:40.703621   21515 main.go:141] libmachine: (functional-553844) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3a:6f", ip: ""} in network mk-functional-553844: {Iface:virbr1 ExpiryTime:2024-09-16 11:33:58 +0000 UTC Type:0 Mac:52:54:00:9d:3a:6f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-553844 Clientid:01:52:54:00:9d:3a:6f}
I0916 10:36:40.703643   21515 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined IP address 192.168.39.230 and MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
I0916 10:36:40.703812   21515 main.go:141] libmachine: (functional-553844) Calling .GetSSHPort
I0916 10:36:40.703955   21515 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
I0916 10:36:40.704037   21515 main.go:141] libmachine: (functional-553844) Calling .GetSSHUsername
I0916 10:36:40.704133   21515 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/functional-553844/id_rsa Username:docker}
I0916 10:36:40.837433   21515 ssh_runner.go:195] Run: sudo crictl images --output json
I0916 10:36:40.914053   21515 main.go:141] libmachine: Making call to close driver server
I0916 10:36:40.914072   21515 main.go:141] libmachine: (functional-553844) Calling .Close
I0916 10:36:40.914331   21515 main.go:141] libmachine: Successfully made call to close driver server
I0916 10:36:40.914346   21515 main.go:141] libmachine: Making call to close connection to plugin binary
I0916 10:36:40.914366   21515 main.go:141] libmachine: Making call to close driver server
I0916 10:36:40.914374   21515 main.go:141] libmachine: (functional-553844) Calling .Close
I0916 10:36:40.914617   21515 main.go:141] libmachine: (functional-553844) DBG | Closing plugin on server side
I0916 10:36:40.914645   21515 main.go:141] libmachine: Successfully made call to close driver server
I0916 10:36:40.914667   21515 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-553844 image ls --format json --alsologtostderr:
[{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":["registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771","registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee1
21b89bb3ef61fdb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"95237600"},{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1","registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"89437508"},{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44","registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"92733849"},{"id":"fdddd0a5c43f6da84d1811f8bc35c32b84e325edbde25e3695f94901fa7e2431","repoDigests":["localhost/minikube-local-cache-test@sha256:fb9f4f27feebdc9ccdea50ce40231a4a720c1a9c4c
460321138b1a6450f70807"],"repoTags":["localhost/minikube-local-cache-test:functional-553844"],"size":"3330"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0","re
gistry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"68420934"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-553844"],"size":"4943877"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f","repoDigests":[
"docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b","docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"87190579"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-553844 image ls --format json --alsologtostderr:
I0916 10:36:40.306785   21440 out.go:345] Setting OutFile to fd 1 ...
I0916 10:36:40.306913   21440 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 10:36:40.306926   21440 out.go:358] Setting ErrFile to fd 2...
I0916 10:36:40.306932   21440 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 10:36:40.307197   21440 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3851/.minikube/bin
I0916 10:36:40.307967   21440 config.go:182] Loaded profile config "functional-553844": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0916 10:36:40.308115   21440 config.go:182] Loaded profile config "functional-553844": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0916 10:36:40.308583   21440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0916 10:36:40.308639   21440 main.go:141] libmachine: Launching plugin server for driver kvm2
I0916 10:36:40.326668   21440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44837
I0916 10:36:40.327116   21440 main.go:141] libmachine: () Calling .GetVersion
I0916 10:36:40.327792   21440 main.go:141] libmachine: Using API Version  1
I0916 10:36:40.327821   21440 main.go:141] libmachine: () Calling .SetConfigRaw
I0916 10:36:40.328172   21440 main.go:141] libmachine: () Calling .GetMachineName
I0916 10:36:40.328486   21440 main.go:141] libmachine: (functional-553844) Calling .GetState
I0916 10:36:40.330363   21440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0916 10:36:40.330401   21440 main.go:141] libmachine: Launching plugin server for driver kvm2
I0916 10:36:40.346560   21440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39409
I0916 10:36:40.347004   21440 main.go:141] libmachine: () Calling .GetVersion
I0916 10:36:40.347525   21440 main.go:141] libmachine: Using API Version  1
I0916 10:36:40.347543   21440 main.go:141] libmachine: () Calling .SetConfigRaw
I0916 10:36:40.347939   21440 main.go:141] libmachine: () Calling .GetMachineName
I0916 10:36:40.348135   21440 main.go:141] libmachine: (functional-553844) Calling .DriverName
I0916 10:36:40.348337   21440 ssh_runner.go:195] Run: systemctl --version
I0916 10:36:40.348402   21440 main.go:141] libmachine: (functional-553844) Calling .GetSSHHostname
I0916 10:36:40.352240   21440 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
I0916 10:36:40.352614   21440 main.go:141] libmachine: (functional-553844) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3a:6f", ip: ""} in network mk-functional-553844: {Iface:virbr1 ExpiryTime:2024-09-16 11:33:58 +0000 UTC Type:0 Mac:52:54:00:9d:3a:6f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-553844 Clientid:01:52:54:00:9d:3a:6f}
I0916 10:36:40.352632   21440 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined IP address 192.168.39.230 and MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
I0916 10:36:40.352828   21440 main.go:141] libmachine: (functional-553844) Calling .GetSSHPort
I0916 10:36:40.352981   21440 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
I0916 10:36:40.353163   21440 main.go:141] libmachine: (functional-553844) Calling .GetSSHUsername
I0916 10:36:40.353318   21440 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/functional-553844/id_rsa Username:docker}
I0916 10:36:40.479193   21440 ssh_runner.go:195] Run: sudo crictl images --output json
I0916 10:36:40.601809   21440 main.go:141] libmachine: Making call to close driver server
I0916 10:36:40.601831   21440 main.go:141] libmachine: (functional-553844) Calling .Close
I0916 10:36:40.602121   21440 main.go:141] libmachine: Successfully made call to close driver server
I0916 10:36:40.602137   21440 main.go:141] libmachine: Making call to close connection to plugin binary
I0916 10:36:40.602152   21440 main.go:141] libmachine: Making call to close driver server
I0916 10:36:40.602160   21440 main.go:141] libmachine: (functional-553844) Calling .Close
I0916 10:36:40.603703   21440 main.go:141] libmachine: Successfully made call to close driver server
I0916 10:36:40.603722   21440 main.go:141] libmachine: Making call to close connection to plugin binary
I0916 10:36:40.603720   21440 main.go:141] libmachine: (functional-553844) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.36s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-553844 image ls --format yaml --alsologtostderr:
- id: 12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f
repoDigests:
- docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "87190579"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
- registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "92733849"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-553844
size: "4943877"
- id: fdddd0a5c43f6da84d1811f8bc35c32b84e325edbde25e3695f94901fa7e2431
repoDigests:
- localhost/minikube-local-cache-test@sha256:fb9f4f27feebdc9ccdea50ce40231a4a720c1a9c4c460321138b1a6450f70807
repoTags:
- localhost/minikube-local-cache-test:functional-553844
size: "3330"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
- registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "89437508"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "95237600"
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
- registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "68420934"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-553844 image ls --format yaml --alsologtostderr:
I0916 10:36:40.308499   21439 out.go:345] Setting OutFile to fd 1 ...
I0916 10:36:40.308637   21439 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 10:36:40.308647   21439 out.go:358] Setting ErrFile to fd 2...
I0916 10:36:40.308653   21439 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 10:36:40.308944   21439 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3851/.minikube/bin
I0916 10:36:40.309769   21439 config.go:182] Loaded profile config "functional-553844": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0916 10:36:40.309922   21439 config.go:182] Loaded profile config "functional-553844": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0916 10:36:40.310447   21439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0916 10:36:40.310500   21439 main.go:141] libmachine: Launching plugin server for driver kvm2
I0916 10:36:40.324741   21439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45369
I0916 10:36:40.325118   21439 main.go:141] libmachine: () Calling .GetVersion
I0916 10:36:40.325793   21439 main.go:141] libmachine: Using API Version  1
I0916 10:36:40.325817   21439 main.go:141] libmachine: () Calling .SetConfigRaw
I0916 10:36:40.326209   21439 main.go:141] libmachine: () Calling .GetMachineName
I0916 10:36:40.326491   21439 main.go:141] libmachine: (functional-553844) Calling .GetState
I0916 10:36:40.329267   21439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0916 10:36:40.329311   21439 main.go:141] libmachine: Launching plugin server for driver kvm2
I0916 10:36:40.344662   21439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32959
I0916 10:36:40.345154   21439 main.go:141] libmachine: () Calling .GetVersion
I0916 10:36:40.345779   21439 main.go:141] libmachine: Using API Version  1
I0916 10:36:40.345804   21439 main.go:141] libmachine: () Calling .SetConfigRaw
I0916 10:36:40.346195   21439 main.go:141] libmachine: () Calling .GetMachineName
I0916 10:36:40.346371   21439 main.go:141] libmachine: (functional-553844) Calling .DriverName
I0916 10:36:40.346551   21439 ssh_runner.go:195] Run: systemctl --version
I0916 10:36:40.346579   21439 main.go:141] libmachine: (functional-553844) Calling .GetSSHHostname
I0916 10:36:40.350710   21439 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
I0916 10:36:40.351310   21439 main.go:141] libmachine: (functional-553844) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3a:6f", ip: ""} in network mk-functional-553844: {Iface:virbr1 ExpiryTime:2024-09-16 11:33:58 +0000 UTC Type:0 Mac:52:54:00:9d:3a:6f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-553844 Clientid:01:52:54:00:9d:3a:6f}
I0916 10:36:40.351338   21439 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined IP address 192.168.39.230 and MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
I0916 10:36:40.351470   21439 main.go:141] libmachine: (functional-553844) Calling .GetSSHPort
I0916 10:36:40.351621   21439 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
I0916 10:36:40.351733   21439 main.go:141] libmachine: (functional-553844) Calling .GetSSHUsername
I0916 10:36:40.351825   21439 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/functional-553844/id_rsa Username:docker}
I0916 10:36:40.481760   21439 ssh_runner.go:195] Run: sudo crictl images --output json
I0916 10:36:40.582186   21439 main.go:141] libmachine: Making call to close driver server
I0916 10:36:40.582203   21439 main.go:141] libmachine: (functional-553844) Calling .Close
I0916 10:36:40.582531   21439 main.go:141] libmachine: (functional-553844) DBG | Closing plugin on server side
I0916 10:36:40.582586   21439 main.go:141] libmachine: Successfully made call to close driver server
I0916 10:36:40.582596   21439 main.go:141] libmachine: Making call to close connection to plugin binary
I0916 10:36:40.582614   21439 main.go:141] libmachine: Making call to close driver server
I0916 10:36:40.582624   21439 main.go:141] libmachine: (functional-553844) Calling .Close
I0916 10:36:40.582873   21439 main.go:141] libmachine: (functional-553844) DBG | Closing plugin on server side
I0916 10:36:40.583014   21439 main.go:141] libmachine: Successfully made call to close driver server
I0916 10:36:40.583055   21439 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (4.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-553844 ssh pgrep buildkitd: exit status 1 (239.5032ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 image build -t localhost/my-image:functional-553844 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-553844 image build -t localhost/my-image:functional-553844 testdata/build --alsologtostderr: (3.63977427s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-553844 image build -t localhost/my-image:functional-553844 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> a754b3919d6
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-553844
--> 00404940d4f
Successfully tagged localhost/my-image:functional-553844
00404940d4fda6dcff700308d645fce701948f2320995ec5fe9c43ffa89abc2a
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-553844 image build -t localhost/my-image:functional-553844 testdata/build --alsologtostderr:
I0916 10:36:40.879566   21555 out.go:345] Setting OutFile to fd 1 ...
I0916 10:36:40.879718   21555 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 10:36:40.879728   21555 out.go:358] Setting ErrFile to fd 2...
I0916 10:36:40.879732   21555 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 10:36:40.879934   21555 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3851/.minikube/bin
I0916 10:36:40.880490   21555 config.go:182] Loaded profile config "functional-553844": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0916 10:36:40.881075   21555 config.go:182] Loaded profile config "functional-553844": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0916 10:36:40.881507   21555 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0916 10:36:40.881547   21555 main.go:141] libmachine: Launching plugin server for driver kvm2
I0916 10:36:40.896470   21555 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37619
I0916 10:36:40.897004   21555 main.go:141] libmachine: () Calling .GetVersion
I0916 10:36:40.897512   21555 main.go:141] libmachine: Using API Version  1
I0916 10:36:40.897536   21555 main.go:141] libmachine: () Calling .SetConfigRaw
I0916 10:36:40.897863   21555 main.go:141] libmachine: () Calling .GetMachineName
I0916 10:36:40.898046   21555 main.go:141] libmachine: (functional-553844) Calling .GetState
I0916 10:36:40.899854   21555 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0916 10:36:40.899895   21555 main.go:141] libmachine: Launching plugin server for driver kvm2
I0916 10:36:40.915312   21555 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33565
I0916 10:36:40.915757   21555 main.go:141] libmachine: () Calling .GetVersion
I0916 10:36:40.916296   21555 main.go:141] libmachine: Using API Version  1
I0916 10:36:40.916323   21555 main.go:141] libmachine: () Calling .SetConfigRaw
I0916 10:36:40.916637   21555 main.go:141] libmachine: () Calling .GetMachineName
I0916 10:36:40.916794   21555 main.go:141] libmachine: (functional-553844) Calling .DriverName
I0916 10:36:40.916982   21555 ssh_runner.go:195] Run: systemctl --version
I0916 10:36:40.917010   21555 main.go:141] libmachine: (functional-553844) Calling .GetSSHHostname
I0916 10:36:40.920000   21555 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
I0916 10:36:40.920457   21555 main.go:141] libmachine: (functional-553844) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3a:6f", ip: ""} in network mk-functional-553844: {Iface:virbr1 ExpiryTime:2024-09-16 11:33:58 +0000 UTC Type:0 Mac:52:54:00:9d:3a:6f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:functional-553844 Clientid:01:52:54:00:9d:3a:6f}
I0916 10:36:40.920567   21555 main.go:141] libmachine: (functional-553844) DBG | domain functional-553844 has defined IP address 192.168.39.230 and MAC address 52:54:00:9d:3a:6f in network mk-functional-553844
I0916 10:36:40.920743   21555 main.go:141] libmachine: (functional-553844) Calling .GetSSHPort
I0916 10:36:40.920900   21555 main.go:141] libmachine: (functional-553844) Calling .GetSSHKeyPath
I0916 10:36:40.921047   21555 main.go:141] libmachine: (functional-553844) Calling .GetSSHUsername
I0916 10:36:40.921174   21555 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/functional-553844/id_rsa Username:docker}
I0916 10:36:41.015379   21555 build_images.go:161] Building image from path: /tmp/build.1988420356.tar
I0916 10:36:41.015440   21555 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0916 10:36:41.030852   21555 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1988420356.tar
I0916 10:36:41.038908   21555 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1988420356.tar: stat -c "%s %y" /var/lib/minikube/build/build.1988420356.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1988420356.tar': No such file or directory
I0916 10:36:41.038947   21555 ssh_runner.go:362] scp /tmp/build.1988420356.tar --> /var/lib/minikube/build/build.1988420356.tar (3072 bytes)
I0916 10:36:41.098337   21555 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1988420356
I0916 10:36:41.121165   21555 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1988420356 -xf /var/lib/minikube/build/build.1988420356.tar
I0916 10:36:41.134735   21555 crio.go:315] Building image: /var/lib/minikube/build/build.1988420356
I0916 10:36:41.134820   21555 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-553844 /var/lib/minikube/build/build.1988420356 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0916 10:36:44.440888   21555 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-553844 /var/lib/minikube/build/build.1988420356 --cgroup-manager=cgroupfs: (3.306036349s)
I0916 10:36:44.440955   21555 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1988420356
I0916 10:36:44.453673   21555 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1988420356.tar
I0916 10:36:44.463749   21555 build_images.go:217] Built localhost/my-image:functional-553844 from /tmp/build.1988420356.tar
I0916 10:36:44.463781   21555 build_images.go:133] succeeded building to: functional-553844
I0916 10:36:44.463788   21555 build_images.go:134] failed building to: 
I0916 10:36:44.463806   21555 main.go:141] libmachine: Making call to close driver server
I0916 10:36:44.463818   21555 main.go:141] libmachine: (functional-553844) Calling .Close
I0916 10:36:44.464078   21555 main.go:141] libmachine: (functional-553844) DBG | Closing plugin on server side
I0916 10:36:44.464096   21555 main.go:141] libmachine: Successfully made call to close driver server
I0916 10:36:44.464107   21555 main.go:141] libmachine: Making call to close connection to plugin binary
I0916 10:36:44.464121   21555 main.go:141] libmachine: Making call to close driver server
I0916 10:36:44.464135   21555 main.go:141] libmachine: (functional-553844) Calling .Close
I0916 10:36:44.464337   21555 main.go:141] libmachine: (functional-553844) DBG | Closing plugin on server side
I0916 10:36:44.464399   21555 main.go:141] libmachine: Successfully made call to close driver server
I0916 10:36:44.464422   21555 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.09s)
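The STEP lines above correspond to a three-instruction build file under testdata/build (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /), which minikube tars up, copies into the guest, and builds with podman under CRI-O. A condensed manual rerun, assuming the same testdata/build directory is available in the working tree:

out/minikube-linux-amd64 -p functional-553844 image build -t localhost/my-image:functional-553844 testdata/build --alsologtostderr
out/minikube-linux-amd64 -p functional-553844 image ls | grep my-image    # localhost/my-image:functional-553844 should now be listed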

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (0.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-553844
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.97s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-553844 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2311539262/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-553844 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2311539262/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-553844 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2311539262/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-553844 ssh "findmnt -T" /mount1: exit status 1 (239.123971ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-553844 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-553844 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2311539262/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-553844 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2311539262/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-553844 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2311539262/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.54s)
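Cleanup here goes through mount --kill=true, which tears down the background mount helpers for the profile in one call; the three "unable to find parent, assuming dead" lines confirm the daemons were already gone by the time the individual stops ran. The same command is useful when a mount process is left behind after an interrupted run:

out/minikube-linux-amd64 mount -p functional-553844 --kill=true    # stop all lingering mount processes for this profile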

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 image load --daemon kicbase/echo-server:functional-553844 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-553844 image load --daemon kicbase/echo-server:functional-553844 --alsologtostderr: (1.419964472s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.67s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 image load --daemon kicbase/echo-server:functional-553844 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.00s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-553844
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 image load --daemon kicbase/echo-server:functional-553844 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.41s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 image save kicbase/echo-server:functional-553844 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.84s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 image rm kicbase/echo-server:functional-553844 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.76s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:409: (dbg) Done: out/minikube-linux-amd64 -p functional-553844 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (1.259147295s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.52s)
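The three image tests above exercise a save → remove → load round-trip through a local tarball. Below is a minimal sketch of the same sequence, assuming a built binary at ./out/minikube-linux-amd64, an already-running functional-553844 profile, and a hypothetical scratch path /tmp/echo-server-save.tar.

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// run invokes the minikube binary and aborts on any non-zero exit.
func run(args ...string) string {
	out, err := exec.Command("./out/minikube-linux-amd64", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("minikube %v failed: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	const profile = "functional-553844"            // assumption: profile already running
	const image = "kicbase/echo-server:" + profile // tag used by the tests above
	const tarball = "/tmp/echo-server-save.tar"    // hypothetical scratch path

	run("-p", profile, "image", "save", image, tarball) // export the image to a tarball
	run("-p", profile, "image", "rm", image)            // remove it from the container runtime
	run("-p", profile, "image", "load", tarball)        // restore it from the tarball
	fmt.Print(run("-p", profile, "image", "ls"))        // confirm it is listed again
}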

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-553844
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-553844 image save --daemon kicbase/echo-server:functional-553844 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-553844
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.71s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-553844
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-553844
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-553844
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (195.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-244475 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0916 10:40:08.820783   11203 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:40:36.523639   11203 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-244475 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m14.395442455s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (195.07s)
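StartCluster above brings up a multi-control-plane cluster in a single command via the --ha flag and then polls it with status. Below is a minimal sketch of that flow, assuming a built binary at ./out/minikube-linux-amd64; the profile name ha-demo is hypothetical.

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	const bin = "./out/minikube-linux-amd64"
	const profile = "ha-demo" // hypothetical profile name

	// Bring up the control planes in one shot via --ha, as the test does.
	start := exec.Command(bin, "start", "-p", profile, "--ha", "--wait=true",
		"--memory=2200", "--driver=kvm2", "--container-runtime=crio")
	if out, err := start.CombinedOutput(); err != nil {
		log.Fatalf("start failed: %v\n%s", err, out)
	}

	// status exits non-zero when a node is degraded, so the error check doubles as a health check.
	status, err := exec.Command(bin, "-p", profile, "status", "-v=7", "--alsologtostderr").CombinedOutput()
	if err != nil {
		log.Fatalf("status failed: %v\n%s", err, status)
	}
	fmt.Print(string(status))
}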

                                                
                                    
TestMultiControlPlane/serial/DeployApp (4.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-244475 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-244475 -- rollout status deployment/busybox
E0916 10:41:28.278316   11203 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:41:28.284723   11203 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:41:28.296182   11203 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:41:28.317635   11203 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:41:28.359057   11203 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:41:28.440511   11203 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:41:28.601995   11203 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:41:28.923829   11203 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-244475 -- rollout status deployment/busybox: (2.028915608s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-244475 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-244475 -- get pods -o jsonpath='{.items[*].metadata.name}'
E0916 10:41:29.565620   11203 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-244475 -- exec busybox-7dff88458-7bhqg -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-244475 -- exec busybox-7dff88458-d4m5s -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-244475 -- exec busybox-7dff88458-t6fmb -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-244475 -- exec busybox-7dff88458-7bhqg -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-244475 -- exec busybox-7dff88458-d4m5s -- nslookup kubernetes.default
E0916 10:41:30.847093   11203 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-244475 -- exec busybox-7dff88458-t6fmb -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-244475 -- exec busybox-7dff88458-7bhqg -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-244475 -- exec busybox-7dff88458-d4m5s -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-244475 -- exec busybox-7dff88458-t6fmb -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.38s)
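DeployApp verifies in-cluster DNS by running nslookup inside each busybox replica against three names of increasing specificity. Below is a minimal sketch of the same probe, assuming kubectl is on PATH and pointed at the cluster; the app=busybox label selector is an assumption (the test enumerates pods without a selector).

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// assumption: the busybox replicas carry an app=busybox label
	out, err := exec.Command("kubectl", "get", "pods", "-l", "app=busybox",
		"-o", "jsonpath={.items[*].metadata.name}").Output()
	if err != nil {
		log.Fatal(err)
	}
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range strings.Fields(string(out)) {
		for _, host := range names {
			// Each replica must resolve every name, otherwise cluster DNS is broken on that node.
			if res, err := exec.Command("kubectl", "exec", pod, "--", "nslookup", host).CombinedOutput(); err != nil {
				log.Fatalf("%s could not resolve %s: %v\n%s", pod, host, err, res)
			}
			fmt.Printf("%s resolved %s\n", pod, host)
		}
	}
}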

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.29s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-244475 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-244475 -- exec busybox-7dff88458-7bhqg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-244475 -- exec busybox-7dff88458-7bhqg -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-244475 -- exec busybox-7dff88458-d4m5s -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-244475 -- exec busybox-7dff88458-d4m5s -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-244475 -- exec busybox-7dff88458-t6fmb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-244475 -- exec busybox-7dff88458-t6fmb -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.29s)
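PingHostFromPods extracts the address that host.minikube.internal resolves to inside each pod (line 5, field 3 of busybox's nslookup output) and pings it once to prove the KVM host is reachable from the pod network. Below is a minimal sketch, assuming kubectl is on PATH; the pod name is copied from the log and may differ on another run.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	const pod = "busybox-7dff88458-7bhqg" // hypothetical: any busybox pod from the deployment

	// Same in-pod pipeline as the test: pick the resolved address out of nslookup's output.
	lookup := `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`
	ipOut, err := exec.Command("kubectl", "exec", pod, "--", "sh", "-c", lookup).Output()
	if err != nil {
		log.Fatal(err)
	}
	ip := strings.TrimSpace(string(ipOut))

	// One ICMP echo is enough to show the host gateway answers from inside the pod.
	if out, err := exec.Command("kubectl", "exec", pod, "--", "sh", "-c",
		"ping -c 1 "+ip).CombinedOutput(); err != nil {
		log.Fatalf("host %s unreachable from %s: %v\n%s", ip, pod, err, out)
	}
	fmt.Printf("%s can reach the KVM host at %s\n", pod, ip)
}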

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (53.01s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-244475 -v=7 --alsologtostderr
E0916 10:41:33.408621   11203 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:41:38.530835   11203 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:41:48.772271   11203 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:42:09.254423   11203 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-244475 -v=7 --alsologtostderr: (52.167952519s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (53.01s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.55s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (12.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 cp testdata/cp-test.txt ha-244475:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 ssh -n ha-244475 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 cp ha-244475:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1630339340/001/cp-test_ha-244475.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 ssh -n ha-244475 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 cp ha-244475:/home/docker/cp-test.txt ha-244475-m02:/home/docker/cp-test_ha-244475_ha-244475-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 ssh -n ha-244475 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 ssh -n ha-244475-m02 "sudo cat /home/docker/cp-test_ha-244475_ha-244475-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 cp ha-244475:/home/docker/cp-test.txt ha-244475-m03:/home/docker/cp-test_ha-244475_ha-244475-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 ssh -n ha-244475 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 ssh -n ha-244475-m03 "sudo cat /home/docker/cp-test_ha-244475_ha-244475-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 cp ha-244475:/home/docker/cp-test.txt ha-244475-m04:/home/docker/cp-test_ha-244475_ha-244475-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 ssh -n ha-244475 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 ssh -n ha-244475-m04 "sudo cat /home/docker/cp-test_ha-244475_ha-244475-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 cp testdata/cp-test.txt ha-244475-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 ssh -n ha-244475-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 cp ha-244475-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1630339340/001/cp-test_ha-244475-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 ssh -n ha-244475-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 cp ha-244475-m02:/home/docker/cp-test.txt ha-244475:/home/docker/cp-test_ha-244475-m02_ha-244475.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 ssh -n ha-244475-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 ssh -n ha-244475 "sudo cat /home/docker/cp-test_ha-244475-m02_ha-244475.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 cp ha-244475-m02:/home/docker/cp-test.txt ha-244475-m03:/home/docker/cp-test_ha-244475-m02_ha-244475-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 ssh -n ha-244475-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 ssh -n ha-244475-m03 "sudo cat /home/docker/cp-test_ha-244475-m02_ha-244475-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 cp ha-244475-m02:/home/docker/cp-test.txt ha-244475-m04:/home/docker/cp-test_ha-244475-m02_ha-244475-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 ssh -n ha-244475-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 ssh -n ha-244475-m04 "sudo cat /home/docker/cp-test_ha-244475-m02_ha-244475-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 cp testdata/cp-test.txt ha-244475-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 ssh -n ha-244475-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 cp ha-244475-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1630339340/001/cp-test_ha-244475-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 ssh -n ha-244475-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 cp ha-244475-m03:/home/docker/cp-test.txt ha-244475:/home/docker/cp-test_ha-244475-m03_ha-244475.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 ssh -n ha-244475-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 ssh -n ha-244475 "sudo cat /home/docker/cp-test_ha-244475-m03_ha-244475.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 cp ha-244475-m03:/home/docker/cp-test.txt ha-244475-m02:/home/docker/cp-test_ha-244475-m03_ha-244475-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 ssh -n ha-244475-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 ssh -n ha-244475-m02 "sudo cat /home/docker/cp-test_ha-244475-m03_ha-244475-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 cp ha-244475-m03:/home/docker/cp-test.txt ha-244475-m04:/home/docker/cp-test_ha-244475-m03_ha-244475-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 ssh -n ha-244475-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 ssh -n ha-244475-m04 "sudo cat /home/docker/cp-test_ha-244475-m03_ha-244475-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 cp testdata/cp-test.txt ha-244475-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 ssh -n ha-244475-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 cp ha-244475-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1630339340/001/cp-test_ha-244475-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 ssh -n ha-244475-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 cp ha-244475-m04:/home/docker/cp-test.txt ha-244475:/home/docker/cp-test_ha-244475-m04_ha-244475.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 ssh -n ha-244475-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 ssh -n ha-244475 "sudo cat /home/docker/cp-test_ha-244475-m04_ha-244475.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 cp ha-244475-m04:/home/docker/cp-test.txt ha-244475-m02:/home/docker/cp-test_ha-244475-m04_ha-244475-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 ssh -n ha-244475-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 ssh -n ha-244475-m02 "sudo cat /home/docker/cp-test_ha-244475-m04_ha-244475-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 cp ha-244475-m04:/home/docker/cp-test.txt ha-244475-m03:/home/docker/cp-test_ha-244475-m04_ha-244475-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 ssh -n ha-244475-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 ssh -n ha-244475-m03 "sudo cat /home/docker/cp-test_ha-244475-m04_ha-244475-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.68s)
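CopyFile pushes a fixture onto every node with minikube cp and reads it back over ssh to confirm the contents survive the trip. Below is a minimal sketch of that pattern, assuming the ha-244475 profile from the log and a local fixture at testdata/cp-test.txt (hypothetical path).

package main

import (
	"bytes"
	"fmt"
	"log"
	"os"
	"os/exec"
)

func main() {
	const bin = "./out/minikube-linux-amd64"
	const profile = "ha-244475"
	const local = "testdata/cp-test.txt" // hypothetical local fixture

	want, err := os.ReadFile(local)
	if err != nil {
		log.Fatal(err)
	}
	for _, node := range []string{"ha-244475", "ha-244475-m02", "ha-244475-m03", "ha-244475-m04"} {
		// Copy the fixture onto the node...
		if out, err := exec.Command(bin, "-p", profile, "cp", local,
			node+":/home/docker/cp-test.txt").CombinedOutput(); err != nil {
			log.Fatalf("cp to %s failed: %v\n%s", node, err, out)
		}
		// ...and read it back over ssh to confirm the contents match.
		got, err := exec.Command(bin, "-p", profile, "ssh", "-n", node,
			"sudo cat /home/docker/cp-test.txt").Output()
		if err != nil {
			log.Fatal(err)
		}
		if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
			log.Fatalf("contents differ on %s", node)
		}
	}
	fmt.Println("cp-test.txt verified on all four nodes")
}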

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.478112902s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.48s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.38s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.37s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.37s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (75.33s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-244475 --control-plane -v=7 --alsologtostderr
E0916 11:00:08.821230   11203 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-244475 --control-plane -v=7 --alsologtostderr: (1m14.489616979s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-244475 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (75.33s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.54s)

                                                
                                    
TestJSONOutput/start/Command (55.89s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-442974 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0916 11:01:28.278387   11203 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-442974 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (55.884516481s)
--- PASS: TestJSONOutput/start/Command (55.89s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.7s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-442974 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.70s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.62s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-442974 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.62s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.35s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-442974 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-442974 --output=json --user=testUser: (7.351162738s)
--- PASS: TestJSONOutput/stop/Command (7.35s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.19s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-399930 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-399930 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (61.105576ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"82f1e887-2a95-41cc-9612-fcc1cafd76db","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-399930] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b1966f1a-969a-4082-b914-d41a0bfab5c4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19651"}}
	{"specversion":"1.0","id":"7e7de6d1-81d8-48c1-a6da-f202c4c4a008","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1db8e935-9867-4e07-a611-6327d3e2c021","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19651-3851/kubeconfig"}}
	{"specversion":"1.0","id":"31c39d48-4b2f-40dc-9509-f22f9b1797ee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3851/.minikube"}}
	{"specversion":"1.0","id":"5c041770-e8da-4621-a030-78d7e9a94dad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"ebb65fa7-962d-4686-8eca-630a9b4c6c72","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"457f30c7-4721-4edc-8e8c-a26e2dd550eb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-399930" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-399930
--- PASS: TestErrorJSONOutput (0.19s)
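Each line in the stdout block above is a CloudEvents-style JSON event emitted by minikube's --output=json mode. Below is a minimal sketch that decodes the error event shown there; the struct covers only the fields that output uses and is an assumption, not minikube's canonical schema.

package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// minikubeEvent maps just the fields visible in the log output.
type minikubeEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// Taken verbatim from the test output above.
	line := `{"specversion":"1.0","id":"457f30c7-4721-4edc-8e8c-a26e2dd550eb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}`

	var ev minikubeEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s: %s (exit code %s)\n", ev.Type, ev.Data["message"], ev.Data["exitcode"])
	// prints: io.k8s.sigs.minikube.error: The driver 'fail' is not supported on linux/amd64 (exit code 56)
}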

                                                
                                    
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (84.43s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-453817 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-453817 --driver=kvm2  --container-runtime=crio: (40.097397965s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-466865 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-466865 --driver=kvm2  --container-runtime=crio: (41.684496599s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-453817
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-466865
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-466865" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-466865
helpers_test.go:175: Cleaning up "first-453817" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-453817
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-453817: (1.002772793s)
--- PASS: TestMinikubeProfile (84.43s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (29.77s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-777774 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-777774 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.772242963s)
--- PASS: TestMountStart/serial/StartWithMountFirst (29.77s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-777774 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-777774 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.36s)
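VerifyMountFirst checks the 9p host mount from inside the guest: list the mounted host directory, then confirm a 9p entry in the guest's mount table. Below is a minimal sketch of that check, assuming the mount-start-1-777774 profile from the log is still running and the binary lives at ./out/minikube-linux-amd64.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	const bin = "./out/minikube-linux-amd64"
	const profile = "mount-start-1-777774"

	// The host directory should be visible at /minikube-host inside the guest.
	ls, err := exec.Command(bin, "-p", profile, "ssh", "--", "ls", "/minikube-host").CombinedOutput()
	if err != nil {
		log.Fatalf("ls failed: %v\n%s", err, ls)
	}
	// The mount table should contain a 9p filesystem entry for that path.
	mounts, err := exec.Command(bin, "-p", profile, "ssh", "--", "mount").CombinedOutput()
	if err != nil {
		log.Fatalf("mount failed: %v\n%s", err, mounts)
	}
	if !strings.Contains(string(mounts), "9p") {
		log.Fatal("no 9p mount found in guest")
	}
	fmt.Printf("host mount visible in guest:\n%s", ls)
}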

                                                
                                    
TestMountStart/serial/StartWithMountSecond (28.13s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-789477 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-789477 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.132958364s)
--- PASS: TestMountStart/serial/StartWithMountSecond (28.13s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-789477 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-789477 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.69s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-777774 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.69s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-789477 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-789477 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                    
TestMountStart/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-789477
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-789477: (1.271651318s)
--- PASS: TestMountStart/serial/Stop (1.27s)

                                                
                                    
TestMountStart/serial/RestartStopped (22.32s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-789477
E0916 11:05:08.820948   11203 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-789477: (21.317498186s)
--- PASS: TestMountStart/serial/RestartStopped (22.32s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-789477 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-789477 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (110.02s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-736061 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0916 11:06:28.278047   11203 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-736061 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m49.622079009s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736061 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (110.02s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.19s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-736061 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-736061 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-736061 -- rollout status deployment/busybox: (2.661517178s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-736061 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-736061 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-736061 -- exec busybox-7dff88458-754d4 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-736061 -- exec busybox-7dff88458-g9fqk -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-736061 -- exec busybox-7dff88458-754d4 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-736061 -- exec busybox-7dff88458-g9fqk -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-736061 -- exec busybox-7dff88458-754d4 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-736061 -- exec busybox-7dff88458-g9fqk -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.19s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.79s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-736061 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-736061 -- exec busybox-7dff88458-754d4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-736061 -- exec busybox-7dff88458-754d4 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-736061 -- exec busybox-7dff88458-g9fqk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-736061 -- exec busybox-7dff88458-g9fqk -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.79s)

                                                
                                    
TestMultiNode/serial/AddNode (49.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-736061 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-736061 -v 3 --alsologtostderr: (48.703223076s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736061 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (49.27s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.22s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736061 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736061 cp testdata/cp-test.txt multinode-736061:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736061 ssh -n multinode-736061 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736061 cp multinode-736061:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1886615299/001/cp-test_multinode-736061.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736061 ssh -n multinode-736061 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736061 cp multinode-736061:/home/docker/cp-test.txt multinode-736061-m02:/home/docker/cp-test_multinode-736061_multinode-736061-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736061 ssh -n multinode-736061 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736061 ssh -n multinode-736061-m02 "sudo cat /home/docker/cp-test_multinode-736061_multinode-736061-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736061 cp multinode-736061:/home/docker/cp-test.txt multinode-736061-m03:/home/docker/cp-test_multinode-736061_multinode-736061-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736061 ssh -n multinode-736061 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736061 ssh -n multinode-736061-m03 "sudo cat /home/docker/cp-test_multinode-736061_multinode-736061-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736061 cp testdata/cp-test.txt multinode-736061-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736061 ssh -n multinode-736061-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736061 cp multinode-736061-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1886615299/001/cp-test_multinode-736061-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736061 ssh -n multinode-736061-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736061 cp multinode-736061-m02:/home/docker/cp-test.txt multinode-736061:/home/docker/cp-test_multinode-736061-m02_multinode-736061.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736061 ssh -n multinode-736061-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736061 ssh -n multinode-736061 "sudo cat /home/docker/cp-test_multinode-736061-m02_multinode-736061.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736061 cp multinode-736061-m02:/home/docker/cp-test.txt multinode-736061-m03:/home/docker/cp-test_multinode-736061-m02_multinode-736061-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736061 ssh -n multinode-736061-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736061 ssh -n multinode-736061-m03 "sudo cat /home/docker/cp-test_multinode-736061-m02_multinode-736061-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736061 cp testdata/cp-test.txt multinode-736061-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736061 ssh -n multinode-736061-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736061 cp multinode-736061-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1886615299/001/cp-test_multinode-736061-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736061 ssh -n multinode-736061-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736061 cp multinode-736061-m03:/home/docker/cp-test.txt multinode-736061:/home/docker/cp-test_multinode-736061-m03_multinode-736061.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736061 ssh -n multinode-736061-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736061 ssh -n multinode-736061 "sudo cat /home/docker/cp-test_multinode-736061-m03_multinode-736061.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736061 cp multinode-736061-m03:/home/docker/cp-test.txt multinode-736061-m02:/home/docker/cp-test_multinode-736061-m03_multinode-736061-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736061 ssh -n multinode-736061-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736061 ssh -n multinode-736061-m02 "sudo cat /home/docker/cp-test_multinode-736061-m03_multinode-736061-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.11s)
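Note: the CopyFile checks above all follow the same round trip: push a file with `minikube cp`, then read it back over `minikube ssh` and compare. A minimal manual sketch of that pattern, reusing the profile and node names from the log (the target filename cp-test_copy.txt is an illustrative name of my own, not one the test uses):

	# copy a local file onto the primary node
	out/minikube-linux-amd64 -p multinode-736061 cp testdata/cp-test.txt multinode-736061:/home/docker/cp-test.txt
	# read it back over ssh to confirm the contents survived the copy
	out/minikube-linux-amd64 -p multinode-736061 ssh -n multinode-736061 "sudo cat /home/docker/cp-test.txt"
	# node-to-node copies use the same <node>:<path> syntax for source and target
	out/minikube-linux-amd64 -p multinode-736061 cp multinode-736061:/home/docker/cp-test.txt multinode-736061-m02:/home/docker/cp-test_copy.txt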

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736061 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-736061 node stop m03: (1.453239659s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736061 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-736061 status: exit status 7 (421.783422ms)

                                                
                                                
-- stdout --
	multinode-736061
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-736061-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-736061-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736061 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-736061 status --alsologtostderr: exit status 7 (417.925276ms)

                                                
                                                
-- stdout --
	multinode-736061
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-736061-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-736061-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 11:08:09.896618   39128 out.go:345] Setting OutFile to fd 1 ...
	I0916 11:08:09.896741   39128 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:08:09.896750   39128 out.go:358] Setting ErrFile to fd 2...
	I0916 11:08:09.896755   39128 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:08:09.896947   39128 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3851/.minikube/bin
	I0916 11:08:09.897107   39128 out.go:352] Setting JSON to false
	I0916 11:08:09.897152   39128 mustload.go:65] Loading cluster: multinode-736061
	I0916 11:08:09.897260   39128 notify.go:220] Checking for updates...
	I0916 11:08:09.897531   39128 config.go:182] Loaded profile config "multinode-736061": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 11:08:09.897547   39128 status.go:255] checking status of multinode-736061 ...
	I0916 11:08:09.897946   39128 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 11:08:09.897999   39128 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 11:08:09.916959   39128 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42707
	I0916 11:08:09.917440   39128 main.go:141] libmachine: () Calling .GetVersion
	I0916 11:08:09.918048   39128 main.go:141] libmachine: Using API Version  1
	I0916 11:08:09.918077   39128 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 11:08:09.918435   39128 main.go:141] libmachine: () Calling .GetMachineName
	I0916 11:08:09.918635   39128 main.go:141] libmachine: (multinode-736061) Calling .GetState
	I0916 11:08:09.920072   39128 status.go:330] multinode-736061 host status = "Running" (err=<nil>)
	I0916 11:08:09.920088   39128 host.go:66] Checking if "multinode-736061" exists ...
	I0916 11:08:09.920370   39128 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 11:08:09.920402   39128 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 11:08:09.935688   39128 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34335
	I0916 11:08:09.936134   39128 main.go:141] libmachine: () Calling .GetVersion
	I0916 11:08:09.936571   39128 main.go:141] libmachine: Using API Version  1
	I0916 11:08:09.936593   39128 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 11:08:09.936943   39128 main.go:141] libmachine: () Calling .GetMachineName
	I0916 11:08:09.937160   39128 main.go:141] libmachine: (multinode-736061) Calling .GetIP
	I0916 11:08:09.939631   39128 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:08:09.940012   39128 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:08:09.940044   39128 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:08:09.940153   39128 host.go:66] Checking if "multinode-736061" exists ...
	I0916 11:08:09.940437   39128 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 11:08:09.940471   39128 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 11:08:09.955335   39128 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38149
	I0916 11:08:09.955719   39128 main.go:141] libmachine: () Calling .GetVersion
	I0916 11:08:09.956172   39128 main.go:141] libmachine: Using API Version  1
	I0916 11:08:09.956194   39128 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 11:08:09.956557   39128 main.go:141] libmachine: () Calling .GetMachineName
	I0916 11:08:09.956727   39128 main.go:141] libmachine: (multinode-736061) Calling .DriverName
	I0916 11:08:09.956893   39128 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 11:08:09.956926   39128 main.go:141] libmachine: (multinode-736061) Calling .GetSSHHostname
	I0916 11:08:09.959583   39128 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:08:09.959973   39128 main.go:141] libmachine: (multinode-736061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:52:21", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:05:28 +0000 UTC Type:0 Mac:52:54:00:c1:52:21 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-736061 Clientid:01:52:54:00:c1:52:21}
	I0916 11:08:09.959992   39128 main.go:141] libmachine: (multinode-736061) DBG | domain multinode-736061 has defined IP address 192.168.39.32 and MAC address 52:54:00:c1:52:21 in network mk-multinode-736061
	I0916 11:08:09.960107   39128 main.go:141] libmachine: (multinode-736061) Calling .GetSSHPort
	I0916 11:08:09.960273   39128 main.go:141] libmachine: (multinode-736061) Calling .GetSSHKeyPath
	I0916 11:08:09.960401   39128 main.go:141] libmachine: (multinode-736061) Calling .GetSSHUsername
	I0916 11:08:09.960538   39128 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061/id_rsa Username:docker}
	I0916 11:08:10.041447   39128 ssh_runner.go:195] Run: systemctl --version
	I0916 11:08:10.047756   39128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 11:08:10.064173   39128 kubeconfig.go:125] found "multinode-736061" server: "https://192.168.39.32:8443"
	I0916 11:08:10.064206   39128 api_server.go:166] Checking apiserver status ...
	I0916 11:08:10.064235   39128 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 11:08:10.078441   39128 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1055/cgroup
	W0916 11:08:10.088180   39128 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1055/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0916 11:08:10.088232   39128 ssh_runner.go:195] Run: ls
	I0916 11:08:10.093115   39128 api_server.go:253] Checking apiserver healthz at https://192.168.39.32:8443/healthz ...
	I0916 11:08:10.097012   39128 api_server.go:279] https://192.168.39.32:8443/healthz returned 200:
	ok
	I0916 11:08:10.097037   39128 status.go:422] multinode-736061 apiserver status = Running (err=<nil>)
	I0916 11:08:10.097046   39128 status.go:257] multinode-736061 status: &{Name:multinode-736061 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 11:08:10.097062   39128 status.go:255] checking status of multinode-736061-m02 ...
	I0916 11:08:10.097393   39128 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 11:08:10.097431   39128 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 11:08:10.112608   39128 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44163
	I0916 11:08:10.113097   39128 main.go:141] libmachine: () Calling .GetVersion
	I0916 11:08:10.113642   39128 main.go:141] libmachine: Using API Version  1
	I0916 11:08:10.113665   39128 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 11:08:10.113945   39128 main.go:141] libmachine: () Calling .GetMachineName
	I0916 11:08:10.114129   39128 main.go:141] libmachine: (multinode-736061-m02) Calling .GetState
	I0916 11:08:10.115573   39128 status.go:330] multinode-736061-m02 host status = "Running" (err=<nil>)
	I0916 11:08:10.115587   39128 host.go:66] Checking if "multinode-736061-m02" exists ...
	I0916 11:08:10.115869   39128 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 11:08:10.115914   39128 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 11:08:10.130898   39128 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42283
	I0916 11:08:10.131308   39128 main.go:141] libmachine: () Calling .GetVersion
	I0916 11:08:10.131738   39128 main.go:141] libmachine: Using API Version  1
	I0916 11:08:10.131760   39128 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 11:08:10.132070   39128 main.go:141] libmachine: () Calling .GetMachineName
	I0916 11:08:10.132253   39128 main.go:141] libmachine: (multinode-736061-m02) Calling .GetIP
	I0916 11:08:10.135088   39128 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:08:10.135562   39128 main.go:141] libmachine: (multinode-736061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:7f:3f", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:06:28 +0000 UTC Type:0 Mac:52:54:00:ab:7f:3f Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:multinode-736061-m02 Clientid:01:52:54:00:ab:7f:3f}
	I0916 11:08:10.135595   39128 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:08:10.135775   39128 host.go:66] Checking if "multinode-736061-m02" exists ...
	I0916 11:08:10.136088   39128 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 11:08:10.136141   39128 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 11:08:10.151239   39128 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46133
	I0916 11:08:10.151623   39128 main.go:141] libmachine: () Calling .GetVersion
	I0916 11:08:10.152081   39128 main.go:141] libmachine: Using API Version  1
	I0916 11:08:10.152104   39128 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 11:08:10.152428   39128 main.go:141] libmachine: () Calling .GetMachineName
	I0916 11:08:10.152621   39128 main.go:141] libmachine: (multinode-736061-m02) Calling .DriverName
	I0916 11:08:10.152789   39128 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 11:08:10.152805   39128 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHHostname
	I0916 11:08:10.155305   39128 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:08:10.155644   39128 main.go:141] libmachine: (multinode-736061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:7f:3f", ip: ""} in network mk-multinode-736061: {Iface:virbr1 ExpiryTime:2024-09-16 12:06:28 +0000 UTC Type:0 Mac:52:54:00:ab:7f:3f Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:multinode-736061-m02 Clientid:01:52:54:00:ab:7f:3f}
	I0916 11:08:10.155666   39128 main.go:141] libmachine: (multinode-736061-m02) DBG | domain multinode-736061-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:ab:7f:3f in network mk-multinode-736061
	I0916 11:08:10.155845   39128 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHPort
	I0916 11:08:10.155968   39128 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHKeyPath
	I0916 11:08:10.156119   39128 main.go:141] libmachine: (multinode-736061-m02) Calling .GetSSHUsername
	I0916 11:08:10.156205   39128 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3851/.minikube/machines/multinode-736061-m02/id_rsa Username:docker}
	I0916 11:08:10.240508   39128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 11:08:10.253845   39128 status.go:257] multinode-736061-m02 status: &{Name:multinode-736061-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0916 11:08:10.253886   39128 status.go:255] checking status of multinode-736061-m03 ...
	I0916 11:08:10.254247   39128 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 11:08:10.254288   39128 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 11:08:10.269826   39128 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41649
	I0916 11:08:10.270239   39128 main.go:141] libmachine: () Calling .GetVersion
	I0916 11:08:10.270744   39128 main.go:141] libmachine: Using API Version  1
	I0916 11:08:10.270766   39128 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 11:08:10.271068   39128 main.go:141] libmachine: () Calling .GetMachineName
	I0916 11:08:10.271255   39128 main.go:141] libmachine: (multinode-736061-m03) Calling .GetState
	I0916 11:08:10.272744   39128 status.go:330] multinode-736061-m03 host status = "Stopped" (err=<nil>)
	I0916 11:08:10.272761   39128 status.go:343] host is not running, skipping remaining checks
	I0916 11:08:10.272769   39128 status.go:257] multinode-736061-m03 status: &{Name:multinode-736061-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.29s)
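Note: in this run `minikube status` exited 7 once a node was stopped while still printing per-node state on stdout, which is what the assertions above key on. A hedged sketch of scripting that check (profile and node names from the log; the trailing echo is my own addition):

	# stop a single worker node by its internal name
	out/minikube-linux-amd64 -p multinode-736061 node stop m03
	# non-zero exit (7 in this run) signals that at least one node is not running
	out/minikube-linux-amd64 -p multinode-736061 status --alsologtostderr || echo "one or more nodes are stopped (exit $?)"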

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (44.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-736061
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-736061-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-736061-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (62.23931ms)

                                                
                                                
-- stdout --
	* [multinode-736061-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19651
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19651-3851/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3851/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-736061-m02' is duplicated with machine name 'multinode-736061-m02' in profile 'multinode-736061'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-736061-m03 --driver=kvm2  --container-runtime=crio
E0916 11:20:08.821177   11203 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-736061-m03 --driver=kvm2  --container-runtime=crio: (43.294609841s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-736061
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-736061: exit status 80 (206.754946ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-736061 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-736061-m03 already exists in multinode-736061-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-736061-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (44.59s)
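Note: both failures above are deliberate. `multinode-736061-m02` is already the machine name of the second node inside the `multinode-736061` profile, so it cannot double as a standalone profile name, and once a stray `multinode-736061-m03` profile exists, `node add` cannot claim that name either. A condensed reproduction using only commands from the log (the comments are my reading of the two errors):

	# create a standalone profile whose name the main profile will later want for a node
	out/minikube-linux-amd64 start -p multinode-736061-m03 --driver=kvm2 --container-runtime=crio
	# MK_USAGE (exit 14): profile name collides with the m02 machine in multinode-736061
	out/minikube-linux-amd64 start -p multinode-736061-m02 --driver=kvm2 --container-runtime=crio
	# GUEST_NODE_ADD (exit 80): node add wants the name multinode-736061-m03, which is now taken
	out/minikube-linux-amd64 node add -p multinode-736061
	# clean up the conflicting profile
	out/minikube-linux-amd64 delete -p multinode-736061-m03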

                                                
                                    
x
+
TestScheduledStopUnix (111.38s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-373139 --memory=2048 --driver=kvm2  --container-runtime=crio
E0916 11:25:08.821203   11203 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-373139 --memory=2048 --driver=kvm2  --container-runtime=crio: (39.816048277s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-373139 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-373139 -n scheduled-stop-373139
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-373139 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-373139 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-373139 -n scheduled-stop-373139
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-373139
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-373139 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0916 11:26:28.277920   11203 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-373139
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-373139: exit status 7 (60.579771ms)

                                                
                                                
-- stdout --
	scheduled-stop-373139
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-373139 -n scheduled-stop-373139
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-373139 -n scheduled-stop-373139: exit status 7 (64.354304ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-373139" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-373139
--- PASS: TestScheduledStopUnix (111.38s)
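Note: the scheduled-stop test exercises three operations: arming a delayed stop, cancelling it, and re-arming it with a short delay until the host reaches Stopped (status then exits 7, "may be ok" per the helper). A sketch of the same flow against the profile from the log; the sleep is my own addition to let the 15s timer fire:

	# arm a stop 5 minutes out, then cancel it before it fires
	out/minikube-linux-amd64 stop -p scheduled-stop-373139 --schedule 5m
	out/minikube-linux-amd64 stop -p scheduled-stop-373139 --cancel-scheduled
	# re-arm with a short delay, wait, then poll the host state until it reports Stopped
	out/minikube-linux-amd64 stop -p scheduled-stop-373139 --schedule 15s
	sleep 20
	out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-373139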

                                                
                                    
x
+
TestRunningBinaryUpgrade (235.97s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3059635926 start -p running-upgrade-682717 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3059635926 start -p running-upgrade-682717 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m9.861613356s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-682717 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-682717 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m44.342286747s)
helpers_test.go:175: Cleaning up "running-upgrade-682717" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-682717
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-682717: (1.13731856s)
--- PASS: TestRunningBinaryUpgrade (235.97s)
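Note: the running-upgrade scenario starts a cluster with an older released binary, then restarts the same profile with the binary under test, so the newer version has to adopt a live cluster rather than create one. The flow, using the binary paths recorded above (the /tmp name is whatever old release the test downloaded for this run):

	# 1. bring the cluster up with the old released binary
	/tmp/minikube-v1.26.0.3059635926 start -p running-upgrade-682717 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
	# 2. restart the same profile with the binary under test; it must reconcile the existing VM
	out/minikube-linux-amd64 start -p running-upgrade-682717 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio
	# 3. clean up
	out/minikube-linux-amd64 delete -p running-upgrade-682717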

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-668924 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-668924 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (78.096807ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-668924] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19651
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19651-3851/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3851/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
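Note: this failure is the expected behaviour under test: `--no-kubernetes` and `--kubernetes-version` are mutually exclusive, so minikube exits with MK_USAGE (status 14) instead of starting. The invalid and valid forms exercised in this group look like this (profile name from the log):

	# invalid: a Kubernetes version makes no sense when Kubernetes is disabled
	out/minikube-linux-amd64 start -p NoKubernetes-668924 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 --container-runtime=crio
	# valid: plain --no-kubernetes, or omit the flag entirely to get a normal cluster
	out/minikube-linux-amd64 start -p NoKubernetes-668924 --no-kubernetes --driver=kvm2 --container-runtime=crio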

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (98.7s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-668924 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-668924 --driver=kvm2  --container-runtime=crio: (1m38.443962633s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-668924 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (98.70s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (42.93s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-668924 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-668924 --no-kubernetes --driver=kvm2  --container-runtime=crio: (41.627022625s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-668924 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-668924 status -o json: exit status 2 (260.994125ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-668924","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-668924
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-668924: (1.038051916s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (42.93s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (50.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-668924 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-668924 --no-kubernetes --driver=kvm2  --container-runtime=crio: (50.224891798s)
--- PASS: TestNoKubernetes/serial/Start (50.22s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-668924 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-668924 "sudo systemctl is-active --quiet service kubelet": exit status 1 (190.377543ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)
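Note: the verification leans on systemd exit codes: `systemctl is-active --quiet` exits non-zero when the unit is inactive (status 3 here), and `minikube ssh` surfaces that as a non-zero exit of its own, which is exactly what the assertion expects when kubelet is absent. A minimal check, assuming the NoKubernetes-668924 profile from the log:

	# exits non-zero when kubelet is not an active systemd unit inside the guest
	out/minikube-linux-amd64 ssh -p NoKubernetes-668924 "sudo systemctl is-active --quiet service kubelet"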

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
E0916 11:30:08.820305   11203 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/addons-001438/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (14.318372229s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (13.681198233s)
--- PASS: TestNoKubernetes/serial/ProfileList (28.00s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-668924
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-668924: (1.293804776s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (21.95s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-668924 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-668924 --driver=kvm2  --container-runtime=crio: (21.954543283s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (21.95s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-668924 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-668924 "sudo systemctl is-active --quiet service kubelet": exit status 1 (197.894683ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.4s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.40s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (122.81s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2009127105 start -p stopped-upgrade-153123 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0916 11:31:11.347342   11203 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:31:28.278372   11203 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3851/.minikube/profiles/functional-553844/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2009127105 start -p stopped-upgrade-153123 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m19.210933833s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2009127105 -p stopped-upgrade-153123 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2009127105 -p stopped-upgrade-153123 stop: (2.134488765s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-153123 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-153123 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (41.465861973s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (122.81s)
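Note: unlike the running-upgrade case above, this scenario stops the cluster with the old binary first, so the binary under test has to boot and upgrade a dormant VM rather than adopt a live one. The flow, with the old binary path taken from the log:

	# old release: start, then stop the cluster
	/tmp/minikube-v1.26.0.2009127105 start -p stopped-upgrade-153123 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
	/tmp/minikube-v1.26.0.2009127105 -p stopped-upgrade-153123 stop
	# binary under test: start the stopped profile, then confirm logs are still retrievable
	out/minikube-linux-amd64 start -p stopped-upgrade-153123 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 logs -p stopped-upgrade-153123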

                                                
                                    
x
+
TestPause/serial/Start (68.38s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-902210 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-902210 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m8.381317545s)
--- PASS: TestPause/serial/Start (68.38s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.82s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-153123
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.82s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (62.12s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-902210 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-902210 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m2.094533834s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (62.12s)

                                                
                                    
x
+
TestPause/serial/Pause (1.33s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-902210 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-902210 --alsologtostderr -v=5: (1.327678698s)
--- PASS: TestPause/serial/Pause (1.33s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.27s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-902210 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-902210 --output=json --layout=cluster: exit status 2 (265.177625ms)

                                                
                                                
-- stdout --
	{"Name":"pause-902210","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-902210","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.27s)
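Note: with `--output=json --layout=cluster`, a paused cluster is reported with StatusCode 418 ("Paused") and kubelet as 405 ("Stopped"), and the command itself exits 2, which is what the non-zero-exit assertion above accepts. One way to read that output (the jq pipeline is my own addition, not part of the test):

	# exit status 2 signals the paused state; the JSON on stdout carries per-component codes
	out/minikube-linux-amd64 status -p pause-902210 --output=json --layout=cluster | jq '.StatusName, .Nodes[0].Components'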

                                                
                                    
x
+
TestPause/serial/Unpause (1.18s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-902210 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-linux-amd64 unpause -p pause-902210 --alsologtostderr -v=5: (1.175828656s)
--- PASS: TestPause/serial/Unpause (1.18s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.79s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-902210 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.79s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (1.03s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-902210 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-902210 --alsologtostderr -v=5: (1.032094305s)
--- PASS: TestPause/serial/DeletePaused (1.03s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (4.83s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (4.828715892s)
--- PASS: TestPause/serial/VerifyDeletedResources (4.83s)
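Note: taken together, the pause group walks the full lifecycle: pause, verify, unpause, pause again, delete, and finally confirm the profile no longer appears in the profile list. Condensed from the commands recorded in this group (profile name from the log):

	out/minikube-linux-amd64 pause -p pause-902210 --alsologtostderr -v=5
	out/minikube-linux-amd64 unpause -p pause-902210 --alsologtostderr -v=5
	out/minikube-linux-amd64 pause -p pause-902210 --alsologtostderr -v=5
	out/minikube-linux-amd64 delete -p pause-902210 --alsologtostderr -v=5
	# the deleted profile should be absent from this listing
	out/minikube-linux-amd64 profile list --output json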

                                                
                                    

Test skip (34/228)

x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:879: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    